
Author: MD
Production: Bright Company
Recently, the well-known American podcast Invest Like the Best once again interviewed Marc Andreessen, co-founder of Andreessen Horowitz. In the interview, Marc and host Patrick discussed in depth how AI is reshaping technology and geopolitics, as well as DeepSeek's open-source AI and its significance in the technological competition among major powers. They also shared their views on the evolution of the global power structure and the broader transformation of the venture capital industry.
"Bright Company" used AI tools to promptly distill the core content of the interview. For the full text, please see the "Original Link" at the end of the article.
The following is the interview content (abridged):
Patrick: Marc, I think we have to start with the core question. Can you talk about your views on DeepSeek's R1?
Marc: There are many dimensions to this. (I think) the United States is still the recognized scientific and technological leader in the field of artificial intelligence. Most of the ideas in DeepSeek are derived from work done in the United States or Europe in the past 20 years, or even surprisingly 80 years ago. The initial research on neural networks was carried out as early as the 1940s in American and European research universities.
So from the perspective of knowledge development, the United States is still far ahead.
But DeepSeek has done a very good job of applying this knowledge. They also did a remarkable thing, which was to make it available to the world in open source form. That's actually quite striking, because there's been a reversal here. You have companies like OpenAI in the US that are basically completely closed. Part of Elon Musk's lawsuit against OpenAI is asking them to change the company's name from OpenAI to Closed AI. OpenAI's original vision was that everything would be open source, but now everything is closed. Other large AI labs, like Anthropic, are also completely closed. In fact, they've even stopped publishing research papers and treat everything as proprietary.
The DeepSeek team, for their own reasons, actually lived up to the promise of true open source. They released the code for their LLM (called V3) and their reasoner (called R1), and they published a detailed technical paper explaining how they built them, which basically provides a roadmap for anyone else who wants to do similar work. So it's already public.
There's a false narrative out there that if you use DeepSeek, you're giving all your data to the Chinese. That's true if you use the service on the DeepSeek website, but you can download the code and run it yourself. Let me give you an example: Perplexity is a US company, and you can use DeepSeek R1 on Perplexity, completely hosted in the US. Microsoft and Amazon now offer cloud versions of DeepSeek that you can run on their platforms, and obviously both are US companies using US data centers.
This is very important. You can download this system now and actually run it on $6,000 worth of hardware at home or at work. It's comparable in power to the most cutting-edge systems from companies like OpenAI and Anthropic.
These companies invested a lot of money to build their systems. Today, you can buy it for $6,000 and have complete control. If you run it yourself, you have complete control. You have full transparency into what it's doing, you can modify it, you can do all kinds of things with it.
It also has a really cool feature called distillation. You can take a large model that requires $6,000 of hardware and compress it down to create smaller versions of the model. There are people out there who have created smaller versions of the model that are optimized so that you can run them on a MacBook or an iPhone. They're not as smart as the full version, but they're still pretty smart. You can create customized, domain-specific, distilled versions that are really good at doing specific things.
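The distillation idea Marc describes can be sketched in a few lines. The sketch below is illustrative, not DeepSeek's actual training code: the logits and the temperature value are made-up numbers, and real distillation would minimize this loss with gradient descent over a large dataset. The core mechanism, though, is just this: the small "student" model is trained to match the large "teacher" model's softened output distribution.

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize to a probability distribution.
    # Higher temperature "softens" the distribution, exposing more of the
    # teacher's relative preferences between wrong answers.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence KL(teacher || student) over temperature-softened outputs:
    # the quantity a student minimizes to mimic the teacher's soft targets.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]        # hypothetical teacher logits for 3 classes
good_student = [3.9, 1.1, 0.4]   # closely mimics the teacher
bad_student = [0.5, 4.0, 1.0]    # disagrees with the teacher

# A student that matches the teacher's distribution has a much lower loss.
print(distillation_loss(teacher, good_student) < distillation_loss(teacher, bad_student))
```

The temperature parameter is the standard trick here: at temperature 1 the teacher's distribution is nearly one-hot, while a higher temperature reveals how the teacher ranks the alternatives, which is the extra signal the smaller model learns from.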
This is a huge step forward in making reasoning with large models, including R1-style reasoning in programming and science, much more accessible. Six months ago, this was esoteric, extremely expensive, and proprietary. Now it's free and available to everyone forever.
Every big tech company, every internet company, every startup, and we've seen dozens if not hundreds of startups this week, is either rebuilding on DeepSeek, integrating it into their products, or studying the techniques it used and applying them to improve existing AI systems.
Mark Zuckerberg from the Meta team recently talked about how the Meta team is tearing down DeepSeek, borrowing ideas completely legally because it's open source, and making sure that the next version of Llama is at least as good as DeepSeek in reasoning power, or better. This really moves the world forward.
The two main points we can learn from this are: first, AI is going to be everywhere. There are a lot of AI risk people, safety people, regulators, officials, governments, the EU, the British, and so on, all of whom want to restrict and control AI, and this basically guarantees that won't happen, which I think is great. It's very much in the free tradition of the Internet. And second, this achieves a roughly 30x cost reduction in reasoning.
Perhaps the last thing to point out is that this shows that reasoning will work. Reasoning will work in any area of human activity as long as you can generate answers that can be checked by technical experts after the fact for correctness.
We will have AI that can do human and superhuman level reasoning, and this will work in the areas that really matter: coding, mathematics, physics, chemistry, biology, economics, finance, law, and medicine.
This basically guarantees that within five years every single person on the planet will have a superhuman level AI lawyer, AI doctor, who is always on call, just as a standard feature on their phone. This will make the world a better, healthier, and more amazing place.
Patrick: But this is also the most volatile, models are outdated in two months. There is a lot of innovation happening at every level of technology. But just looking at this point in time, moving into this new paradigm, if you were writing a column about the winners and losers of all stakeholders, whether it's new application developers, existing software developers, infrastructure providers like Nvidia, open source vs. closed source model companies. Who do you think are the winners and losers after the release of R1?
Marc: If you take a "snapshot" today, then from a zero-sum perspective, the winners at this point in time are all the users, all the consumers, every individual, and every business that uses AI.
There are startups, like companies doing AI legal services, for whom using AI cost 30 times more last week than it does now.
For example, for a company that's building an AI lawyer, if the cost of its key input drops 30 times, that's like the cost of gasoline dropping 30 times when you're driving. All of a sudden you can drive 30 times farther on the same dollar, or you can use the extra spending power to buy more stuff. All of these companies are either going to greatly expand their use of AI in these areas, or they're going to provide services cheaper or for free. So even on a fixed-size pie, it's a fantastic outcome for the users, for the world.
The losers are the companies that have proprietary models, like OpenAI, Anthropic, and so on. You'll notice that both OpenAI and Anthropic have sent out pretty strong, provocative messages over the past week about why this isn't the end of them. There's an old saying in business and politics that when you're explaining, you're losing.
Then the other one is Nvidia. There's a lot of commentary on this, but Nvidia makes the standard AI chip that people use. There are some other options, but Nvidia is what most people use. The profit margins on their chips are around 90%, and the company's stock price reflects that. Nvidia is one of the most valuable companies in the world. One of the things the DeepSeek team showed in their paper is how to use cheaper chips, still Nvidia chips, but used much more efficiently. Part of the 30x cost reduction is that you simply need fewer chips. And by the way, China is building out its own chip supply chain, and some companies are starting to use Chinese-derived chips, which of course is a more fundamental threat to Nvidia.
So that's the snapshot at a point in time. But your question suggests another way to look at it, which is that over time you want to see an elastic effect. Satya Nadella used the phrase "Jevons paradox." Imagine gasoline: if the price of gasoline drops dramatically, then all of a sudden people drive more. This comes up a lot in transportation planning. Take a city like Austin, which has a lot of traffic, and somebody has the idea to build a new highway next to the existing highway. Within two years the new highway is clogged up too, and maybe it's even harder to get from one place to another. The reason is that lower prices on key inputs induce demand.
If AI suddenly becomes 30 times cheaper, people might use it 30 times more, or, by the way, they might use it 100 times or even 1,000 times more. The economic term for this is called elasticity.
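The elasticity argument is simple arithmetic. The numbers below are hypothetical, chosen only to illustrate the Jevons-paradox point: when unit cost falls 30x, total industry spending is flat only if usage grows exactly 30x; if usage grows 100x or 1,000x, total spending on inference actually rises.

```python
# Illustrative, made-up numbers: what happens to total spend when the
# unit cost of inference drops 30x but usage responds elastically.
old_cost_per_query = 0.30                      # dollars per query (hypothetical)
new_cost_per_query = old_cost_per_query / 30   # after the 30x cost reduction
baseline_queries = 1_000_000                   # queries per month before the drop

old_spend = old_cost_per_query * baseline_queries
for usage_multiplier in (1, 30, 100, 1000):
    new_spend = new_cost_per_query * baseline_queries * usage_multiplier
    print(f"usage x{usage_multiplier}: spend ${old_spend:,.0f} -> ${new_spend:,.0f}")
```

At a 30x usage multiplier the two spend figures are identical; every multiplier above 30 means the industry as a whole grows even though each query got cheaper, which is the scenario Marc sketches for DeepSeek, OpenAI, Anthropic, and Nvidia alike.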
So falling prices equals explosive growth in demand. I think there's a very plausible scenario here, which is that on the other side, as usage explodes, DeepSeek will do very well. And by the way, OpenAI, Anthropic will do very well, Nvidia will do very well, the chipmakers in China will do very well.
And then you'll see a tidal wave effect where the whole industry will explode. We're really just at the beginning of people figuring out how to use these technologies. Reasoning has only started to work in the last four months. OpenAI just released their o1 inference model a few months ago. It's like taking fire down off the mountain and giving it to all of humanity. Most of humanity doesn't use fire yet, but they will. And then, frankly, there's also the old logic of creative destruction: if you're OpenAI or any other company like that, what you did last week is no longer good enough. But then again, that's the way of the world. You have to get better. These things are races. You have to evolve. So it's also been a very powerful catalyst for a lot of incumbent companies to really up their game and become more aggressive.
Patrick: …, it's a very hard thing to accept if a Chinese company is using models that were developed in the United States with heavy investment, and that then led to this technology that's enriched the world. I'd love to hear your reactions from both perspectives.
Marc: Yeah, so there are some real issues here. There's an irony to this argument, and you do hear it. The irony, of course, is that OpenAI didn't invent the Transformer. The core algorithm for large language models is called the Transformer.
It wasn’t invented at OpenAI, it was invented at Google. Google invented it, published a paper on it, and then, by the way, they didn’t productize it. They continued to work on it, but they didn’t productize it because they thought it might be unsafe for “safety” reasons. So they let it sit on the shelf for five years, and then the OpenAI team figured it out, picked it up, and moved on.
Anthropic was a fork of OpenAI. Anthropic didn’t invent the Transformer either. So both of these companies, and every other American lab that’s working on large language models, and every other open source project, are building on things that they didn’t create and develop themselves.
By the way, Google invented the Transformer in 2017, but the Transformer itself is based on the idea of a neural network. The idea of a neural network goes back to 1943. So, 82 years ago is actually when the original neural network paper was published, and the Transformer was built on 70 years of research and development, much of which was funded by the federal government and European governments in research universities.
So it's a very long lineage of intellectual ideas and development, and most of the ideas that went into all of these systems were not developed by the companies that are currently building them. No company sitting here, including our own, has any particular moral claim that it built this from scratch and should have complete control. That's simply not true.
So, I would say that arguments like these are made out of frustration in the moment. And by the way, these arguments are also moot, because China has already done it; it's already out, it's already happened. There's a debate about copyright right now. If you talk to experts in the field, a lot of people have been trying to understand why DeepSeek is so good. One theory, and this is unproven, but one that experts take seriously, is that the Chinese company may have trained on data that the American companies didn't use.
What's particularly surprising is that DeepSeek is so good at creative writing. DeepSeek is probably the best AI in the world right now for creative writing in English. That's a little bit strange because the official language of China is Chinese. There are some really good Chinese novelists in English, but generally speaking, you might think that the best creative writing would come from the West. And DeepSeek is probably the best right now, which is shocking.
So one of the theories is about what DeepSeek may have been trained on. For example, there are websites like Libgen, which are basically giant internet repositories full of pirated books. I certainly don't use Libgen myself, but I have a friend who uses it a lot. It's like a superset of the Kindle store. It has every digital book, in PDF format, that you can download for free. It's like The Pirate Bay for books.
American labs might not feel they can simply download all the books from Libgen and train on them, but maybe Chinese labs feel they can. So there might be a differential advantage there. That said, people need to be careful, because there's an unresolved copyright battle here in which some publishing companies basically want to prevent generative AI companies like OpenAI, Anthropic, and DeepSeek from being able to use their content.
One argument says that the material is copyrighted and can't be used as you please. Another argument basically says that when AI trains on books, you're not copying books, you're reading books, and it's legal for AI to read books.
You and I are allowed to read books, by the way. We can borrow books from libraries. We can pick up friends' books. These actions are all legal. We are allowed to read books, we are allowed to learn from books, and then we can go on with our daily lives and talk about the ideas we learned in the books. Another argument is that training AI is more like a human reading a book, not stealing it.
And then there's the practical reality that if ... their AI can be trained on all the books, and if American companies end up being legally prohibited from training on books, then the United States might lose the race in AI.
From a practical perspective, that could be a death blow: they win and we lose. There might be some kinks in this whole argument, though. DeepSeek doesn't disclose the data they train on. So when you download DeepSeek, you don't get the training data, you get what are called weights: a neural network that has been trained on the training material. And it's very difficult, if not impossible, to look at the weights and deduce the training data.
By the way, Anthropic and OpenAI also don't disclose the data they train on, and there's intense speculation in the field about what is and isn't in the OpenAI training data. They consider it a trade secret and won't disclose it. So DeepSeek may or may not differ from these companies in its training data and methods. We don't know.
We don't know exactly what the OpenAI and Anthropic algorithms are because they are not open source. We don't know how much better or worse they are than the public DeepSeek algorithm.
Patrick: Do you think those closed-source models in the competition, like OpenAI and Anthropic, will eventually end up more like Apple versus Google's Android?
Marc: I support maximizing competition. By the way, this fits with my identity as a venture capitalist. If I were a company founder running an AI company, I would need a very specific strategy with advantages and disadvantages, one that requires trade-offs.
As a venture capitalist, I don't have to do that. I can make multiple contradictory bets. This is what Peter Thiel calls deterministic optimism versus non-deterministic optimism. The founder, the CEO, has to be a deterministic optimist. They have to have a plan, and they have to make difficult trade-offs to achieve that plan. Venture capitalists are non-deterministic optimists. We can fund a hundred companies with a hundred different plans, contradictory assumptions.
The nature of my job is that I don't have to make the kind of choices that you just described. And then, that makes it easy for me to make a philosophical argument that I personally agree with sincerely, which is that I support maximum competition. So, going one level deeper, that means that I support free markets, maximum competition, and maximum freedom.
Essentially, if you can have as many smart people as possible come up with as many different approaches as possible and compete with each other in the free market, see what happens. Specifically for AI, that means I support the big labs growing as fast as they can.
I support OpenAI and Anthropic 100% doing whatever they want to do, launching whatever products they want to launch, and growing as hard as they can. As long as they don't get preferential policy treatment, subsidies, or support from the government, they should be able to do whatever they want as a company.
Of course, I support startups as well. We're certainly actively funding AI startups of all sizes and types. So, I want them to grow, and then I want open source to grow, in part because I think if things are in open source, even if it means that some companies with business models can't work, the benefits to the world and the industry as a whole are so great that we'll find other ways to make money. AI will become more ubiquitous, cheaper, and more accessible. I think that will be a great outcome.
And then another very critical reason for open source is that without it, everything becomes a black box owned and controlled by a handful of companies that could end up colluding with the government, and we can have a whole discussion about that. You need open source to be able to see what's going on inside the box.
By the way, you also need open source for academic research and for teaching. Go back two years, before Meta released Llama, before Mistral in France, before DeepSeek: there were no foundational open-source LLMs.
But before these open-source models came along, there was a crisis in the university system: researchers at places like Stanford, MIT, and Berkeley didn't have the funding to buy billions of dollars' worth of Nvidia chips and really compete in the AI field.
So if you talked to computer science professors two years ago, they were very worried. The first concern was that my university didn't have enough funding to compete in the AI field and stay relevant. And then the other concern is that all the universities combined don't have enough money to compete, because nobody can keep up with the funding capabilities of these large companies.
Open source puts the universities back into competition. What that means is that if I'm a professor at Stanford, MIT, Berkeley, or any state school, whether it's the University of Washington or somewhere else, I can now teach using the Llama code, the Mistral code, or the DeepSeek code. I can do research, I can actually make breakthroughs. I can publish my research and people can actually understand what's going on.
Then every new generation of kids that come to college and take a computer science class will be able to learn how to do this, whereas they wouldn't be able to do it if it was a black box. We need open source just as much as we need free speech, academic freedom, and research freedom.
So my model is basically, you have big companies, small companies, and open source competing against each other. That's what happened in the computer industry. It worked well. That's what happened in the Internet industry. It worked well. I believe that will happen in AI, and I think it will work very well.
Patrick: Is there a limit to wanting maximum evolutionary rate and maximum competition? Maybe there is. If I say, we know that the best stuff is made in China, ..., is there a situation where you say, yes, I want maximum evolution and competition, but the national interest somehow overrides the desire for maximum evolutionary rate and development?
Marc: This argument is a very real argument. It's been made frequently in the AI space. In fact, as we sit here today, there are two things. First, there are actually restrictions on Western companies and American companies selling cutting-edge AI chips to China. For example, Nvidia today cannot actually legally sell its cutting-edge AI chips to China. We live in a world where this decision has been made and this policy has been implemented.
Then the Biden administration issued an executive order, which I think has now been rescinded, that would have imposed similar restrictions on software. That's a very active debate, and another round of it is going on in Washington, D.C., with the DeepSeek incident.
And then, when you get into policy debates, you have a classic situation: there's a rational version of the argument, which is what's in the national interest from a theoretical perspective, and then there's a political version of the argument, which is what the political process actually does to the rational argument. Let me put it this way: we all have a lot of experience watching the rational argument meet the political process, and it's usually not the rational argument that wins. It gets processed through the political machine, and what comes out is usually not what you initially thought you were going to get.
And then there's a third factor we always need to talk about, which is the corrupting influence of particularly large corporations. If you're a large company and you see what's happening with Chinese companies and the threat from open source, of course you're going to try to use the U.S. government to protect yourself. Maybe it's in the national interest, maybe it's not, but you're certainly going to push for it either way. That's what makes this debate complicated.
You can't sell cutting-edge AI chips to China. It certainly hinders them in some ways; there are things they won't be able to do. Maybe that's a good thing, because you've decided it's in the national interest. But let's look at three other interesting consequences. One consequence is that it gives Chinese companies a huge incentive to figure out how to do things on cheaper chips. That was a big part of the DeepSeek breakthrough: they figured out how to use legal, cheaper chips to do what American companies do with bigger chips. That's one reason it's so cheap. One of the reasons you can run it on $6,000 worth of hardware is that they invested a lot of time and effort into optimizing the code to run efficiently on cheaper chips that aren't sanctioned. You've forced an evolutionary response.
So that's the first response, and maybe it's already backfired in some ways. The second consequence is that you've incentivized the Chinese state and private sector to develop a parallel chip industry. So if they know they can't get American chips, then they're going to develop it. They're doing that now. They have a national program to build their own chip industry so that they're not dependent on American chips.
So from a counterfactual perspective, maybe they'll buy American chips. Now they're going to figure out how to make them themselves. Maybe in five years they'll be able to do that. But once they get to a position where they can make them themselves, then we'll have a direct competitor in the global market that we wouldn't have had if we just sold them chips. And by the way, at that point, we won't have any control over their chips. They have full control. They can sell below cost, they can do whatever they want.
Patrick: How do you think all of this will affect capital allocation? I'm most interested in how your firm, Andreessen Horowitz (A16Z), will be affected, maybe five years from now. If I think of investment firms as a combination of being able to raise capital, doing great analytical work, and being able to judge people, especially at an early stage, how do you think that function will change with the advent of "o7" (AI reasoning power)?
Marc: I would expect the analytical part to change dramatically. We assume that the best investment firms in the world are going to be very good at using this technology to do the analytical work that they do.
That being said, there's a saying that the cobbler's children have no shoes: maybe the venture capital firms investing most aggressively in AI are among those not aggressive enough in applying it themselves. But there are multiple efforts going on within our firm, and I'm very excited about that. Firms like ours need to keep up, so we have to really stay on top of this.
Is there some of this work going on within the industry? Probably not yet; probably not enough. That said, a lot of the people we talk to have a very analytical perspective when it comes to late-stage investing or public market investing. Take the great investor Warren Buffett. I don't know if it's true, but I've always heard that Warren never meets with CEOs.
Patrick: He wants “ham sandwich companies.”
Marc: Yes, yes, he wants companies to be as simple as ham sandwiches. And I think he's a little bit worried about being sucked in by a good story. You know, a lot of CEOs are very charismatic people. They're always described as "great hair, white teeth, polished shoes, and a well-tailored suit." They're very good at sales. You know, one of the things that CEOs are good at is sales, especially selling their own stock.
So if you're Buffett and you're sitting in Omaha, what you do is read the annual report. Companies list everything in their annual report, and they're bound by federal law to make sure it's true. So that's how you analyze it. So do inference models like o1, o3, o7, or R4 do a better job than most investors analyzing annual reports by hand? Probably.
As you know, investing is an arms race, just like everything else. So if it works for one person, it's going to work for everyone. It's going to be an arbitrage opportunity for a while, and then it's going to close and become the standard. So I expect the investment management industry to adopt this technology in this way. It's going to become a standard way of operating.
I think for early stage venture it's a little different. What I'm about to say is probably just wishful thinking on my part. I could be the last Japanese soldier on a remote island in 1948 saying what I'm about to say. I'm going to go out on a limb. But I will say this, look, in the early stage, a lot of what we do in the first five years is actually really deeply assess individuals and then work very deeply with those people.
This is also why venture is so hard to scale, especially [across] geography. Geographic scale experiments often don't work. And the reason for that is that you end up spending a lot of time face to face with these people, not only in the assessment process, but also in the building process. Because in the first five years, these companies are usually not on autopilot yet.
You actually need to work closely with them to make sure they can achieve everything they need to be successful. There are very deep relationships, conversations, interactions, mentorships, and by the way, we learn from them and they learn from us. It's a two-way exchange. We don't have all the answers, but we have a perspective because we see the bigger picture, and they're more focused on the specifics. So there's a lot of two-way interaction.
Tyler Cowen talked about this, I think he called it "project cherry-picking." Of course, "talent scouting" is another version of that. If you look back at any new field in human history, you almost always find this phenomenon: there are unique personalities trying to do something new, and then there's a professional support layer that funds and supports them. In the music industry, it was David Geffen who discovered all the early folk artists and made them into rock stars. In the movie industry, it was David O. Selznick who discovered the early movie actors and made them into movie stars. Or maybe it was in a coffee house, a tavern in Maine 500 years ago, and there was a discussion about which whaling captain was going to go get the whale.
You know, this is Queen Isabella in the palace listening to Columbus's proposal and saying, "That sounds reasonable. Why not?" This alchemy that develops over time, this alchemy that develops between people who are doing new things and the professional support layer that supports and funds these people, has been going on for hundreds, even thousands of years.
You might have seen tribal leaders thousands of years ago, sitting around the fire, when a young warrior came up and said, "I want to lead a hunting party to that area over there to see if there's better prey." And the leader sat by the fire trying to decide whether to agree. So it's a very human interaction. My guess is that this interaction will continue. Of course, having said that, if I ever met an algorithm that was better at this than I am, I would retire immediately. We'll see.
Patrick: You're building one of the largest firms in this space. How have you adapted your company's growth strategy, both practically and strategically, to deal with this new technology?
Marc: A big part of running a venture capital firm, in our view, is there is a set of values and behaviors that you have to have, which we call timeless. For example, respect for entrepreneurs. You need to have a lot of respect for entrepreneurs and the journey that they've been on. You need to deeply understand what they do. You can't just go through the motions.
You build deep relationships. You work with these people for the long term, and by the way, these companies take a long time to build. We don't believe in overnight success. Most of the great companies are built over a 10, 20, 30-year time span. Nvidia is a great example of this. Nvidia was founded more than 30 years ago, and I think one of the original VCs at Nvidia is actually still on the board today. That's a great example of long-term building.
So there's a core set of beliefs and perspectives and behaviors that we don't change, and those are related to what we just mentioned. The other one is the face-to-face interaction thing. You know, these things can't be done remotely, that's one. But the other side of it is you need to stay current because technology changes so quickly, business models change so quickly, competitive dynamics change so quickly.
If anything, the environment has become more complex because you have so many countries now, and you have all these political issues now, which also make things more complex. We used to never really worry about the political system putting pressure on our investments until about eight years ago. And then about five years ago, that pressure really intensified. But in the first ten years of our company, and the first 60 years of venture capital, it was never a big thing, but now it is.
So we need to adapt. We need to engage in politics, which we didn't do before. Now we need to adapt, we need to figure out that maybe AI companies are going to be very fundamentally different. Maybe they're going to be organized completely differently. Or as you said, maybe software companies are going to operate completely differently.
One of the questions we ask ourselves a lot is, for example, what is the organizational structure of a company that really leverages AI? Is it going to be similar to existing organizational structures, or is it actually going to be very different? There's no single answer to that, but we're thinking about it a lot.
So one of the delicate balancing acts that we do every day is try to figure out what's timeless and what's relevant. That's a big part of how I think about the company conceptually, that we need to navigate between those two and make sure we can differentiate between them.
Patrick: Your company is now very large, and in some ways it resembles a firm like KKR or Blackstone. You and Ben [Ben Horowitz] were both experienced founders when you started this company. Similarly, Schwarzman had never really invested before he started Blackstone, and look at how it has evolved. It seems like these founder-led asset management firms eventually grow into really large and ubiquitous platforms. You have vertical businesses that cover most of the exciting frontiers of technology. Do you think there is some truth to that view? Will the best capital allocation platforms be built more by founders than by investors?

Marc: Yeah, there are a few points. First of all, I think there is some truth to this observation. In the industry, a lot of investment operations are called partnerships, and a lot of venture capital firms operate this way. Historically, it was a small group of people sitting in a room, bouncing ideas off each other, and then making investments. By the way, they don't have a balance sheet. It's a private partnership. They pay out profits at the end of each year as compensation. That's the traditional venture capital model.
In the traditional venture capital model, you have six general partners (GPs) sitting around a table running the operation, with a few assistants. But the point is, it's completely based on people. And by the way, you actually find that in most cases, the partners don't like each other very much.
Mad Men shows this very well. Remember in Mad Men, in season 3 or 4, the partners left to start their own firm, and they actually didn't like each other; they just knew they needed to band together and start a company. That's how a lot of firms work. So it's a private partnership, and that's all it is.
But then what you see is that these companies are very difficult to sustain. They have no brand value. They have no underlying enterprise value. They are not a business. In this model, when the original partners are ready to retire or do something else, they hand it over to the next generation. Most of the time, the next generation can't sustain it. Even if they can, there's no underlying asset value, and they have to hand it off to a third generation. It might fail in the third generation, and then it ends up on Wikipedia: "Yeah, this company existed, and then it disappeared, and other companies took its place, like ships passing in the night."
So that's the traditional way it works. By the way, if you're trained in traditional investing, you're trained in the investing part, but you're never trained in how to build a business. So, it's not your natural forte, you don't have that skill or experience, so you're not going to do it. Many investors have operated that way for a long time as investors and made a lot of money. So, it can work very well.
The other way is to build a company, build a business, build something that has enduring brand value. You mentioned companies like Blackstone and KKR, these huge public companies. The same is true of Apollo. You probably know that the original banks were actually private partnerships: a hundred years ago, Goldman Sachs and JPMorgan were more like small venture capital partnerships than the giants they are today. But over time their leaders transformed them into these huge businesses, which are also large public companies.
So that's the other way: build a franchise. To do that, you need a theory of why the franchise should exist, a conceptual theory of why it makes sense. And then, yes, you need business skills. By the time you're running a business, it's like running any other business: it has an operating model, an operating rhythm, management capabilities, employees, multiple layers, and internal division of labor and specialization.
Then you start thinking about expansion, and over time you start thinking about underlying asset value, the idea that the value of this thing is not just the people who happen to be there at the moment. We're not eager to distribute profits, and we're not in a rush to go public or anything, but one of the big things we're trying to do is build something that has that kind of durability.
Patrick: What new and different things do you hope the firm will be doing in 10 years that it isn't doing yet? And are there ways in which you are determined the firm will never evolve into a traditional large asset manager?
Marc: We evolve rapidly in what we invest in, what the company does, the model, and the background of the founders, and these things are always changing. For example, there has been a consensus in the venture capital community for 60 years that you would never support a researcher starting a company to do research. He would just do research, run out of money, and you would end up with nothing.
Yet many of the top AI companies today were founded by researchers. This is an example of how some supposedly "timeless" rules have to be adjusted to changing times. We need to stay highly adaptable to these changes, and as they happen, the help and support that the firm provides in order to succeed will change with them.
One of the most significant changes in our firm, and I mentioned it before, is that we now have a large and increasingly sophisticated political operations department. Four years ago, we had no political presence. Today, it has become a significant part of our business that we never expected.
I am sure that in another 10 years, we will not only be investing in areas that we cannot imagine today, but we will have operating models that we cannot imagine today. So we are completely open to changes in these areas. However, there are some core values that I hope will remain unchanged in the next 10 years because they are well thought out and the foundation of our firm.
But what I always emphasize to our team members and limited partners is that we are not pursuing scale for scale's sake. Many investment firms, when they reach a certain size, prioritize expanding assets under management from billions to hundreds of billions or even trillions of dollars. That approach is often criticized as being more about collecting management fees than about investment performance. That is not our goal.
The only reason we scale is to support the companies we want to help founders build. When we scale, it is because we believe it helps us achieve that goal.
However, I must emphasize that the core of our firm will always be early-stage venture capital, no matter how big we get, even as we run a growth fund that can write larger checks (some AI companies do need a lot of money). We did not set up a growth fund from the beginning; we built it gradually as market demand and our portfolio companies grew.
But the core business will always be early-stage venture capital. This can be confusing from the outside, because we manage a lot of money. Why would I, as the founder of an early-stage startup, trust you to spend time with me? You, Andreessen Horowitz, write later-stage checks of hundreds of millions of dollars, and you only invested $5 million in my Series A. Will you still spend time with me?
The reason is that the core of our firm will always be early-stage venture capital. From a financial perspective, the return opportunities for early-stage investments are comparable to those for later-stage companies, which is the characteristic of startups. But more importantly, all of our knowledge, relationships, and what makes our firm unique comes from the deep insights and connections we have in the early stages.
So, I always tell people that if the situation forces us to make sacrifices and the world is in trouble and we have to make sacrifices, the early-stage venture capital business will never be sacrificed. It will always be the core of the firm. This is also why I spend a lot of time working with early-stage founders. On the one hand, it's very interesting; on the other hand, this is also where the most learning is.
Patrick: If we think about the shifting global power structures, ..., which centers of power are you most concerned about shifting, either in terms of gaining power or losing power?
Marc: The Machiavellians. I'm sure you've probably had a dozen people recommend this book on your show. It's one of the greatest books of the 20th century. It lays out a theory of political power, social and cultural power. There's a key idea in the book that I'm seeing everywhere right now, which is the idea of elites and anti-elites.
The idea is this: Basically, democracy itself is a myth. You're never going to have a completely democratic society. And by the way, the United States is certainly not a democracy, it's a republic. But even those “democratic” systems that work well, they tend to have a republican quality, lowercase “r” republican. They tend to have a parliament, or they have a House of Representatives and a Senate, or they have some kind of representative institutions. They tend to have a representative institution.
The reason for this is a phenomenon described in the book called the “Iron Law of Oligarchy,” which is basically this: The problem with direct democracy is that the masses can’t organize. You can’t really get 350 million people to organize to do anything. There are just too many people.
So, in basically every political system in human history, you have a small, organized elite governing a large, unorganized mass. You start with the earliest hunter-gatherer tribes all the way up to the United States and every other political system in the modern era, whether it’s the Greeks or the Romans or every empire, every country in history.
So, a small, organized elite governing a large, unorganized mass. This relationship is fraught with danger because the unorganized masses will defer to the elites for a time, but not necessarily forever. If the elites become oppressive to the masses, the masses far outnumber the elites. At some point, they may show up with torches and spears. So, there is tension in this relationship. Many revolutions have happened because the masses decided that the elites no longer represent them.
Our society is no exception. We have a large, unorganized mass class. We have a very small, organized elite class. The United States…has set up a system where we have two elites. We have a Democratic elite class and a Republican elite class. By the way, there is a large overlap between these two elite classes, and some people actually call it a "single party." Perhaps these elite classes have more in common with each other than they do with the masses.
For a long time, we had a Republican elite class whose policies were ultimately represented by the Bush family, and a Democratic elite class whose policies were ultimately represented by Obama. Over the last decade, there has been a rebellion within the elites on both sides of the aisle in the United States. This is actually the key point in The Machiavellians: change does not usually come from the masses going directly against the elites. What happens is the emergence of a new anti-elite that tries to replace the current elite.

My reading of current affairs is that, generally speaking, the elites currently running the world are being found to be doing a poor job. We can get into why later. But if you look at the approval ratings of political leaders and of institutions, all of that is going down. Everywhere in the world, if you're an incumbent institution, an incumbent newspaper, an incumbent television network, an incumbent university, an incumbent government, your approval ratings are generally a disaster. People are basically saying the elites in power are failing us.
Then there are these anti-elites who say, "Oh, I know I have a better way to represent the masses, I have a better way to take over." My new anti-elite movement is supposed to replace the current elite movement, like what's happening in the Democratic Party. That was Bernie Sanders in 2016, it was AOC and the whole progressive wave. And on the Republican side, it's obviously Trump and his MAGA movement and everything that it stands for.
But by the way, this dynamic is also happening in the UK. The Conservative Party has collapsed, and now you have this Reform Party, you have Nigel Farage, who is very threatening. You have Jeremy Corbyn, who is also an anti-elite from the left.
It's the same in Germany. In fact, just this week, something very dramatic happened there: the so-called "far right" party, the AfD, is rising rapidly. It has a leader named Alice Weidel, and for the first time in 50 years or more of German political history, the Christian Democratic Union (CDU) actually cooperated with the AfD on something. All of a sudden, the AfD became a viable competitor. They're an anti-elite trying to take over the right wing of the German political system.
So basically, wherever you go in the world, there's an anti-elite showing up and saying, "Oh, I can do better." It's a fight between elites. The masses are aware of it, they're watching the democratic society, and they ultimately make the decision because they decide who they're going to vote for.
That's why Republican voters decided that they were going to vote for Trump instead of Jeb Bush. It's a case of the anti-elite beating the elite.

This actually ties into a very interesting criticism of Trump from the existing elites: "Oh, he's not a man of the people. He's a super-rich billionaire who lives in a golden penthouse and has people drive him everywhere. If you're a farmer in rural Kentucky or Wisconsin, you shouldn't think he's your man." But the point was never that Trump was a man of the people. The point was that Trump was an anti-elite who could better represent the people. That's the basis of his entire campaign.

And the same thing is true in the media, by the way. Everything you're describing is exactly what's happening in the media. The elite media has dominated for 50 years: television news, cable news, newspapers, and the big-name magazines. Now you have the anti-elite. The anti-elite is you, Patrick, and Joe Rogan, and many more people.
By the way, if you look at the numbers, it's very clear that the masses, the viewers, the readers are leaving the old media and moving to the new media. The existing elites are very angry about this. They're angry and writing all these negative articles about you guys, saying that you're all a bunch of white supremacists and that this whole thing is terrible. Like, this is the way of the world. So we're in the middle of all this. I don't know if "transition" is the right term. It's more like a big battle between the old elite and the new elite.
Patrick: What were the initial seeds, over the last generation, of the decline of the elites that led to those 11% approval ratings? What do you primarily attribute that to?
Marc: There are two theories. One theory is that these approval ratings are wrong, and the other theory is that these approval ratings are right. By "wrong," I mean that these approval ratings are being measured correctly, but people are giving the wrong answers.
If you're the head of CNN or Harvard or you're the head of any of those things and your approval rating is only 11% ... By the way, Gallup has been doing a really amazing survey for 50 years called "Trust in Institutions." You can Google "2024 Gallup Trust in Institutions Survey" and you'll see some really spectacular graphs and you'll see that trust in institutions basically peaked in the late 1960s and early 1970s and then it's been going down.
This, by the way, predates the internet. It's interesting that it's been blamed on the internet, but it predates the internet. So this is a phenomenon that started to develop in the 1970s and it's been accelerating. And by the way, these approval ratings have been declining even faster since 2020.
They're just sliding down like this and then they just plummet after 2020. Network news, I don't know what the numbers are. It's in the single digits, and people just don't believe it anymore. They don't believe what's being said on the TV news anymore. And by the way, the ratings are going down the same way.
So one theory is, if you're the head of NBC News or CNN or Harvard, your theory might be, "Oh, people are wrong. People are misled, they're deceived, they're deceived by populists and demagogues, they're deceived by disinformation." That's why the idea of "disinformation" has become so popular. … People have been deceived by malicious actors, by populists and demagogues, and it's just a matter of time until we explain to people that they've been deceived. They'll start to believe us again.
So, that's one theory. The other theory is that the elites really are corrupt: they're dysfunctional and they're no longer delivering. Under that theory, these declines in approval ratings are correct, because every time you look at Congress, they're just spending your money on all kinds of crazy things without a care in the world. If you go to CNN or NBC News, they're always lying to you about a thousand different things. If you go to Harvard, they teach you about racial communism, that America is evil, blah, blah, these crazy things.
In this theory, people are right; people have seen through these elites. The elites have basically been in power for too long, have too much power, haven't been scrutinized enough, haven't faced enough competitive pressure, have grown corrupt in place, and don't deliver anymore. The reality is probably a bit of both. It's easy for the next demagogue to show up, start throwing rocks at the people in power, and say anything.
If you're a person who doesn't have political power today but wants it, the easiest thing to do is to show up and start yelling that the current elites are corrupt. Maybe that's a little bit true, demagoguery kind of works, or whatever it is, but ... but I think a lot of it is because the elites are corrupt.
My version is pretty straightforward, and Burnham talks about this in the book. He talks about the "circulation of elites." He says that in order for an elite to really stay healthy and productive and not corrupt, it needs a constant infusion of new talent, and it brings that talent in through a process of meritocracy.
So what it will do is identify promising young talent and invite them into the elite. It does that for two reasons. One is self-renewal. The other is that those are exactly the people most likely to become the anti-elite, so it also discourages future competition. My experience started when I was 22: "Oh, hey, Marc, we really want you to come to Davos. We really want you to come to Aspen. We really want you to come to New York for this big conference. We really want you to come to the New York Times dinner party. We want you to hang out with the journalists." And that's what I did for 25 years. It was like, "Oh, this sounds great. These are the best people in the world. They run everything. They have the best degrees from the best schools. They hold all the positions of power. They like me. They think I'm great." They kept talking me up, and I had come from the cornfields of Wisconsin. I arrived, I entered the elite. All I had to do was never argue with anything: agree with everything said in the New York Times, agree with everything said at Davos, vote for the candidates I was supposed to vote for, donate to the candidates I was supposed to donate to, and never, ever stray off the rails. And then you're part of the elite.
I have a lot of peers who have done that. Some are now the largest Democratic donors in the world, and they're completely integrated into the elite, and they're there, and they're having a great time, and they think it's all great, it's great. Some think it's great, and maybe it's the right thing to do.
And then some people get to a point where they look around. It's like the story of J.D. Vance. He grew up in Appalachian Ohio, with family roots in rural Kentucky. He ended up at Yale, and he ended up being invited into all these inner circles.
Then he finally looked around and he just said, "Wow, these people are not at all what I thought they were. These people are selfish, corrupt, they're lying about everything, they're engaging in speech suppression, they're very authoritarian, they're looting the public treasury. Oh my God, I've been lied to all my life. These people don't deserve the respect that they have, and maybe there should be a new elite in power." So, that's a lot of the debate that's going on right now. Yeah, I'm a case study.
Patrick: Let's put on a pair of optimistic glasses. You emphasize early-stage venture capital, and you meet all these young, smart people who are about to go build the future. Assume that AI has its most positive impact in all the areas where we can verify the results, and that reasoning has become this powerful.

So what are the other bottlenecks that could hold back the technological revolution we're hoping for? That could be clinical trials in medicine, or anything that progresses more slowly than AI through no fault of AI. We're going to be hungry for progress. But the world of atoms, or surveillance, or clinical trials may be the limiting factor, not intelligence and knowledge. Which bottlenecks are most interesting to you?
Marc: The way I've always thought about technological change is that there used to be three lines on a graph, and now there's four lines. So, one is the rate of technological change, which is one line, and everything is generally getting better. And then every once in a while you'll see these discrete jumps, or something gets dramatically better, like what happened with AI last week.
And then you have another line on top of it, which is social change: when is the world ready for something new. Sometimes the new thing exists before the world is ready, and for some reason it isn't adopted; then five years or fifty years later it suddenly takes off and grows rapidly. So there's a social layer, and then there's a financial layer on top of that: are the capital markets willing to fund it, and can it generate a return?

I think the art of being an entrepreneur or a technology investor is to straddle those three. You're trying to back something where the technology is actually ready, society is ready to adopt it, and you can actually get it funded or take it public. A lot of our day-to-day work is aligning those three curves.

The fourth curve has emerged in the last five years: government. In the last four years, that was the overwhelming answer. It was very strange and unsettling to me when I first encountered it because I'm not used to it. I never thought of us as being politically engaged or partisan; we never tried to go to Washington for favors or subsidies. But we also didn't think we needed to do anything to avoid being stepped on. And then all of a sudden this happened.
Patrick: When did you feel most strongly that this elite wanted to destroy you? How did it manifest itself?
Marc: It roughly coincided with a national shift in sentiment, probably between 2013 and 2017. I grew up in the '90s, and politically I was a Clinton-Gore default Democrat. There was "The Deal" at the time, capital D, which was: yes, you were a Democrat, but Democrats were pro-business, they loved tech, they loved startups. Clinton and Gore loved Silicon Valley. They loved new technology. They were always excited about what we were doing, always willing to help us if other countries came after us, always trying to support us.
Yeah, you can be a pro-business, pro-tech Democrat. That's great. You can make a lot of money. People write great articles about you, and then you give all that money away, and you become a philanthropist, and that's great.
You die, and your obituary says he was a great entrepreneur and a great philanthropist, and everything is wonderful.
Basically, starting in 2013, every aspect of this deal fell apart. And that manifested itself in a lot of ways, but first and foremost in the media coverage. The official apparatus of the mainstream media began to turn on us, and everything we did was evil. It was actually quite surprising. In 2012, social media was seen by the mainstream media as an absolute, unalloyed good because it helped get Obama re-elected, and…
Everybody knew that it would just elect the right political candidates. And then by 2016, the narrative had completely flipped: social media and the internet and technology were destroying democracy, and everything was being undermined. So the media coverage was the canary in the coal mine.
Part of it was the radicalization of the employee base, by the way. There was this weird situation where these big investment managers showed up and demanded that you take radical political positions in your firm, which was completely ridiculous at the time. And then eventually, the government itself showed up, the bureaucracy in the Trump administration started doing this, which was beyond his direct control.
But under the Biden administration, it became an organized campaign of what I would describe as sabotage, with endless prosecutions, investigations, Wells notices, debanking, censorship, attacks, attempts to destroy the entire industry in general. And that, of course, is ultimately why we're reacting. My hope is that this is over. That is, the new administration is taking a very different approach and not doing all of these things anymore.
And then my hope is that the next Democratic administration will realize that attacking tech and attacking startups is not actually necessary. In fact, it can be counterproductive because if you take Elon Musk out of your camp, there are consequences. I talk to a lot of Democrats, we support a lot of Democrats at the company, a lot of congressmen and senators, and I'm going to go talk to them again next week.
Basically, what they're telling me is, look, there's a civil war going on within the Democratic Party between those of us on one side who think the party should move back to the center and stop attacking capitalism and attacking business and attacking technology and just get back to winning elections.
And then there are those who think the party actually needs to become more radical: more separate from the other side, more extreme on economic policy, tech policy, and social policy. They're fighting over that. My hope is that they'll move back to the center.