Marc Andreessen on AI and Dynamism (Ep. 206- BONUS)

Might the kids be alright?

In this interview, recorded at a16z’s 2024 American Dynamism Summit, Tyler and Marc Andreessen engage in a rapid-fire dialogue about the future of AI, including the biggest change we’ll see in the next five years, who will gain and lose status with the rise of LLMs, why open-source is important for national security, the best and worst parts of Biden’s AI directive, the most underrated energy source, what the US can do to speed up AI deployment, what gives Marc optimism about Gen Z, which thinker helps him make sense of American capitalism, and more.

Subscribe on Apple Podcasts, Spotify, or your favorite podcast app to be notified when a new episode releases.

Recorded January 30th, 2024

Special thanks to listener Derk Cullinan for sponsoring this transcript.

TYLER COWEN: Hello, Marc. If your entrance music were to be Beethoven, which symphony and why?

MARC ANDREESSEN: [chuckles] For those of you who do care and know who Tyler Cowen is, this is how his podcasts start. This is how he intimidates his guests into submission. First of all, I just wanted to say thank you, everybody, for being here with us today. We’re really grateful that you were all able to spend time with us. Hopefully, it’s been useful. Second is I’m going to get new business cards printed up that say, “You either know who I am, or you don’t care.” [chuckles]

COWEN: Or both.

ANDREESSEN: Or both. Let’s see. I guess we have to rule out Beethoven’s Ninth Symphony because that’s the official music of the European Union, is that right?

COWEN: That’s correct.

ANDREESSEN: That’s the official anthem of the European Union.

COWEN: That’s right.

ANDREESSEN: Which is just such a terrible, mean thing for them to do to such a great piece of music like that. We should lodge a formal diplomatic protest. I guess probably Beethoven’s Fifth in retaliation.

COWEN: I would peg you as the Fifth.


COWEN: Now, how will AI make our world different five years from now? What’s the most surprising way in which it will be different?

ANDREESSEN: [chuckles] There’s a great breakdown on adoption of new technology that the science-fiction author Douglas Adams wrote about years ago. He says any new technology is received differently by three different groups of people. If you’re below the age of 15, it’s just the way things have always been. If you’re between the ages of 15 and 35, it’s really cool, and you might be able to get a job doing it. If you’re above the age of 35, it’s unholy and against the order of society and will destroy everything.

AI, I think, so far is living up to that framework. What I would like to tell you is that AI is going to be completely transformative for education, and I believe that it will be.

Having said that, I did recently roll out ChatGPT to my eight-year-old. I was very, very proud of myself because I was like, “Wow, this is just going to be such a great educational resource for him.” I felt like Prometheus bringing fire down from the mountain to my child. I installed it on his laptop and said, “Son, this is the thing that you can talk to anytime, and it will answer any question you have.” He said, “Yes.” I said, “Well, no, this is a big deal. It answers questions.” He’s like, “Well, what else would you use a computer for?” I was like, “Oh, God, I’m getting old.”


I actually think there’s a pretty good prospect that kids are just going to pick this up and run with it. I actually think that’s already happening, right? ChatGPT is fully out, and Bard and Bing and all these other things. I think kids are going to grow up with basically — you could use various terms, assistant, friend, coach, mentor, tutor, but kids are going to grow up in this amazing back-and-forth relationship with AI.

Anytime a kid is interested in something, if there’s not a teacher who can help with something or if they don’t have a friend who’s interested in the same thing, they’ll be able to explore all kinds of ideas. I think it’ll be great for that. I think it’s obviously going to be totally transformative in fields like warfare, and you already see that. The concern, quite honestly — I actually wrote an essay a while ago on why AI won’t destroy all the jobs.

The short version is that it’s illegal to do that, because so many jobs in the modern economy require licensing and are regulated. I think the concern would be that there’s just so much glue in the system now that prevents change. It’ll be very easy to not have AI healthcare or AI education or whatever because, literally, some combination of doctor licensing, teacher unions, and so forth will basically outlaw it. I think that’s the risk.

COWEN: If we think of AI and its impact in sociological terms, large language models, who will gain in status and who will decline in status? How should this affect how we think about policy?

ANDREESSEN: First of all, it’s important to qualify exactly what’s going on with large language models, which is super interesting. This thing has happened that you read about a lot in the press, which is, there was this general idea that there would be something called AI at some point, and then large language models appeared. And everybody said, “A-ha, that’s AI just like we thought it would be,” and then everybody extrapolates out.

That’s true to a certain extent, but the success of large language models was very unexpected in the field. Actually, the origin story of even ChatGPT is that this is not what OpenAI started out to do. They started out to do something different. There was one guy — his name is, I think, Alec Radford — who literally was off in the corner at OpenAI working on this in 2018, 2019. Then it basically became this revolution, building on work that had been done at Google. It was this very surprising thing.

Then it’s important to qualify how it works, because it’s not just some sort of robot brain. What it is, is you basically feed — essentially, ideally — all known human-generated information into a machine, and then you let it basically build a giant matrix of numbers and basically correlate everything.

In a nutshell, that’s what these things are. Then basically what happens is, when you ask it a question or if you ask it to make a drawing or something, it basically traverses. It essentially does a search. It does a search across basically all of these words and sentences and diagrams and books and photos and everything that human beings have created. It tries to find the optimal path through that. That’s how it generates the answer that it gives you.

Philosophically, it’s this really profound thing, I think, which is it’s basically staring. It’s like you as an individual using this machine to stare at the entirety of the creation of all human knowledge and then have it played back at you. It harnesses the creativity of thousands of years of human authors and artists and then derives new kinds of answers or new kinds of images or whatever. Fundamentally, you’re in interaction with our civilization in a very profound way.

In terms of who gains and who loses status, there’s actually a very interesting thing happening in the research right now. There’s a very interesting research question about the impact on job skills, for example, for people who work with words or work with images and are starting to use these technologies in the workforce. The question is, who benefits more? The high-skilled worker — and think lawyer, doctor, accountant, graphic designer, whatever — the high-skilled person who uses these tools to take another quantum leap in skill. That would be a theory of separation.

The other scenario is the average or even low-skilled worker who gets upgraded. Of course, just by the nature of the economy, there are more people in the middle. At least so far, there’s been a series of research studies coming back showing that the uplift to the average worker is actually more significant than the uplift to the high-skilled worker. What seems to be happening right now is actually a compression, by lifting people up. Social questions are often a zero-sum game of who gains and who loses, but there may be something here where a lot of people just get better at what they do.

COWEN: Why is open-source AI in particular important for national security?

ANDREESSEN: For a whole bunch of reasons. One is, it is really hard to do security without open source. There are actually two schools of thought on information security — computer security broadly — that have played out over the last 50 years. There was one school of security that says you want to basically hide the source code. This seems intuitive because, presumably, you hide the source code so that bad guys can’t find the flaws in it, right? Presumably, that would be the safe way to do things.

Then over the course of the last 30 or 40 years, basically, what’s evolved is the realization in the field (and I think very broadly) that actually, that’s a mistake. In the software field, we call that “security through obscurity,” right? We hide the code. People can’t exploit it. The problem, of course, is: okay, but that means the flaws are still in there, right?

If anybody actually gets to the code, they just basically have a complete index of all the problems. There’s a whole bunch of ways for people to get the code. They hack in. It’s actually very easy to steal software code from a company. You hire the janitorial staff to stick a USB stick into a machine at 3:00 in the morning. Software companies are very easily penetrated. It turned out, security through obscurity was a very bad way to do it. The much more secure way to do it is actually open source.

Basically, put the code in public and then basically build the code in such a way that when it runs, it doesn’t matter whether somebody has access to the code. It’s still fully secure, and then you just have a lot more eyes on the code to discover the problems. In general, open source has turned out to be much more secure. I would start there. If we want secure systems, I think this is what we have to do.

COWEN: What’s the biggest adjustment problem governments will face as AI progresses? For instance, if drug discovery goes up by 3X, all of a sudden, the FDA is overloaded. If regulatory comments are open, AI can write great regulatory comments. What does government have to do to get by in this new world?

ANDREESSEN: By the way, hopefully, at least the first of those two scenarios happens, maybe also the second. For anything like this, there should be a corresponding phenomenon happening on the other side, right? The government correspondingly should be using AI to evaluate new drugs. A company shows up with their drug design; there should be AI Assist for the FDA to help them evaluate new drugs.

A regulatory agency that has public comments should have AI Assist for being able to process all that information, aggregate it, and then reply back to everybody. This is kind of true — this is a very interesting thing about AI. For every possible threat you can think of AI posing, basically, there is a corresponding defense that has to get built.

I’ll pick another one. Cybersecurity people are quite, I think, legitimately concerned that AI’s going to make it easier to actually create and launch cybersecurity attacks. Correspondingly, there should be better defenses. There should be AI-based cybersecurity defenses. By the way, we see the exact same thing with drones. Weaponized AI autonomous drones are clearly a threat, as we see in the world today, so we need AI defenses against drones. The cynical view would be this is just a classic arms race — attack, defense, attack, defense — and does the world get any better if there’s just more threats and more defenses?

I think the positive way of looking at it is, we probably need these defenses anyway, right? Even if we didn’t have AI drug discovery, I think we should be using AI to evaluate drugs. Even if we didn’t have AI drones, we should still have defense against standard missiles and against enemy aircraft. Even if we didn’t have AI-driven cyberattacks, we should have AI-driven cyber defenses. I think this is an opportunity for the defenders to not only keep up but also build better systems for the present-day threat landscape.

COWEN: The Biden AI directive, what’s the best thing about it? What’s the worst thing about it?

ANDREESSEN: The best thing about it is it didn’t overtly attempt to kill AI. That was good. You never know with these things, how much teeth they’re going to try to put into it. Then, of course, there’s always the question of whether it stands up in court. Look, there were things that were being discussed in the process that were much worse, and I think much more hostile to the technology, than ended up being in it. I think that’s good news. I think it was quite benign in terms of its just flat-out directives, which is good.

People have different opinions. In my opinion, the issue with it is that it green-lit essentially 15 different regulatory agencies to basically put AI under their purview in undefined ways. We will now have, I think, a relatively protracted process of many regulators or many agencies without explicit authority in the domain basically inserting themselves into the space. Then presumably, at some point, there will be a determination of who has purview over what, but it seems like we’re in for a period of quite a bit of confusion as a result.

COWEN: How much more green energy do we need to, in essence, fuel all of this AI, and where will it come from? What do you see the prospects being for the next 20 years?

ANDREESSEN: The good news of AI — and the good news also, by the way, with crypto, because there’s always a lot of controversy around crypto and Web3 and blockchain around energy use — the good thing with these technologies, the good news from energy, is that these systems lend themselves to centralization in data centers, right? If we need a million, going to 10 million, going to 100 million, to a billion AI chips, they could be distributed out all over the place, but they can also be highly centralized.

Because you can highly centralize them, you can think not just in terms of building a server. You can think about building, basically, a data center that’s an integrated thing from the chip, basically, all the way to the building or to the complex of buildings. Then, the way those modern data centers are built by the leading-edge companies now is, they’re built on day one with an integrated strategy for energy and for cooling.

Basically, any form of energy that you can do in a very efficient way, in a very clean way — or new energy technologies — AI is a use case for developing and deploying that kind of power. Just building on what we’ve seen from internet data centers, that could be geothermal, that could be hydroelectric, that could be nuclear fission, that could be nuclear fusion, solar, wind, big battery packs, and so forth. The aspirational hope would be that this is another catalyst for a more advanced rollout of energy. Even if there’s a net energy increase, the motivation to get to higher levels of efficiency will be a net good, helping us get to a better energy footprint.

COWEN: Which of those energy sources in your view is most underrated?

ANDREESSEN: Oh, nuclear fission, for sure, is the most underrated today. Yes, if you could wave a magic wand, we ought to be doing what Richard Nixon proposed in 1971, right? We ought to build what he called Project Independence, which was to build 1,000 new nuclear power plants in the US, cut the entire US grid over to nuclear electricity, go to all-electric cars, and do everything else.

Richard Nixon’s other great corresponding creation, the Nuclear Regulatory Commission, of course, guarantees that won’t happen. [chuckles] The plan [isn’t] exactly on track. But we could do it, either with existing nuclear fission technology, or — there’s actually a significant number now of new nuclear fission startups, as well as fusion startups, working on new designs. And so this would certainly be a great use case for that.

COWEN: If the nations that will do well in the future are strong in AI and strong in energy, thinking about this in terms of geopolitics, which countries rise in importance, for better or worse?

ANDREESSEN: Well, okay, different things. I’d add a couple more things to that, which is: which [countries] are in the best position to invent these new technologies? Then there’s a somewhat separate question of who’s in the best position to deploy, because it doesn’t help you that much to invent if you can’t deploy it. I would put that in there.

Look, I would give the US very, very high marks on the invention side. I think we’re the best. I think we have the best R&D innovation capability in the world in most fields — not all, but most. I think that’s certainly true of AI. I think that’s, at least, potentially true in energy. I don’t know whether it actually is, but it could be. We should be able to forge ahead on that. China is clearly the other country with critical mass in all of this. You could quibble about the level of invention versus fast follow and talk about IP acquisition, things like that.

Nevertheless, whatever your view is, they’re moving very quickly and aggressively and have critical mass, big internal domestic market, and a huge number of researchers and a lot of state support. I think, by and large, for sure on AI, and then I think probably also in energy, we’re probably looking at primarily a bipolar world for quite a while and then spheres of influence going out.

I would say Europe is a dark horse in a strange way, in that the EU seems absolutely determined to ban everything, to put a blanket ban on capitalism and, within that, ban AI and ban energy. On the other hand, we have this incredible AI company called Mistral in France, which is the leading open-source AI company right now and one of the best AI companies in the world. The French government has actually really been stepping up to help the ecosystem in Europe. I would actually like to see a tripolar world. I’d like to see the EU fully punch in, but I’m not sure how realistic that is.

COWEN: Let’s say you’re in charge of speeding up deployment in the United States. What is it you do? State level, local level, feds, what should we all be doing?

ANDREESSEN: Of AI specifically?

COWEN: Everything.

ANDREESSEN: Of everything?

COWEN: Because it’s all increasingly interrelated, right?

ANDREESSEN: Yes, it is.

COWEN: AI, energy, biomedicine, everything.

ANDREESSEN: Yes, and AI takes you straight to chips, which takes you straight to the CHIPS Act, which —

COWEN: Exactly.

ANDREESSEN: — has not yet resulted in the creation of any chip plants, although it might someday. Look, the most basic observation is maybe the most banal, which is: Stagnation is a choice. Decline is a choice. As Tyler has written at great length, the US economy downshifted its rate of technological change basically starting in the 1960s. Technological change, as measured by productivity growth in the economy, was much faster prior to the last 50 years than in the most recent 50 years.

You have a big argument as to exactly what caused that, but a lot of it is just an imposition of just blankets and blankets and blankets of regulation and restrictions and controls and processes and procedures and all the rest of it. Then you could start by saying, step one is do no harm. This is our approach on AI regulation, which is, don’t regulate the technology. Don’t regulate AI as a technology any more than you regulated microchips or software or anything like operating systems or databases.

Instead, regulate the use cases. The use cases are generally regulated anyway. It’s no more legal to field a new AI-designed drug without FDA approval than it is a conventionally designed drug. Apply the existing regulations as opposed to hamstringing the technology. That’s one. Energy, again, is just pure choice. We could be building the 1,000 nuclear plants tomorrow.

My favorite idea there, which always gets me in trouble and so I can’t resist, is that the Democratic administration should give Koch Industries the contract to build 1,000 nuclear reactors. Everybody gets revenge on everybody else. The Democrats get Charles Koch to fix climate change, and then Charles gets all the money for the contracts. Everybody ends up happy. Nobody has bitten on that idea yet when I’ve pitched it, but maybe I’m not talking to the right people. Look, we could be doing that. We’ll see if we choose to.

The chip plant thing is going to be fascinating to watch. We passed the CHIPS Act. In theory, the funding is available. The American chip companies are generally pretty aggressive, and I think they’re trying pretty hard to build new capacity in the US. There was this actually very outstanding article in the New York Times some months back by Ezra Klein, where he goes through and says, “Okay, even supposing the money’s available to build chip plants, is it actually possible to build chip plants in the US?” He talks about all of the different regulatory and legal requirements and obligations that get layered on top. He was speculating as to whether any of these plants will actually get built.

Again, I think here we have just a level of fundamental choices as a society, which is: do we want to build new things? I can’t tell you, at least on the West Coast, how exciting it’s been for Las Vegas to get the Sphere, because it’s now impossible to visit Las Vegas without — everybody is always complaining, “The Egyptians built the pyramids. Where are our pyramids?” It’s like, “Ah, we have a Sphere.”


Just flying into Vegas gets your juices flowing, gets you all fired up, because this thing is amazing. By the way, I’m just talking about the view from the outside. I understand that the thing on the inside is also amazing. We clearly can do that, at least in Vegas. Where Ben lives now, in London, I think they just gave up on building the Sphere, so that’s the other side of it. We do have to decide whether we want these things to happen. It was a little bit dispiriting to see the liquefied natural gas decision that just came down.

COWEN: Are the roots of this stasis quite general and quite cultural? Because parents coddle their children much more, there are higher rates of mental illness amongst the young, young people — it seems — have less sex, along a lot of cultural variables — the percent of old music people listen to compared to new music. There seems to be a more general stagnation. How would you pinpoint our loss of self-confidence or dynamism? Where’s that coming from?

ANDREESSEN: Well, first of all, to be clear, we’re very much in favor of young people not dating because that’s very distracting from their work at our startups.


That works out fine. Unfortunately, in our industry, we have long experience with not having dating lives when we’re young, so that works out well. [chuckles] It’s not all bad. It is really interesting. Look, Silicon Valley has all kinds of problems, and we’re a case study for a lot of it. It’s not like you can build anything in Silicon Valley, right? Our politicians absolutely hate us. They don’t let us do anything if they can avoid it. We have our issues.

The view from the Valley is, yes, a lot of kids are being brought up and trained to basically adopt a fundamentally pessimistic or, how to put it, stagnation-oriented, inert posture, to have very low expectations. Basically, a lot of what passes for education now is teaching people how to complain, which they’re very good at. The complaining has reached operatic levels lately. There is a lot of that.

Having said that, look, I’m also actually really optimistic. In particular, I’m quite optimistic about the new generation coming up — I think it’s Gen Z, and then Gen Alpha, and then whatever an eight-year-old is. We’re seeing more and more kids coming up who have been exposed to a full load of basically cultural programming in their education — programming that says you should be depressed about everything, you should be upset about everything, you should have low ambitions, you shouldn’t try to do these things. And they’re coming out with a very radical, hard shove in the other direction. They’re coming up with tremendous energy and tremendous enthusiasm to actually do things, which is very natural, because kids rebel.

If the system is teaching stagnation, then at least some kids will come up the other way and decide they really want to do things in the world. I think entrepreneurs in their 20s now are a lot better than certainly my generation. They’re frankly more aggressive than the generation that preceded them, and they’re more ambitious. Now, we’re dealing with a minority, not a majority, but every hour I get that I can spend with 20-year-olds is actually very encouraging.

COWEN: One emotional sense I get from your walk-on music, Beethoven’s Fifth Symphony, is just that the stakes are remarkably high. Now, if we’re looking for indicators to keep track of whether, in essence, things are going your way — greater dynamism, freedom to build, willingness to build, American dynamism — what should we track? What should we look at? How do we know if things are going well?

ANDREESSEN: Look, I do not come here and do not come to the world with comprehensive answers. The overall answer is, productivity growth in the economy is a great starting point. Economic growth is a great starting point. The overall questions are there. Most of our economy is dominated by incumbent institutions that have no intention, I don’t think, of changing or evolving unless they’re forced to.

Certainly, most of the business world now is one form of oligopoly or another that has various markets locked up. I don’t think there’s some magic bullet to hugely accelerate things. Having said that, I think attacking from the edges is the thing that can be done, which is basically what we do, what Silicon Valley does. Then when you attack from the edges the way that our entrepreneurs do, a lot of the times, they don’t succeed. It’s a high-risk occupation with a lot of risk of failure.

When they succeed, they can succeed spectacularly well. We have companies in the American economy that were venture-backed in the 1970s and, actually, even some that were venture-backed in the 1990s and 2000s that are now bigger than most national economies, right? Was it Apple? I think Apple’s market cap is bigger than the entire market cap of the German stock market.

COWEN: I think that’s right.

ANDREESSEN: Just one company. Apple was a venture-backed startup. Two kids in a garage in 1976, not that long ago. It’s bigger than the entire German industrial public market. Attacking from the edges, sometimes you can get really, really big results. Sometimes you just prod the system. Sometimes you just spark people into reacting, and that pushes everything forward.

Then the other question always is, from our standpoint, what are the tools that startups have to try to really change things? There’s a bunch of such tools, but there are always two that really dominate. One is just, what’s the magnitude of the technological change in the air that can be harnessed? We’re always looking for the next supercycle, the next breakthrough technology, where you can imagine 1,000 companies doing many different things, all punching into incumbent markets. AI certainly seems like one of those.

Then, yes, the other is just the sheer animalistic ambition, energy, animal spirits of the entrepreneurs and of the teams that get built. Like I said, I think the best of the startups today are more aggressive, more ambitious, more capable. The people are better. They execute better than at least I’ve ever seen. I think that’s also quite positive.

COWEN: Who’s a social thinker who helps you make sense of these trends?

ANDREESSEN: Oh, yes, my favorite is James Burnham. He’s my favorite.

COWEN: Why Burnham?

ANDREESSEN: Why Burnham? Burnham is not famous, but he should be famous. Burnham has a fascinating story. He was a thinker in the 20th century who talked a lot about these issues. He started out life, as a lot of people did in the 1920s and ’30s, as a dedicated Trotskyite, a full-on communist. He was a very special, very brilliant guy. He was such a dedicated communist that he was close personal friends with Leon Trotsky, which is how you really know you’ve made it when you’re a communist.

He would have these huge arguments with Trotsky, which was not the safest thing in the world to do. Apparently, he got away with it. He was a very enthusiastic communist revolutionary through the ’30s. Then in the ’40s — he was a very smart guy — he started to figure out that that was a bad path. He went through this process of rethinking everything. By the 1950s, he was so far to the right that he was actually a co-founder of National Review magazine with William Buckley, who always said Burnham was the intellectual leading light at National Review.

He’s got works that he wrote that will accommodate the full spectrum of politics. In his middle period — this is in the 1940s — he was trying to figure out where things were going. There were enormous questions in the 1940s because it was viewed as a three-way war for the future between communism on the far left, fascism on the far right, and then liberal democracy floating around there somewhere.

His best, most well-known book is called The Managerial Revolution, which talks a lot about the issues we’ve been discussing, and it was written in 1941. It’s fascinating for many reasons, part of which is that he was still mad about communism — he debunks communism in it. Also, they didn’t yet know who was going to win World War II. It talks about this battle of ideologies as if it were still an open question, which is super interesting.

He did this very Marxian analysis of capitalism. He made an observation that I see play out every day, which is that there are fundamentally two types of capitalism. There’s the original model of capitalism, which he calls bourgeois capitalism — think of Henry Ford as the archetype of that. A capitalist starts a company, runs the company, name on the door, owns the company, controls the company, is the dictator of the company — complete alignment of a company with an individual.

Then he talks about this other form of capitalism emerging at that time, called managerial capitalism. For managerial capitalism, think about today’s modern public companies — think about Walmart or whatever, any public company where, in theory, there are shareholders. Really, what there are, are millions and millions of shareholders that are incredibly dispersed. Everybody in this room owns some three shares of Walmart stock in a mutual fund somewhere.

You don’t wake up in the morning wondering what’s happening to Walmart. It doesn’t even occur to you to think about yourself as an owner. What you get instead is this managerial class of actually both investors like fund managers and then also executives and CEOs who actually run these companies. They have control, but without ultimate responsibility, without ultimate ownership.

The interesting thing he said about that is, he said, “Look, managerialism is, basically, it’s not that it’s good or bad. It just is necessary because companies and institutions and governments and all the rest of it get to the point where they’re just too big and too complicated for one person to run everything. You’re going to have the emergence of this managerial class who’s going to run things.”

There’s a flip side of it, which is, the people who are qualified to be managers of large organizations are not themselves the kind of people who become bourgeois capitalists. They’re the other kind of person. They’re often good at running things, but they generally don’t do new things. They generally don’t seek to disrupt or seek to create or seek to invent.

One way of thinking about what’s happened in our system is that capitalism used to be bourgeois capitalism, and it got replaced by managerial capitalism without actually changing the name. That will necessarily lead to stagnation. By the way, it may be necessary that that happens, because the systems are too complicated, but it will necessarily lead to stagnation.

Then what you need is, basically, the resumption of bourgeois capitalism to come back in and, at the very least, poke and prod everybody into action. That, aspirationally, is what we do and what our startups do.

COWEN: Marc Andreessen, thank you very much.

ANDREESSEN: Good. Great. Thank you, everybody.


Photo Credits: Yassine El Mansouri/elman studio, llc.