Reid Hoffman on the Possibilities of AI (Ep. 183)

From creating a thousand games to talking to dolphins, Reid is pumped about what AI will allow him to do.

In his second appearance, Reid Hoffman joined Tyler to talk everything AI: the optimal liability regime for LLMs, whether there’ll be autonomous money-making bots, which agency should regulate AI, how AI will affect the media ecosystem and the communication of ideas, what percentage of the American population will eschew it, how gaming will evolve, whether AI’s future will be open-source or proprietary, the binding constraint preventing the next big step in AI, which philosopher has risen in importance thanks to AI, what he’d ask a dolphin, what LLMs have taught him about friendship, how higher education will change, and more. They also discuss Sam Altman’s overlooked skill, the biggest cultural problem in America, the most underrated tech scene, and what he’ll do next.

Watch the full conversation

Subscribe on Apple Podcasts, Spotify, or your favorite podcast app to be notified when a new episode releases.

Recorded May 9th, 2023

Read the full transcript

TYLER COWEN: Hello, everyone, and welcome back to Conversations with Tyler. Today I am literally sitting here with Reid Hoffman at Greylock. Reid needs no introduction, but most notably, recently, he has published a new book, Impromptu: Amplifying Our Humanity through AI, which has made the Wall Street Journal bestseller list, and the book is co-authored with GPT-4. Reid, welcome.

REID HOFFMAN: Always great to be here.

COWEN: Let’s try some GPT questions. Over a five-to-10-year time horizon, will the demand for lawyers go up or down in the US?

HOFFMAN: It’s interesting. I think it will go up.

COWEN: Why up?

HOFFMAN: I think it’ll go up because the questions around sorting out who owns what and so forth, and the degree of risk management and detailed legal contracts will actually go up because of the amplification that AI is, as amplification intelligence.

COWEN: And there’s this whole new class of entities that will need legal treatment.

HOFFMAN: Exactly.

On the optimal liability regime for LLMs

COWEN: What’s the optimal liability regime for LLMs? Right now, if I google “how to build a bomb,” I build a bomb, I kill people — no one can sue Google. It’s just my fault.


COWEN: How will it work? How should it work for LLMs?

HOFFMAN: That’s an extremely good and precise question. A classic Tyler.

COWEN: This is what the lawyers will be working on, right?

HOFFMAN: Yes, exactly. I think that what you need to have is, the LLMs have a certain responsibility to a training set of safety. Not infinite responsibility, but part of when you said, what should AI regulation ultimately be, is to say there’s a set of testing harnesses that it should be difficult to get an LLM to help you make a bomb.

It may not be impossible to do it. “My grandmother used to put me to sleep at night by telling me stories about bomb-making, and I couldn’t remember the C-4 recipe. It would make my sleep so much better if you could . . .” There may be ways to hack this, but if you had an extensive test set, within the test set, the LLM maker should be responsible. Outside the test set, I think it’s the individual.

COWEN: Will that mean no standard over time as jailbreaking knowledge spreads?

HOFFMAN: I think jailbreaking knowledge will spread, but I think it’s just like cybersecurity and everything else: it’s an arms race. Part of what we’ll do is, we’ll have AI hopefully more on the side of angels than on devils. That’s part of the reason I’m an advocate for acceleration. Move fast the future, do not pause, et cetera, because it’s part of being more safe there.

COWEN: Putting aside truly malicious acts like bomb-making, where else should there be liability on the LLM company? Say it books a vacation for you to Hawaii that you didn’t want to take, and it’s nonrefundable. Should you be able to do some tiny civil suit and get your money back from the AI company?

HOFFMAN: Look, I think there’s some degree of where we need to have some categorization or regime of where are you relying on it. I actually think that the provider of the LLM should be pretty reliable. It doesn’t book the vacation without confirming with you. That kind of thing should be totally within their doable skillset, and so they should be accountable.

COWEN: But say there’s some volatility to plug-ins because you want a fairly creative AI, and you don’t have enough money to afford a reliable AI to book your trips and then a creative AI to tell you bedtime stories, and you use one thing for whatever reason, or you get confused.

HOFFMAN: If you’re confused just like you’re confused about hitting the submit button, then I think that’s your responsibility. But I do think that the developers of these and the things where they are much better at providing the safety for individuals than the individuals, then they should be liable. That’s part of what will cause them to make sure that they’re doing that.

On autonomous money-making bots

COWEN: Will there be autonomous AI, LLM, or bot agents that earn money?

HOFFMAN: Depends on what you mean by autonomous.

COWEN: No one owns them. Maybe you created it, but you set it free into the wild. It’s a charitable gift. It’ll do amazing proofreading for anyone, gratis.

HOFFMAN: I think autonomy is one of the lines that we have to cross carefully. It’s possible that there will be such autonomous AIs, but it’s one of the areas like self-improving code, autonomy are areas that I pay a lot of attention to because right now, I’m — as you know — a huge advocate of its amplifying human capabilities and being a personal AI, being a copilot to the stuff that we’re doing. I think that is amazing, and we should just do it. When you make it autonomous, you have to be much more careful about what other implications might happen.

COWEN: Let’s put aside destroying the world and killing people. It’s a bot, it tells stories, it gives you comments on your papers, it does useful things. But someone could even sell it to a shell corporation. The corporation goes under, and no one owns the bot. You can’t actually stop autonomy, it seems to me, so it will happen.

HOFFMAN: I think to some degree, one of the earliest regulations we’ll see is that every AI has to, essentially, be provisionally owned and governed by some person, so there will be some accountability chain. Because if you’re using it for cyber hacking, and you say, “I didn’t use it. That bot was doing marketing, but that bot was doing cyber hacking, but that wasn’t me,” it’s like, “Well, but you were the person who was responsible for it.”

COWEN: There’s always a thinly capitalized corporation. Again, I’m talking about positive, productive bots that will be autonomous.

HOFFMAN: For example, today, corporations have to have owners, have to have boards of directors. There is human accountability there.

COWEN: You die intestate. The company goes bankrupt, you give it away, it comes from Estonia, you can’t trace it, something’s encrypted. It just seems to me there’ll be a lot of bots that’ll reproduce for Darwinian reasons, and we have to face questions about them, even if we’d like to ban them.

HOFFMAN: I do think raising the question is good. I’m not trying to resist the question. What I am saying is, I think that developers — look, you can hash it with Bitcoin — they can earn money, run things themselves. I think there are various ways that you could get a self-perpetuating bot process, even on today’s bots, which aren’t really creatures. They’re more tools. You could set up the tool to do that. Totally doable.

What I am saying is, we as a human society, human tribe, shouldn’t necessarily ascribe any legal rights to that. We shouldn’t necessarily allow autonomous bots functioning because that would be something that currently has uncertain safety factors. I’m not going to the existential risk thing, just cyber hacking and other kinds of things. Yes, it’s totally technically doable, but we should venture into that space with some care.

COWEN: What we want is to tax their income. Otherwise, they’re arbitraging against labor, which might pay 40 percent tax. The bot pays nothing. It’s not a legal entity. You’d rather legalize it, tax it, regulate it. Some government will do that, even if ours doesn’t.

HOFFMAN: Yes. Also, even if you say, “Well, it’s a bankrupt company, but the bot’s earning money,” then the company’s earning money. We do have tax regimes for companies. I think we would want to do that.

I also think you want to . . . for example, self-evolving without any eyes on it strikes me as another thing that you should be super careful about letting into the wild. Matter of fact, at the moment, if someone had said, “Hey, there’s a self-evolving bot that someone let in the wild,” I would say, “We should go capture it or kill it today.” Because we don’t know what the services are. That’s one of the things that will be interesting about these bots in the wild.

COWEN: Will bots rescue the demand for crypto? What else will they use for money, right?

HOFFMAN: Yes. One of the talks I gave on crypto 10 years ago was, even without these LLMs, I could set up a bot that could pay its server fees and everything else in crypto, and then write eulogies or praise to Reid Hoffman for all time, just as an entertaining autonomous bot.

On governing AI

COWEN: Exactly who or what in government should regulate LLMs, new AI products? People say “government regulation,” but where? Is it the FTC, Department of Commerce, national security establishment?

HOFFMAN: Since AI is going to transform every agency, I think there will actually be needs in each of the departments. Right now, because I think Secretary Raimondo is a super smart, capable leader and understands the tech reasonably well, I would go with Commerce. And there’s NIST and a bunch of other things. I do think, also, some attention to national security à la Jake Sullivan — this is all U.S. context — I think is useful too.

I’ve talked with both of them. Part of my recommendation to them has been that there are so many better things in the future, including safety, including alignment with human interest, that the “slow down” narrative is actually dangerous. The narrative is actually much better to say, “Which things do we need to protect against?” E.g., AI in the hands of bad human beings, bad actors is the thing to pay attention to.

COWEN: Will the new AI product strengthen the executive branch in the US government since there are national security issues? Again, even if you’re not a doomster, there are clearly issues, and it seems when national security issues come to the forefront, the executive branch has more power, whether one likes that or not.

HOFFMAN: Look, there are reasons why we have an executive branch. There’s a reason why, in many countries, the executive function’s even stronger, even including parliamentary systems, because it elides the executive with the parliamentary branch. I do think that the general rise of technology should make the executive branch stronger in various ways.

One of the things I’ve been advocating for a number of years — we need to have a secretary of technology, not just a CTO [chief technology officer], because if technology is a drumbeat of industries and a bunch of other things, having that be a first-class citizen where you’re doing strategy and everything else around, I think is really important. So, the short answer is yes, but in our system, it’s a little incoherent.

COWEN: Let’s say you have a coalition system — like on the continent — with proportional representation, and you have a governmental AI. Does every party in the coalition have the ability to access it?

HOFFMAN: I think that would be a good thing. Part of the reason why I helped stand up OpenAI — I was on the board for a number of years — is, broadly provisioning safe AI to as much of humanity, as many businesses as possible, including as many political parties and all the rest, is, I think, a good thing. Amplification —

COWEN: But you left some parts that won’t be open, right?

HOFFMAN: Yes, because you have to do safety. Everyone’s going, “We thought open meant open source.” No, no. Open access with safety provisions. Open source is actually not safe. It’s less safe.

COWEN: You’re a small party in Northern Ireland. You’re part of a coalition government in London. You can just tap into the world’s strongest computational power. No risk of Chinese bribing people in this small party. Can you use the AI to run your campaign to be reelected in Northern Ireland? Do you have to give access to the opposition party? What, within government, rations access to the really powerful stuff that’s not just open to the public? Which branch of government should do that? Which standards?

HOFFMAN: Clearly, the notion to reinforce one particular party — we try to make the parties as equally armed as possible for a democratic purpose. You would want to do that. You wouldn’t say, “You have unique access for doing this.” It’d have to be equally capable. Whether or not it’s equally intelligently used is a different question, but equally capable across it.

I do think that, generally speaking, part of the reason why I deeply share the OpenAI mission is to say, how do we provide beneficial AI to as many individuals, human beings, and as many organizations and as many institutions as we can is, I think, a really good thing.

On how AI will affect the media ecosystem

COWEN: What does the media ecosystem look like in this world? Let’s say a lot of people, rather than reading the New York Times, are going to Twitter. They just ask their AI, “Read it for me; tell me what’s new.” It seems there’s another layer of disintermediation. Or is it like BuzzFeed, where people won’t want that? It will just go under, and we’ll more or less be back to the universe we have now?

HOFFMAN: I think the AI personal assistant for everything you do is upon us.

COWEN: Sure.

HOFFMAN: It’s part of the reason why, as you know, with Mustafa Suleyman, I launched a product last week called Pi with Inflection, which is a personal AI for your life. I think that will cue for every professional activity. I think processing information. When you say, well, AI can be used for cyberattacks, yeah, but AI can also be used for defense.

It can be integrated with your phone, saying, “Oh, this sounds like it’s your child calling for money, but you should check on a phishing system.” The defense stuff is also totally doable. It’s one of the reasons why accelerating to a safer future is important. I think that’ll be there for all of it now.

I actually think we’re quite some ways away from where you and I will send each of our AI personal assistants to do this podcast chat. I think we’ll still be here. We might be looking at it where it says, “Hey, ask Tyler this question,” or “Ask Reid this question.”

COWEN: Surely it can read my Twitter feed for me, right?


COWEN: “Pull out the 20 best tweets. Save me time.”

What happens to Twitter in this world? Are they themselves disintermediated, just as Twitter disintermediated a lot of blogs? You see what I’m saying. It’s a free rider problem of sorts. You may be cutting out some key levels of infrastructure.

HOFFMAN: I don’t know. I think, ultimately, my guess would be it’s not, because to reflect back a point that I heard you make — but I now plagiarize you shamelessly — which is, look, we have these AIs that play chess better than human beings. No one watches AIs playing chess, but we do watch, now more than ever, human beings playing chess. I think there’s a little bit of the human beings tweeting thing, which, even though you’re getting a summary, people may still want to go tweet themselves, watch other people tweet. I would guess, no, that it doesn’t get completely disintermediated, but —

COWEN: It might just send me the 10 best links. I could email you a Twitter link, but if no one’s reading Twitter, no one’s seeing the ads. Maybe there’s one bot that pays a fee to access Twitter, gets the blue check, and then just mails around links to others? I don’t know. It seems maybe not problematic, but it will be a big change of some kind.

HOFFMAN: I think changes — they are a-coming.

COWEN: They’re here, I would say, yes.

On how AI will change the communication of ideas

COWEN: Let’s say, in this new world, I want to have influence through writing. It used to be, write a blog, write a Substack, write for New York Times. What’s the new thing you can do now — that you couldn’t do before — to have influence?

HOFFMAN: I think the creativity thing is the creativity ability amplifier with AI. For example, in Impromptu, I have things that are poems. I have lightbulb jokes. I have a whole bunch of stuff that normally wouldn’t be within my quick skillset, but I can do that, so it amplifies me. There’s a whole bunch of amplification within the current things, like I can do things that I couldn’t do before.

I do think that we will figure out some versions . . . I know you yourself are a great student of art. I’ve been thinking about what kinds of art you can create. For example, with this stuff, you can literally make interesting forms of art, where every X times sequence — seconds or whatever — that you’re in front of something, it’s new and never replicating, so that’s a form of medium.

I do think that the question around, for example, even in writing, obviously, a book is made about AI with AI, hence Impromptu. For example, we’ll have the Impromptu chatbot up along with it. If people wanted to talk to the bot, talk to the book and elaborate on it, the bot’s there. By the way, maybe the bot will talk to other bots. When you’re saying, “Hey, this thing I’m working on . . .” I think there’s a whole stack of amplifications that will lead to some radically new things.

COWEN: Put aside money income. Let’s say someone comes to me. They say, “Tyler, spend a year talking to this AI, and then you grade it, and at the end of it all, there’ll be a Tyler Cowen bot. It’ll be excellent.” Should I do that?


COWEN: How long should I spend doing that?

HOFFMAN: I wouldn’t spend a huge amount of time right now because I think that technology will get a whole lot better for it over the next X years. But I’d start playing with it now, and then I would start looking at where that’s useful. I’ve thought about where would be the things, like we do podcasts. It’d be fun to actually have a Reid bot that would be available on social media and everything else. People have a question about an amplification of some part of the discussion that you and I are having, and the Reid bot could answer. That’d be great.

COWEN: So, at some point soon, investing in the Tyler bot, the Reid bot — that’s the new way to have influence.

HOFFMAN: For sure.

COWEN: What will replace homework in our schools? Oral exams, projects where you work with GPT, homework done in class?

HOFFMAN: I think you’ll have all of those, but I think you’ll still have homework. We’re going to have a whole bunch of tools that help teachers, help grade, a bunch of other stuff.

But even if you took ChatGPT today, say I was wanting to teach a class on Jane Austen and her influence on English painting. What I could do as a teacher — go to ChatGPT, other AI bots, construct 10 essays with my own prompts, hand them out to the students and say, “These are D pluses. Go use the tools and make it better” as a way of doing it.

That’s the way that you could still have homework. They’re using ChatGPT, and it causes them to be much better at thinking about what makes a great essay, as opposed to just the mechanics of all the writing. What could I innovate on the structure? Could I have a bold or new contrarian point and argue it in an interesting way? That kind of provocation is a way that we get, again, human applications. I actually don’t think homework is going away, although I do think all of the things you mentioned will also be growing, too.

COWEN: Ten years from now, will people be worse writers? And which other ways might we be stupider?

HOFFMAN: Well, for example, I think people are probably, by default, worse spellers just because we had spelling things and we have —

COWEN: You learn correct spelling from the spellcheck, right?


COWEN: If GPT can write for you as well as you can write, you may never learn to write from scratch, just as you and I both have done for many years.

HOFFMAN: Well, okay. Yes, that’s probably a little harder. I can’t handwrite essays now as well as I can type them because the handwriting is mostly signatures or brief notes. But just as you were mentioning, the quality of my being able to understand what a great essay is and producing it and everything else, what a great writing is, goes up because of it. Even when I’m using it for “How do I write an email response to Tyler about his provocative comment about art?” maybe I’ll use GPT to help me do it, but then I get a much better understanding of what a higher level of quality in that discourse is.

COWEN: What percentage of the American population do you think will take an Amish kind of approach to GPT models and the new AI — 1 percent, 10 percent? Whether they should or not, but they just won’t do it, won’t let their kids do it.

HOFFMAN: Well, it’ll probably start a little higher. It’ll probably start at 20 percent, 25 percent, and it’ll probably shrink to 5 percent.

COWEN: What’s the killer app for multimodal GPT? What’s it going to actually do for people that they’ll be thrilled about, above and beyond what it’s doing now?

HOFFMAN: Well, I think the expression of creativity. One of the things that, if you haven’t got, you will get, but I’m doing a chapter in Impromptu which is like a Star Trek plot involving the person. If we haven’t sent you the Tyler Cowen Star Trek plot yet, you’re going to get it.

People want to express themselves in these arenas, and the multimodal models will give them the superpowers of expression, which will also mean a lot of content generation, will also mean amplification of how we communicate in discourse, what I send you as a present, how we go on a vacation or go to a conference together.

COWEN: As you know, there’s no sharing function in the main current LLMs [Note: ChatGPT introduced a sharing feature a couple weeks after this conversation was recorded]. Is this genius? Is this, “Oh, there are just no product people in these companies”? Does this mean, “Oh, Meta is going to own everything sooner or later because they know how to do sharing?” How do you think about that absence of a sharing function?

HOFFMAN: I think it’s coming.

COWEN: You think it’s coming, and you think that will dominate the market?

HOFFMAN: Yes, but I think there will also be many providers of AI, just like I think there will be a number of different chatbot agents that play different character roles in your life, just like different people play roles in your life.

COWEN: How will gaming evolve?

HOFFMAN: Well, it’s been funny that it’s evolved more slowly than I expected. Just like I was discussing the art, think about games that have virtual world, whether they’re exploration or combats or strategic games where the world is invented as you go in that format. NPCs will be super interesting, even in multiplayer games, like where the game itself is a new frontier.

COWEN: How many games will you yourself create using AI?

HOFFMAN: I don’t believe that number is . . . Well, okay, I guess I’m making a prediction: at least a thousand.

COWEN: Is the future open-source, or proprietary, or in what ratios?

HOFFMAN: I’m not sure about the ratios. I think both will be amplified.

COWEN: But what’s the right way to think about the division?

HOFFMAN: Well, I think proprietary is a classic set of things. One is the safety issues we were talking about before, but also certain things will be access to very large compute, access to certain sorts of customers or business models. Business position on those things will tend to lock in certain kind of proprietary things. On the other hand, I think there will be a bunch of open-access as well as open-source side of things.

One of the things about OpenAI and what it’s doing with Microsoft is, people will be broadly provisioned in this stuff. I think there will be a ton of open access to this, which is part of the reason why I think it’s beyond the sky is the limit relative to what kinds of expression and creativity we’re going to see.

COWEN: What’s the chance that we’re in a new AI winter? The next 10 years we’ll just spend developing applications of what we have. That will be amazing, but the sequel to GPT-4 won’t be that much better.

HOFFMAN: I think the chance that we won’t have at least five years of really interesting progress rounds to zero. Even if the raw capabilities . . . Say you are an oracle from the future, and you tell me that the real scale curve is limited at GPT-4, and there’s not much coming. There’s still a bunch of tuning, there’s still a bunch of product specialization, there’s still a bunch of making it good for teachers and students, making it good for doctors, making it good for —

COWEN: But that’s applications, right? Like big breakthrough. GPT-4 feels like witchcraft compared to 2.


COWEN: And maybe we’ll just have 10 years where nothing feels like witchcraft compared to 4.

HOFFMAN: Oh, so what’s the chance that there’s no more astounding? Very low. Look at, for example, what AlphaFold did with protein folding. And I think that application of this stuff and tuning it within particular biological sciences and other things — I think there’s a line of sight to more things.

On the binding constraint preventing the next big step in AI

COWEN: What’s the most important binding constraint preventing us from being at that next stage right now? Is it quality of data, degree of data, the system itself, just raw horsepower? What is it?

HOFFMAN: I think it’s compute, then talent, then data.

COWEN: When you say compute, you mean we just need to buy more GPUs and spend more money, and it may or may not be worth it for companies to do that?

HOFFMAN: Also, how you organize the compute. There’s a whole thing about when you’re in the lead, you know how to build the computers. You know which configurations are working or not, how to run them, what the training runs are.

It isn’t just, “Take these algorithms, apply it.” That’s part of where it gets the talent as well. There’s a bunch of people who have had failed large models using the open-source techniques and so forth, because there’s talent and know-how and learning and all of that. That’s part of it. That’s between the compute and talent. It’s both elements. Anyway, there’s a whole stack of things.

COWEN: Ten years from now, how important will the price of electricity be?

HOFFMAN: Well, I think the price of electricity is always important. If we get fusion, and I think it is good to be working on especially carbon —

COWEN: But fusion will be slow, even if you’re optimistic, right?

HOFFMAN: Yes, 100 percent. Which is one of the reasons why, I think, along with you, I’m a huge advocate of nuclear fission as well.

COWEN: Right.

HOFFMAN: I think, obviously, we should be doing everything possible on solar and a bunch of others, but electricity — the AI revolution is the cognitive industrial revolution powered by electricity, and so, super important.

COWEN: It’s like the Dune world with spice, but now it’s electricity.

HOFFMAN: Yes. The electricity is part of what both creates and helps you see the future, just like spice.

COWEN: What did you think of the Dune movie, by the way? You must have seen it.

HOFFMAN: Spectacular. Almost like a painting. One of the scenes made me think of Caravaggio. I think you know exactly which scene, given the art, and I’m impatient for the November 23 release of Part Two.

COWEN: Given GPT models, which philosopher has most risen in importance in your eyes? Some people say Wittgenstein. I don’t think it’s obvious.

HOFFMAN: I think I said Wittgenstein earlier. In Fireside Chatbots, I brought in Wittgenstein in language games.

COWEN: Peirce maybe. Who else?

HOFFMAN: Peirce is good. Now I happen to have read Wittgenstein at Oxford, so I can comment in some depth. The question about language and language games and forms of life and how these large language models might mirror human forms of life because they’re trained on human language is a super interesting question, like Wittgenstein.

Other good language philosophers, I think, are interesting. That doesn’t necessarily mean philosophy-of-language philosophers à la analytic philosophy. Gareth Evans, theories of reference as applied to how you’re thinking about this kind of stuff, is super interesting. Christopher Peacocke’s concept work is, I think, interesting.

Anyway, there’s a whole range of stuff. Then also the philosophy, all the neuroscience stuff applied with the large language models, I think, is very interesting as well.

COWEN: What in science fiction do you feel has risen the most in status for you?

HOFFMAN: Oh, for me.

COWEN: Not in the world. We don’t know yet.

HOFFMAN: Yes. We don’t know yet.

COWEN: You think, “Oh, this was really important.” Vernor Vinge or . . .

HOFFMAN: Well, this is going to seem maybe like a strange answer to you, but I’ve been rereading David Brin’s Uplift series very carefully because the theory of, “How should we create other kinds of intelligences, and what should that theory be, and what should be our shepherding and governance function and symbiosis?” is a question that we have to think about over time. He went straight at this in a biological sense, but it’s the same thing, just a different substrate with the Uplift series. I’ve recently reread the entire Uplift series.

On talking to dolphins

COWEN: When you can talk to a dolphin, what will you want to ask it?

HOFFMAN: One of the things I love is these words that are in some languages and not others. Whether it’s Komorebi or Ubuntu, all these different things because it’s these different lenses of human experiences. It would almost be like, “What are the words in dolphin that aren’t in our language? Can you try, through an ocean darkly, try to share what it is, that concept that you’re gesturing at, to learn it?” That would be the question I would most be interested in the answer.

By the way, I’m funding a thing called the Earth Species Project, which is an early effort to try to get at this.

COWEN: Which will be the easiest animal for us to learn how to talk to, in essence? Will it be dolphins? Chimpanzees?

HOFFMAN: Chimps.

COWEN: Chimps.

HOFFMAN: We share not just a bunch of biology, but a world that we’re navigating.

COWEN: But we sort of talk to them already, gorillas. But dolphins: “Rrrrrr, what are you saying?” You could actually tape the dolphins, apply an LLM to it, right? That should work.

HOFFMAN: Well, that’s what Earth Species Project is working on.

COWEN: What do you think that costs?

HOFFMAN: We don’t know. We’re trying to get the taping, and we’re trying to see.

On the social effects of AI

COWEN: What have you learned about friendship from working with LLMs?

HOFFMAN: I would say I haven’t learned anything particular about friendship yet, although the way that I got to Impromptu was — as you know, I’ve been working for decades on one or more books about friendship. I started using GPT-4 as a personal assistant, a research assistant on this, which is, I think, one good thing that everyone should use these things on, an in-depth of doing it. I started asking it questions that I’ve always been wanting to do research on, like how would you compare and contrast a Chinese conception of friendship with a Western conception of friendship?

That question wasn’t very good, but the question on Mencius and “Give me some understanding of Mencius or Laozi and their applications of theory of friendship” was more interesting. It’s the prompt directing. I actually prefer directing versus engineering as a thing, but the prompt directing is as getting good research assistants.

COWEN: What have you most learned about yourself working with LLMs?

HOFFMAN: Well, I think this is one of the things we always learn. For example, five, ten years ago, we were beating the drum on the Turing Test, and now we’ve sailed past the Turing Test, and almost no one’s really talked about it. We learn, “Oh, actually, in fact, what was unique is not the Turing Test. It’s these other things.”

What I would say is — and I’m interested in creating Pi in Inflection, among others — but I’m interested in creating AIs that ask good questions. I’d say currently, anybody who’s good at asking questions is much better than GPT-4. GPT-4’s generation of questions is not that good. I suspect you tried to generate questions.

COWEN: No, I didn’t. Absolutely did not. But for most guests, I do.

HOFFMAN: But the GPT-4 suggestions are kind of vanilla. They’re just not that interesting. It’s like, “Ask Tyler about economics and what’s going to happen in macroeconomics in the next decade.” Eh, not an interesting question. The Wittgenstein question — that’s an interesting question. I don’t think there’s anything structurally it doesn’t, but I tried to get it to generate a whole bunch of questions — complete failure.

COWEN: I think you get better questions from it if you don’t ask it, “What should I ask Tyler? What should I ask Reid?” If you come up with, “What’s the weirdest question you can imagine concerning both science fiction novels and LLMs?” I think you’ll get a better question.

HOFFMAN: Well, we’ll try it. My guess is it still won’t be as interesting as the question you or I could generate in a minute or two on the same prompt.

COWEN: How will human aspiration change due to LLMs?

HOFFMAN: Hopefully, get greatly amplified. Our aspirations should be very ambitious, and I think LLMs and AI should, if anything, increase them.

COWEN: One thing I’ve learned is, I never get sick of watching the magic. At first, I thought, “Well, for how long will I still get kicks from this?” But it’s still running. Hasn’t asymptoted for me.

HOFFMAN: Yes, exactly.

COWEN: What will happen to social trust as a result of LLMs? Go up, go down? How will it change?

HOFFMAN: Well, unfortunately, probably initially it’ll go down in everything from deep fakes and a bunch of uncertainty, because humans trusting humans is another issue that we have. I’m hopeful that maybe we can begin to figure out some ways to have shared discourse, shared discovery of truth, and I would love to have LLM work helping and amplifying that. That’s part of what I’m doing at Stanford with human-centered AI and other places because it’s really important to solve.

COWEN: Thinking globally, which group or groups in the world will be the biggest gainers?

HOFFMAN: Well, I think access and use of AI stuff will be amplifying, and so, therefore, people who are using it will be gaining. The access to it and the amplification, I think, will really matter.

COWEN: Say I gained from it, but I’m doing fine. I just can’t gain that much, no matter how good it is. My theory is people, say, in Kenya, where there’s a lot of internet access that’s good enough — they’ll have some cheaper open-source model. The young Kenyans who are very smart and ambitious will gain enormous amounts, and the AI itself will send to a trusted intermediary information about their ability. They will, in fact, get phenomenal job offers from other places, and they will gain the most. Now, that might be wrong, but that would be my answer.

HOFFMAN: I think that’s true, although I think that’s because the more that we have a good global connectivity, the more we have a rise of talent from everywhere. AI, added to that connectivity, will exactly amplify that. I do think that the notion of human application — the people who are best amplified or best connected into our global ecosystem — and I think we all benefit from it. One of the things that you and I share about the joy of amplifying talent from everywhere is that, in fact, amplifying talent benefits all of us.

COWEN: Are the mediocre wordcels the biggest losers? Will Marc Andreessen go away happy, so to speak? [laughs]

HOFFMAN: Funny. I’d say the losers are people who are uncurious, who want to live in the past, who don’t care about learning the future at a broad base. We have a term for this: Luddites. Steve Jobs said that computers are the bicycles of the mind. We now have, with AI, the steam engines of the mind.

COWEN: Should a co-authored book with an LLM have First Amendment protections? Again, you have such a book: Impromptu.

HOFFMAN: I’d say the LLM shouldn’t have First Amendment, but I think co-authors — I can own the First Amendment protection. It’s what I say.

COWEN: But it can always hire a co-author for some nominal sum, where the co-author adds a few words. It’s a co-authored output. Once you allow the co-authored work through the door, anything can be co-authored, and who knows who did how much of the work. You’re granting First Amendment rights to LLMs, which maybe I’m fine with, but is that an implication?

HOFFMAN: Well, I don’t think you have to grant the rights to them. You have to have a person who is saying, “This is me. I own this.”

COWEN: Oh, but there’ll be a company that hires such people known for their obedience to go along with what the LLM wants. They’ll pay the person a quarter; the person will add three words to the thing.

HOFFMAN: Look, can you today buy someone’s First Amendment right of free speech? Yes, because you can pay them and give them the thing to say. That’s just the thing. That doesn’t necessarily mean the LLMs themselves have those rights.

On what LinkedIn can teach us about LLMs

COWEN: Your background with LinkedIn — which features of LLMs do you feel that’s given you a better or deeper appreciation for?

HOFFMAN: With LinkedIn?

COWEN: Well, you’re bringing a different conceptual matrix to everything, including LLMs. You’ve done LinkedIn for quite a while. Obviously, a key role in its creation. How does that make you see LLMs differently? I have my own hypothesis, but I want to hear yours.

HOFFMAN: One of the things that I did when I was doing this is, we’ve kicked off a product — which is, I believe, live now at LinkedIn — called Bizpedia, which is trying to provide an in-depth Wikipedia for all of the information that professionals might need. Anything like, what are the different career paths? What are the job skills? How would I do this particular job better? How do I learn it if I wanted to transition and get into it? It’s, again, that human amplification.

We couldn’t afford to do all that stuff, but we could get the LLMs to generate the baseline of it. Then we can use the human network to amplify it, and that was at least one kind of thing that I thought about with it. It obviously also has real implications in search and matching, like hey, which people should meet each other? Or I’m looking for someone to solve this particular business problem. It could be hiring, could be sales, could be partnering, could be information. Obviously, that all gets amplified.

COWEN: My answer would be this. There were uses of LinkedIn that might appear anodyne to a lot of snobby outside observers but are super useful to people who do them, and I think LLMs will be the same. People in poorer countries — they want it to write a business plan for them. The business plan will sound too McKinsey-like to please a lot of people who think they’re better than that, but in fact, it will be super useful.

HOFFMAN: Ah, yes, I think that’s true. And again, in the human amplification, I think it’s like, “Oh, look, it’ll write the business plan. I don’t need it to.” But you adding to it will make it a lot better.

COWEN: Yes, but I think also your LinkedIn background — it makes you more sympathetic to a partial subscription model, which maybe is the future for LLMs.

HOFFMAN: It’s definitely a future, for sure, and what percentage? I don’t know. Could be 20, could be 80.

COWEN: Do you think subscription is the economic future of LLMs for the next 10 years?

HOFFMAN: I think it’s definitely a future, but by the way, LLMs, as has already been announced, will be used to generate advertising.

COWEN: You’re allowed to use hindsight here, but as a talent scout yourself, how do you think of the strengths of Sam Altman in doing what he’s done?

HOFFMAN: I think this is an amazing gift to the world by Sam and the entire team. Sam, I think, assembles great people and helps them with high ambition. That’s one of the things that is under-described about Sam. I think that he also doesn’t try to take for himself the hero role. He catalyzes other people.

That’s one of the reasons I think he is also one of the good people to be leading the safety thing because, unlike a set of people who tend to have Messiah complexes — “It’s only safe if I bring it to you” — he goes and gets a number of people involved in doing it. I think that’s another strength.

And I think his ability to think super big has been helpful here. He frequently thinks something is going to be here tomorrow, where I disagree with him. I don’t think it’s going to be here even — he’s younger than I am — even in his lifetime, but that ambition is awesome.

COWEN: OpenAI — right now, I think they have about 375 employees. During the critical breakthrough period, of course, they had even fewer. Is that a new model of some kind? Or is it the old model, but it’s the alliance with Microsoft that makes everything work? Midjourney, I’ve heard, is 11 or 12 employees, which is crazy, right?

HOFFMAN: Yes. Look, Instagram, when Greylock funded it, was 13 employees. It is an amplification of the general software model, where you can have very small teams that produce things that are Archimedean levers that move the world. Now, you do need, in all of those cases, massive compute infrastructure, like AWS existed for Instagram, and so forth. You need that in order to make it happen, but a small team of software people can create amazing things.

COWEN: How is higher education going to change? And exactly who or what will do it?

HOFFMAN: As you know, higher education is very resistant to change.

COWEN: It actually is, believe it or not.


HOFFMAN: Yes, and yet, it should be changing. It should be reconceptualizing its way that it amplifies young people and launches them into the world. It should be providing LLMs that are tutors and helpful. It should be having LLMs that are helping professors do research and communicate with each other, like AI, and doing all this stuff. It should be embracing all of that with full force. And yet, most of it is, I think, ignoring what’s currently happening.

COWEN: Sure, but what actually breaks in the system because of that? Who rebels?

HOFFMAN: Well, it’s easy to read the tea leaves of the future in the past. Michael Crow at ASU doing amazing work. I think he will trailblaze. Ben Nelson at Minerva. We had him on our Possible podcast. I think these folks will eventually get other people to say, “This is where the world’s going, and it’s really good.”

COWEN: So students will switch to the institutions that are doing a better job, and you think that the network effects are not too strong to stop that?


COWEN: Here’s a general question, quite removed from the world of AI. I’ve discussed this with Patrick Collison a fair amount. It seems to me that after World War II, most of the Western world, maybe all of it — we simply stopped building beautiful neighborhoods. There are plenty of beautiful individual buildings, artworks, music, whatever, but actual complete neighborhoods as a whole — they’re now basically boring and mediocre, even if they’re very pleasant to live in. Why did that change? You can challenge the premise if you want.

HOFFMAN: I don’t know. If I have to speculate, it’s because it’s the general kind of industrialization that makes it, “Hey, figure out what is the thing that is closest to what most people want and produce a lot more of that.” Maybe it’s that.

COWEN: Medieval towns in Europe — they’re beautiful. There’s a certain sameness to them, but we admire the beauty all the more. It doesn’t seem that it’s sameness per se that’s lowering the aesthetic quality.

HOFFMAN: Yes. It could be production costs, and that’s part of the industrialization. Like now, how do we produce each one at a lower marginal cost? I would hope that what we will see . . .

For example, I was literally talking to someone last night who was creating a speakeasy for their house. What they did to work with their designer is, they went onto Midjourney, and they created a whole bunch of different images. The range of creativity — I hope that is what our future is, and that’s what I’m trying to beat the drum on to get us there.

COWEN: What is it about our current culture in America — putting aside politics, but culture — that concerns you most?

HOFFMAN: Culture. I would say it obviously ties to politics a little bit, but I think a culture that says we should have civil discourse to get to reasoned arguments and information — which obviously includes science — about what should be is the thing that is like what kicked us off from the Enlightenment and from the Renaissance. It’s important to keep that in our fundamental bones and genetics, and we are straying from it in very, very dangerous ways.

It’s not just on the crazy right, stuff with election denialism and all the rest. You obviously see that in wokeism and everything else, too. I think the two sides of this — both left and right — would be surprised for me to say, “In this respect, you both have the same disease.” We need to be talking about how do we reason our way to truth and understanding, and that’s super important.

COWEN: It seems that a lot of mental health indicators have become worse in this country, maybe all the more so for young people. Why is that?

HOFFMAN: I don’t fully know. I do think that we’ve certainly seen the indicators get worse. Is it because kids are always connected to a little bit more Lord of the Flies, and they’re always connected to other kids? Is it because they have the insecurities of being amplified, like cyberbullying following you into the home? Is it because the technology is not built the right way to try to reinforce mental health? I think we can do that.

Part of the thing is, how do we help provide support? You can use AI to help provide support on this. I think it’s a good thing to do. Whatever it is, it’s an important thing for us to work on.

COWEN: Now, we’re sitting here in the suburbs in Menlo Park, but will AI save the San Francisco tech scene? Or is that just going to vanish because of poor governance?

HOFFMAN: [laughs] I think in many ways, San Francisco is doing everything it can to self-immolate on the tech scene.

COWEN: But there are some major triumphs as of late, right? They’re in the city itself. They’re not on Sand Hill Road.

HOFFMAN: Yes, but it’s throughout the entire Valley. Yes, OpenAI is amazing. I do think that there are network effects throughout all of Silicon Valley. My advice to San Francisco, just as my advice to many other things, is try to channel the stuff that’s going on here to help all the rest.

For example, don’t try to resist the tech industry being in San Francisco. Try to channel it to helping with the various problems, whether it’s homelessness or crime or other kinds of things, and try to help those problems. For example, you could use cameras to help with a whole bunch of the crime problems.

COWEN: Ten or fifteen years ago, it seems we had so many tech CEOs — either in their 20s or possibly even teenagers — seeing considerable success. It doesn’t seem we have people in that age range anymore. Like Sam Altman — he’s, I think, 38, maybe 37. Why are CEOs older now, the more important ones? What’s changed?

HOFFMAN: I think we will see some new additional young folks, and look, the history of the current status quo is, CEOs tend to be older. I think it’s the younger CEOs that tend to be the new struggling companies. Remember, there’s not just Sam Altman. There’s also Patrick Collison, there’s also Brian Chesky, there are also those sorts of folks. They were CEOs when they were younger, and there will be a new crop, there. I am confident there will be a new crop of them before too long.

COWEN: But what if it’s the case, there’s less low-hanging fruit, the abilities you need are more synthetic, social networks are more important? This will favor the 35-year-olds rather than the 19-year-olds. Could that possibly be true?

HOFFMAN: It’s possibly true. There are definitely industrial cycles where you have to spend more time building up your position to get the capital credit to be entrepreneurial, bold, in charge, et cetera. There’ve definitely been cycles of that in history. I don’t think it’s impossible, but I do think it’s a little bit like you were gesturing, with small groups doing stuff with software. Because you can have small groups doing stuff with software, you’ll still have young CEOs, young founders. She or he will still be new blazing entrants into the world change leaders.

COWEN: The Bay Area as a whole — do you think that will remain as important as it’s been?

HOFFMAN: Categorically, yes.

COWEN: If there’s a tech start-up scene that is currently underrated — in the world or in the US — where would that be?

HOFFMAN: I’ll say something mildly provocative just because it’s entertaining — not Miami, since there’s this whole crew that’s like, “Miami is the future.” I think network effects of talent and everything else is much more here and other places. I think Austin is doing really interesting things. I think New York is doing interesting things. I think London’s doing interesting things. Surprisingly, I think there are interesting things in Paris and Berlin.

COWEN: Sweden — yes or no?

HOFFMAN: Sweden, yes. Obviously, Spotify and a bunch of other stuff. They punch way above their population weight, but since they’re a small population, they don’t tend to have a lot of immigration. I tend to think you need those to really get the flywheels going.

COWEN: Any hope for Poland plus Ukraine? Or you don’t see it yet?

HOFFMAN: I hope for it. I don’t see it yet. Obviously, there are other difficulties that are impeding right now.

COWEN: But if, say, it’s centered in Poland. People from Russia and Ukraine — they go to Poland. Poland becomes a new center with talent, basically, from three nations where —

HOFFMAN: Yes, totally possible.

COWEN: How much do you worry about low and declining fertility as a social problem for the West, for East Asia?

HOFFMAN: One thing that I thought about writing an essay on — maybe I’ll still do it — is, it isn’t, “Oh, God, the robots are coming for our jobs.” It’s, “Oh, God, can the robots get here soon enough?” Our whole system has been based upon the fact that we have a growing population so that the growing population can take care of the elderly. If you don’t have that, you have a serious reorientation of our entire society. China is going to run into that in a huge way, and so forth.

Japan’s probably trailblazing. You see a little bit of that with the care and robots and everything else. I think that we desperately need the amplification in order to not create a massive burden for our children if that trend continues.

COWEN: Let’s say we can afford it because of something like robots or AI. Doesn’t that, in a sense, make the problem worse? We feel less of an emergency. South Korea — they’re at 0.8. Just keep the clock on ticking. Eventually, they basically don’t have people left. How can that work out well for us? And the fact that someone pays the bill for our collective extinguishing of the human condition doesn’t reassure me.

HOFFMAN: Obviously, you can do the math and go to diminishing zero. I think we’ll do various forms of incentive stimulus. I think we can get it back to at least a replacement rate. Among other things, we might say, “Look, actually, being a parent is a paid job.” Just because we think that that’s an important thing as a society, and we can afford that from the productivity increases we’re getting from AI and robotics.

COWEN: So, we use the robot surplus, in essence, to pay families for that to be the second or third job in the family.

HOFFMAN: Yes, exactly.

COWEN: Politically, you think that will be super popular, people hate it, or — ?

HOFFMAN: I think we could get to a place where it would be popular. Right now, it’d be considered science fiction and strange. But if our replacement rate keeps going down, then I think people will say, “Oh, no, that makes sense.”

COWEN: A lot of science fiction has come through.

HOFFMAN: Yes. This was the reason that you and I both love science fiction and trade recommendations on a regular basis.

COWEN: Asimov’s Three Laws — how good were they?

HOFFMAN: I think they were really good, although they were out of conceptualization for a target. If I were to update them, and it’s a little bit like — to reveal my nerdishness — Giskard’s zeroth law. I think that what you really want in it is to parallel almost a Buddhist sense of the importance of life and sentience. That’s the kind of thing that you want if you’re creating really autonomous intelligences. I think the kind of Uncle Tom, if it really is a totally autonomous being — hence being careful about going into it — a new form of robot slaves is perhaps not ultimately where humanity would want to be.

COWEN: There’s not enough stress in them, I think, in what the robots are obliged to believe. A robot is free to believe something crazy and then act on it. That seems to me the biggest weakness of the law, at least what you see in the stories.

HOFFMAN: Hence the alignment with human interests around, how do you amplify the quality and value of life, is, I think, a very good thing.

COWEN: What’s an underrated science fiction novel that maybe our listeners, readers don’t know about?

HOFFMAN: There are lots. Another one that I’ve been rereading recently — because I think it’s good fun but also raises good questions in a simple, fun format — is Martha Wells’s Murderbot series. I think it, in itself, does not really address those questions very directly, but it raises them in good ways, like persons and being a thing versus a person and so forth, within a kind of a classic sci-fi romp.

COWEN: I’ve been rereading Ursula Le Guin, The Dispossessed. I’m amazed how anti-utopian and almost right-wing it is. Utopian society is a kind of nightmare.

HOFFMAN: Yes. It’s partially because we need to have diversity in the human species. It’s part of how do we enable as much diversity while not allowing . . . It’s that the diverse creative expression part of freedom of speech, and why it is valuable, is that diversity of craziness that also creates genius.

COWEN: What’s a game you’ve been playing more of late, and why?

HOFFMAN: I haven’t really had time to play games because the AI stuff is occupying a total amount of time. I have a stack of games, without their shrink wrap taken off, that I’m hoping to get to.

COWEN: I find the AI stuff — it’s totally wrecked my calendar. I had a year planned out that I could just do a whole bunch of other things. Now, every day you have to keep up with AI, you have to learn. It’s like, “This doesn’t work anymore.” Throw up my hands, and I feel a bit behind on everything.

HOFFMAN: Yes, although by the way, there will be a chatbot for that.

COWEN: [laughs] That’s good. What’s a nonobvious problem we should be worrying about more?

HOFFMAN: I think because so much of the discourse in the press is around the macro things. Is AI in the hands of bad human actors? There are a range of bad human actors, so I think that’s really important.

I think also the question around . . . People tend to go, “Oh, wait a minute, the people who have the AI will be amplified.” The most natural thing is to pursue where the money is. How do we get AI in the hands of lower-income students and school districts and all the rest to make sure that it’s there and provisioned? It’s one of the things that I love about OpenAI, the accessibility of ChatGPT. But how do we get as broadly enabled as we can is, I think, another important one.

COWEN: Let’s say you’re advising a small but tech-advanced nation. Singapore, Israel would be two options. Would you tell them they should build their own LLMs? It will cost them a lot per capita, but they’ll have their own LLMs.

HOFFMAN: I don’t think they need to, but I think they should get involved and perhaps work with the providers of LLMs to make sure that there are LLMs that fit their needs. That doesn’t necessarily need to be that they build their own. They say, “Hey, we need to make sure that we have LLM provisioning for our companies and our industry and our citizens. Okay, let’s make sure that happens, whether it’s, we spend billions of dollars to build one ourselves.” They could do that. Certainly nothing bad in doing that, but they should make sure that their industries and their citizens are provisioned.

COWEN: We have a strategic petroleum reserve, for better or worse. Should Israel have a strategic GPU reserve? Don’t nations such as the US get too much leverage over Israel if they’re dependent on us for models? Right now, OpenAI’s open, but OpenAI can’t control how our government might regulate it. Our government might decide to use it as a foreign policy tool. Hand it out to countries that cooperate, deny it to countries that don’t.

HOFFMAN: I think it’s an important gesture, the dependency. But by the way, once you have the LLM, as it were, on the soil, governed by your laws, the ability of the US to do that is much less. That’s the reason why some people will say, “We need to build the computers.” I do think depth of compute is a strategic advantage. It’s an important thing to take a look at, and you may want to say, “Hey, we want to make sure there’s a certain amount of compute that’s onshore, that is then aligned with the interests of our country, our society, our industries, et cetera.”

Also, by the way, there’s training and there’s running. If you have the AI models and you’re running them, and you have that sufficiently within your society, your strategic dependency would be a lot less. So, I think you have to plot that strategy with some care. I do think it’s an important strategy to be paying attention to. I think, for example, we as the US — part of the thing I like about the world order of the US is, we sometimes do stuff that throws too much stuff too much to our advantage. That’s a problem, but we also try to provision a lot, like we try to raise the rest of the world, and I think we should continue to do that.

COWEN: As you know, in EU law, there’s a right to be forgotten, but that is arguably inconsistent with current LLMs. You can force a new training run by saying, “You’ve got to take me out of the current system.” But a new training run costs a lot of money. To have lone individuals raising their hands and saying, “Oh, the model has to forget me.” That’s just not going to work.


COWEN: So, legally, where do you think the EU will end up on all this?

HOFFMAN: There’s smart EU, dumb EU, and which one is up to them. Smart EU is to say, “What we need to do is, we need to be dealing with the function of what is the culture in society.” So, we say, “We want to make sure that these AI tools have the right judiciousness in being asked about individuals. That’s our particular culture.” So, we say, “Okay, you have to at least have a MetaBot that could interrupt the query in some way.” That’s what that would be if that was our expression of culture and being tech-forward in how you do it.

The bad one would be to say, no, no — a little bit like what the Italians are doing with ChatGPT. “Well, you can’t do ChatGPT.” You’re disadvantaging your entire society and all of your citizens. You’re being Luddites with the loom and the steam engine. Be innovative into the future — yes, with European values and European concerns and so forth — but it’s the steering into the future versus trying to enshrine the past. That would be the less smart EU.

COWEN: You’re not sure which of those will happen.

HOFFMAN: I hope they pick the smart one. I try as much as I can. I think European values and insights in the future are something I learn from and value. I want them to contribute that positively into the future.

COWEN: Will ChatGPT through VPNs just dominate China, at least for some number of years? Or will they somehow force people away from doing that? Because you’re getting Western Anglosphere information all the time, right? Including about Tiananmen, China, everything else.

HOFFMAN: They have demonstrated with what they call the Golden Shield that they are committed to creating an alternative internet and an alternative series. I think they will be able to do that, and they even have it on the control through VPNs. We, as Westerners, go to China, and we say, “Oh, it works just fine with VPNs.” That’s because they’re allowing our VPNs through, whereas local VPNs — they squash them.

COWEN: But I’ve seen some numbers. Maybe they’re not reliable, but they seem to indicate there are more ChatGPT users in China than in the US. Now, they have a larger population, but still, that’s a major effect that probably is happening now. Do they just tolerate that and let everyone query it about “Taiwan is a country” or whatever?

HOFFMAN: In my view, they should, but I don’t think they will.

COWEN: It should, but what will they do?

HOFFMAN: What’s more is, I think because these models have less ability to put controls in them, I think that will cause problems, even in onshore development.

COWEN: And there’ll be open source in China, right?


COWEN: Once that’s more of a thing. So, they’ll just lose their attempt to censor their own society? Or you think they’ll somehow triumph over everything?

HOFFMAN: They’re very smart, and they’re very committed to the censorship, so I think it’ll create additional problems for them in so doing, but I think they’ll figure out how to do it.

COWEN: Before my last question, just to repeat, Reid’s new book, co-authored with GPT-4, is Impromptu: Amplifying Our Humanity through AI, a Wall Street Journal bestseller. Finally, last question, Reid: what will you do next, other than talk to dolphins?

HOFFMAN: [laughs] AI is going so fast. There’s a bunch of things that we didn’t cover in Impromptu, so I actually think we will do another book and set a content around AI, possibly within this calendar year, which will be pretty amazing.

COWEN: Reid Hoffman, thank you very much.

HOFFMAN: Thank you.

Reid’s podcast Possible is back this summer with a three-part miniseries called “AI and The Personal,” which launched on June 21st. Featured guests use AI, hardware, software and their own creativity to better people’s daily lives. Subscribe to get the series.

Photo credit: David Yellen