Sam Altman makes his second appearance on the show to discuss how he's managing OpenAI's explosive growth, what he's learned about hiring hardware people, what makes roon special, how far they are from an AI-driven replacement for Slack, what GPT-6 might enable for scientific research, when we'll see entire divisions of companies run mostly by AI, what he looks for in hires to gauge their AI-resistance, how OpenAI is thinking about commerce, whether GPT-6 will write great poetry, why energy is the binding constraint on chip-building and where it'll come from, his updated plan for how he'd revitalize St. Louis, why he's not worried about teaching normies to use AI, what will happen to the price of healthcare and housing, his evolving views on freedom of expression, why accidental AI persuasion worries him more than intentional takeover, the question he posed to the Dalai Lama about superintelligence, and more.
Recorded live at the Progress Conference, hosted by the Roots of Progress Institute, October 17th, 2025. Special thanks to Big Think for the video production.
TYLER COWEN: Hello, Sam. Happy to do this with you.
SAM ALTMAN: Excited to do it again.
COWEN: Now, the last two months or so, there have been so many deals involving OpenAI, and I'm not even talking about the globetrotting. A lot of them are local, or there are new product features such as Pulse. Presumably, you were productive to begin with. How is it that you managed to up your productivity to get all this done?
ALTMAN: I don’t think there’s one single takeaway other than I think people almost never allocate their time as well as they think they do. As you have more demands and more opportunities, you find ways to continue to be more efficient. We’ve been able to hire and promote great people, and I delegate a lot to them and get them to take stuff on. That is the only sustainable way I know how to do it.
As what we need to do comes into focus — and there's, as you mentioned, a lot of infrastructure that needs to be built out right now — I do try to make sure I understand what the core thing for us to do is. It has, in some sense, simplified, and it's very clear what we need to do. That's been helpful. I don't really know. I guess another thing that's happened is more of the world wants to work with us, so deals are quicker to negotiate.
COWEN: You’re doing much more with hardware or matters that are hardware-adjacent. How is hiring or delegating to a good hardware person different from hiring good AI people, which is what you started off doing?
ALTMAN: You mean consumer devices or chips?
COWEN: Both.
ALTMAN: One thing that's different is that cycle times are much longer. The capital is more intense. The cost of a screw-up is higher, so I like to spend more time getting to know the people before saying, "Okay, you're just going to do this, and I'll trust that it'll work out." Otherwise, the theory is the same. You try to find good, effective, fast-moving people, get clear on what the goal is, and just let them go do it.
COWEN: I visited Nvidia earlier this year. They were great. They were great to me. They’re super smart, but it just felt so different from walking the floor of OpenAI. I’m sure you’ve been to Nvidia. People read Twitter less. At least on the surface, they’re less weird. What is that intangible difference in the hardware world that one has to grasp to do well in it?
ALTMAN: Look, I don’t know if this is going to turn out to be a good sign or a bad sign, but our chip team feels more like the OpenAI research team than a chip company. I think it might work out phenomenally well.
COWEN: You’re extending the model —
ALTMAN: We are.
COWEN: — of your previous hires to hardware.
ALTMAN: With some risk, but we are.
COWEN: There’s this fellow on Twitter. His name is roon. He’s become quite well known. What is it that makes roon special to you?
ALTMAN: He's a very lateral thinker. You can start down one path with him, and he'll jump somewhere completely different but stay on the same trajectory, in some very different context. That's unusual. He's great at phrasing observations in an interesting, useful way, and that makes him fun and quite useful to talk to. I don't know. He brings together a lot of skills that don't often exist in one person's head.
COWEN: How does that shape what you have him work on? You see this about him, and then you think, “Ah, roon should do X.”
ALTMAN: I very rarely get to have anybody work on anything. One thing about researchers is they’re going to work on what they’re going to work on, and that’s that.
COWEN: Someone put an essay online, and it said, in all the time they worked at OpenAI, they hardly ever sent or received an email, that so many things were done over Slack. Why is that? What’s your model of why email is bad and Slack is good for OpenAI?
ALTMAN: I'll agree email is bad. I don't know if Slack is good. I suspect it's not. I think email is very bad. The threshold to make something better than email is not high, and I think Slack is better than email. We have a lot of things going on at the same time, as you observed, and we have to do things extremely quickly. It's definitely a very fast-moving organization. There are positives about Slack, but I also dread the first hour of the morning and the last hour before I go to bed, where I'm just dealing with this explosion of Slack, and I think it does create a lot of fake work.
I suspect there is something new to build that is going to replace a lot of the current office productivity suite: docs, slides, email, Slack, whatever. It will be the AI-driven version of all of these things. Not where you tack on the horrible feature where you accidentally click the wrong place and it tries to write a whole document for you or summarize some thread, but the actual version where you trust your AI agent and my AI agent to work most stuff out and escalate to us when necessary. I think there is probably, finally, a good solution within reach for someone to make.
COWEN: How far are you from having that internally? Maybe not a product for the whole world, that it’s in every way tested, but that you would use it within OpenAI?
ALTMAN: Very far, but I suspect just because we haven’t made any effort to try, not because the models are that far away.
COWEN: Since talent, time, human capital is so valuable within your company, why shouldn’t that be a priority?
ALTMAN: Probably we should do it, but people get stuck in their own ways of doing things, and a lot of stuff is going very well right now, so there's a lot of activation energy required for a big new thing.
COWEN: What is it about GPT-6 that makes it special to you?
ALTMAN: If GPT-3 was the first moment where you saw a glimmer of something that felt like the spiritual Turing test getting passed, GPT-5 is the first moment where you see a glimmer of AI doing new science. It's very tiny things, but here and there someone's posting like, "Oh, it figured this thing out," or "Oh, it came up with this new idea," or "Oh, it was a useful collaborator on this paper." There is a chance that GPT-6 will be, for science, the kind of leap that GPT-3 to 4 was for Turing test-like stuff, where 5 has these tiny glimmers and 6 can really do it.
COWEN: Let’s say I run a science lab, and I know GPT-6 is coming. What should I be doing now to prepare for that?
ALTMAN: It’s always a very hard question. Even if you know this thing is coming, if you adapt your —
COWEN: Let’s say I even had it now, right? What exactly would I do the next morning?
ALTMAN: I guess the first thing you would do is just type in the current research questions you’re struggling with, and maybe it’ll say, “Here’s an idea,” or “Run this experiment,” or “Go do this other thing.”
COWEN: If I’m thinking about restructuring an entire organization to have GPT-6 or 7 or whatever at the center of it, what is it I should be doing organizationally, rather than just having all my top people use it as add-ons to their current stock of knowledge?
ALTMAN: I’ve thought about this more for the context of companies than scientists, just because I understand that better. I think it’s a very important question. Right now, I have met some orgs that are really saying, “Okay, we’re going to adopt AI and let AI do this.” I’m very interested in this, because shame on me if OpenAI is not the first big company run by an AI CEO, right?
COWEN: Just parts of it. Not the whole thing.
ALTMAN: No, the whole thing.
COWEN: That’s very ambitious. Just the finance department, whatever.
ALTMAN: Well, but eventually it should get to the whole thing, right? So we can use this and then try to work backwards from that. I find this a very interesting thought experiment of what would have to happen for an AI CEO to be able to do a much better job of running OpenAI than me, which clearly will happen someday. How can we accelerate that? What’s in the way of that? I have found that to be a super useful thought experiment for how we design our org over time and what the other pieces and roadblocks will be. I assume someone running a science lab should try to think the same way, and they’ll come to different conclusions.
COWEN: How far off do you think it is that just, say, one division of OpenAI is 85 percent run by AIs?
ALTMAN: Any single division?
COWEN: Not a tiny, insignificant division, mostly run by the AIs.
ALTMAN: Some small single-digit number of years, not very far. When do you think I can be like, “Okay, Mr. AI CEO, you take over”?
COWEN: CEO is tricky because the public role of a CEO, as you know, becomes more and more important.
ALTMAN: Let’s say if I can pretend to be a politician, which is not my natural strength, an AI can do it, too. Let’s say I stay involved for the public-facing whatever. Just actually making the good decisions, figuring out what to do.
COWEN: I think you'll have billion-dollar companies run by two or three people with AIs in, I don't know, two and a half years. I used to think one year, but maybe I've put it off a bit. I'm not more pessimistic about the AI. Maybe I'm more pessimistic about the humans. What's your forecast?
ALTMAN: I agree on all of those counts. I think the AI can do it sooner than that. I think this is a good thing for society and a good thing for the future, not a bad one. People have a great deal more trust in other people than in an AI, even if they shouldn't, even if that's irrational. The AI doctor is better, but you want the human, whatever. I think it may take much longer for society to get really comfortable with this and for people in the organization to get really comfortable with this. On the actual decision-making for most things, maybe the AI is pretty good pretty soon.
COWEN: You’re hiring a lot of smart people. Do you ask yourself, “What are the markers of how AI-resistant this very smart person will be?” Do you have an implicit mental test for that, or you just hire smart people and hope it’s all going to work out later?
ALTMAN: No, I do ask questions about that.
COWEN: People will just lie, right? They know they’re talking to OpenAI. What do you actually look for in them?
ALTMAN: A big one is how they use AI today. The people who are still like, "Oh, yes, I use it as a better Google search and nothing else," that's not necessarily a disqualifier, but it's a yellow flag. People who are seriously considering what their day-to-day is going to look like in three years, that's a green flag. A lot of people aren't. They're like, "Oh, yes, probably it's going to be really smart."
COWEN: Do you think scientific labs might get GPT-6 this year?
ALTMAN: Not this year.
COWEN: Not this year. Here's a very difficult question. As you know, both you and I were fans of nuclear power, but we also know the insurance for nuclear power plants is provided by the government. The plants might be quite safe, but people worry. They're Nervous Nellies. There are a lot of parties involved. The federal government does the insurance. Do you worry that the future holds the same for AI companies, that the Feds are your insurer, and how do you plan for that? Again, even if AI is pretty safe, as with nuclear power, people are Nervous Nellies. How will you insure everything?
ALTMAN: At some level, when something gets sufficiently huge, whether or not it's the insurer on paper, the federal government is kind of the insurer of last resort, as we've seen in various financial crises and with insurance companies screwing things up. Given the magnitude of what I expect AI's economic impact to look like, I do think the government ends up as the insurer of last resort, but I think I mean that in a different way than you mean it, and I don't expect them to actually be writing the policies in the way that maybe they do for nuclear.
COWEN: There’s a big difference between the government being the insurer of last resort and the insurer of first resort. Last resort’s inevitable, but I’m worried they’ll become the insurer of first resort, and that I don’t want.
ALTMAN: I don’t want that either. I don’t think that’s what will happen.
COWEN: What we’re seeing with Intel, lithium, rare earths is, the government is becoming an equity holder. Again, not of last resort, sort of second or third resort. I don’t mean this as a comment about the Trump administration. I think this is something we might be seeing in any case or see in the future after Trump is gone. How do you plan for OpenAI, knowing that’s now a thing on the table in the American economy?
ALTMAN: Look, there are all these ways you can imagine that going. I put almost no probability mass onto the world where no one has any meaning in the post-AGI world because the AI is doing everything. I think we're really great at finding new things to do, new games to play, new ways to be useful to each other, to compete, to get fulfilled, whatever, but I do put a significant probability on the social contract having to change significantly.
I don’t know what that will look like. Can I see the government getting more involved there and thus having some strong opinions about AI companies? I can totally see that. We don’t live our lives that way. We just try to work with capitalism as it currently exists. I believe that that should be done by the companies and not the government, although we’ll partner with the government and try to be a good collaborator. I don’t want them writing our insurance policies.
COWEN: Now, I did a trip through France and Spain this summer with my wife. Every hotel we booked, other than the first one, well, we found it through GPT-5. We didn’t book it through GPT. Almost every meal we ate, right? You didn’t get a dime for this. I’m telling my wife, “Well, this just seems wrong,” right? What is the new world going to look like soon enough? How will that work?
ALTMAN: To zoom out even before the answer: one of the unusual things we noticed a while ago, back when hallucination was a worse problem, is that ChatGPT would consistently be reported as a user's most trusted technology product from a big tech company. We don't really think of ourselves as a big tech company, but I guess we are now.
That’s very odd on the surface because AI is the thing that hallucinates, AI is the thing with all the errors, and this was when that was much more of a problem. There is a question of why. Ads on a Google search are dependent on Google doing badly. If it was giving you the best answer, there’d be no reason ever to buy an ad above it. You’re like, “That thing is not quite aligned with me.”
ChatGPT, maybe it gives you the best answer, maybe it doesn't, but you're paying it, or hopefully you all are paying it, and it's at least trying to give you the best answer. That has led to people having a deep and pretty trusting relationship with ChatGPT. You ask ChatGPT for the best hotel, not Google or something else. If ChatGPT were accepting payment to put a worse hotel above a better hotel, that's probably catastrophic for your relationship with ChatGPT.
On the other hand, if ChatGPT shows you it’s guessed the best hotel, whatever that is, and then if you book it with one click, takes the same cut that it would take from any other hotel, and there’s nothing that influenced it, but there’s some sort of transaction fee, I think that’s probably okay. With our recent commerce thing, that’s the spirit of what we’re trying to do. We’ll do that for travel at some point.
COWEN: I'm not worried about the payola issue, but let me tell you my worry, and that is there may be a tight cap on the commission you can charge. Say we're now in a world where there are agents, and someone finds the best hotel through GPT-7 or whatever, and then they just talk to their computer or their pendant and go to some stupider service. But the stupider service is an agent that books very cheaply, so they only really have to pay OpenAI a commission equal to what the stupidest service will charge.
ALTMAN: One thing I believe in general, related to this, is that margins are going to go dramatically down on most goods and services, including things like hotel bookings. I'm happy about that. I think a lot of those margins are taxes that just suck for the economy, and getting those down should be great all around. I think most companies like OpenAI will make more money at a lower margin.
COWEN: Do you worry about the discrepancy between the fixed upfront cost of making yours the smartest model and the very cheap cost facing a competing agent, which only has to book the room for someone? How do you use the commissions to pay for making the model smarter, in essence?
ALTMAN: I think the way to monetize the world’s smartest model is certainly not hotel booking.
COWEN: But you want to do it, nonetheless.
ALTMAN: No. I want to discover new science and figure out a way to monetize that that you can only do with the smartest model. There is a question of, many people have asked, “Should OpenAI do ChatGPT at all? Why don’t you just go build AGI? Why don’t you go discover a cure for every disease, nuclear fusion, cheap rockets, the whole thing, and just license that technology?”
It is not an unfair question because I believe that is the stuff that we will do that will be most important and make the most money eventually, but my story, my most likely story about how this works, how the world gets dramatically better, is we put a really great superintelligence in the hands of everybody. We make it super easy to use. It’s nicely integrated. We make you beautiful devices.
We connect it to all your services. It gets to know you over your life. It does all the stuff for you. We invest in infrastructure and chips, and energy, and the whole thing to make it super abundant and super cheap. Then you all figure out how the world gets way better. Maybe some people will only ever book hotels and not do anything else, but a lot of people will figure out they can do more and more stuff and create new companies and ideas and art and whatever.
Maybe ChatGPT and hotel booking and whatever else is not the best way we can make money. In fact, I’m certain it’s not. I do think it’s a very important thing to do for the world. I’m happy for OpenAI to do some things that are not the economic-maxing thing.
COWEN: Now, you have a deal in the works with Walmart that people can use GPT, ask it questions, “What should I buy at Walmart?” and then they can buy it at Walmart, and you and Walmart have some arrangement. Do you think Amazon will fold and join that, or are they going to fight back and try to do their own thing?
ALTMAN: I have no idea. If I were them, I would fight back.
COWEN: You would fight back?
ALTMAN: I think so. Yes.
COWEN: Ads. How important a revenue source will ads be for OpenAI?
ALTMAN: Again, there’s a kind of ad that I think would be really bad, like the one we talked about. There are kinds of ads that I think would be very good or pretty good to do. I expect it’s something we’ll try at some point. I do not think it is our biggest revenue opportunity.
COWEN: What will the ad look like on the page?
ALTMAN: I have no idea. You asked a question about productivity earlier. I’m really good about not doing the things I don’t want to do.
COWEN: That’s something you don’t want to do?
ALTMAN: We have the world expert thinking about our product strategy. I used to do that. I used to spend a lot of time thinking about product. Now she’s much better at it than me. I have other things to think about. I’m sure she’ll figure it out.
COWEN: Whether or not you agree with it, what is the best this-is-not-a-bubble argument? Is it just the insatiable demand for compute?
ALTMAN: There are a lot of arguments I'm tempted to give, but I think the intellectually most interesting one is that we have no idea how far past human-level intelligence can go and what you can do with it as it does. There are all the arguments that everyone has made. The one I would like to see people talk about much more is "How are you even supposed to think about vastly superhuman intelligence and the economic impacts of that?"
COWEN: Now, OpenAI is in talks with Saudi Arabia, with UAE. Let’s take the most optimistic scenario for how all that goes. What is it that top OpenAI management needs to know or understand about those countries, and how is it you learn it?
ALTMAN: Well, it would depend on what we were doing with them. Putting data centers in a country, or taking investment from a country, or deploying commercial services would be very different from a set of other collaborations we could imagine. But generally speaking, to put data centers in a country, what we need to understand is who's going to run it. We don't operate our own data centers; Microsoft or Oracle or somebody else does.
What workload are we going to put there? What model weights are we going to put there? What are the security guarantees going to look like? We do want to build data centers around the world, with lots of countries, but for this question, which is the main thing we deal with other countries for, those are the kinds of questions. If we were developing a custom model for some country, which we have no current plans to do, we'd have a whole bunch more questions.
COWEN: They have different legal codes, different expectations from a deal. I don't mean that in a bad way. It's just quite different, right? Do you do the Jared Kushner thing: "Here are the 25 books I read"? Do you sit down and ask GPT-6 how to understand this culture, or do you bring in three experts?
ALTMAN: We bring in experts.
COWEN: You bring in experts.
ALTMAN: We talk to the US government a lot. We bring in experts. Again, if we’re building a data center that a very trusted partner is going to operate, we know what the workload is, and it’s being built like a US embassy or US military base, we have a very different set of questions than if we were doing other things, which we have not yet decided to do, and we’d bring in more experts for.
COWEN: Those are quite intangible forms of knowledge, often. How good do you think GPT-6 is at teaching you those things, or you still need the human experts to come in? Because you could just ask your own model, right?
ALTMAN: I don’t think GPT-6 will have those intangibles. It might surprise us, but that’d be very unexpected if I was like, “Oh, don’t need to talk to experts anymore.”
COWEN: Do you have an evaluation for that in the works?
ALTMAN: Actually, for something very close to that, we do. I don’t want to pre-announce it. That class of stuff, yes, we do have an evaluation.
COWEN: You do?
ALTMAN: Yes.
COWEN: Yes. How good will GPT-6 be at poetry?
ALTMAN: How good do you think GPT-5 is at poetry?
COWEN: Not that good. It’s not what I want it for, so that’s not a complaint. My guess is, in a year, you’ll have some model that can write a poem as good as the median Pablo Neruda poem, but not the best.
ALTMAN: I was going to say, I don’t want to say GPT, whether it’s 6 or 7, but I think we will get to something where you will say, “This is not a long way to the very best, but this is like a real poet’s okay poem.”
COWEN: In my view, there’s a big gap between a Neruda poem that’s a 7 on a scale of 1 to 10 and one that’s a 10. I’m not sure you’ll ever reach the 10. I think you’ll reach the 8.8 within a few years.
ALTMAN: I think we will reach the 10, and you won’t care.
COWEN: Who won’t care?
ALTMAN: You won’t care.
COWEN: I’ll care. I promise.
ALTMAN: You’ll care in terms of the technological accomplishment, but in terms of the great pieces of art and emotion and whatever else produced by humanity, you care a lot about the person or that a person produced it. It’s definitely something for an AI to write a 10 on its technical merits.
My classic example of this is the greatest chess players. They don't really care that AI is hugely better than them at chess. It doesn't demotivate them to play. They really care about beating the other human, and they really get obsessed with that dude sitting across from them. The fact that the AI is better, they don't care. Watching two AIs play each other is not that fun for that long.
COWEN: Let me tell you my worry about reaching the 10. Evaluations rely a lot on these rubrics. The rubrics will become good enough to produce very good poems, but maybe there’s something about the 10 poem that stands outside the rubric. If you’re just training on rubrics, rubrics, rubrics, it might in a way be counterproductive for reaching the 10.
ALTMAN: Evals can rely on a lot of things, including when you call upon the 10 and when you don’t.
COWEN: Sure.
ALTMAN: You can read a bunch in the process and provide some real-time signal.
COWEN: Say we have no human poets today writing 10s, and we’re asking those same people to judge and grade the GPTs. I’m worried. Again, I think it will be fine. To me, we’re talking about a 9, not a 10. You don’t have William Wordsworth working for OpenAI.
ALTMAN: This gets to a very interesting thing, which is, let’s say you can’t write a 10, but you can decide when something is a 10. That might be all that we need.
COWEN: Maybe humanity only decides collectively what’s a 10, and there’s something a little mysterious and history-laden about that process.
ALTMAN: Okay, but still, we can do it. Now, maybe our decision is not very good because it is history-related and it does drift over time. Some things we all agree are great, the next generation decides they're not, whatever. But whatever process humanity has to determine which poem is a 10, you could imagine that providing some sort of signal to an AI. Then again, if you know it's an AI, maybe you don't care. We see this phenomenon with AI art, but yes.
COWEN: To the extent you end up building your own chips, what’s the hardest part of that?
ALTMAN: Man, that’s a hard thing all around.
COWEN: That’s the hardest.
ALTMAN: There’s no easy part of that.
COWEN: Yes, no easy part of that. Well, Jonathan Ross said it’s just keeping up with what is new.
ALTMAN: No, I'll tell you why. People talk a lot about the recursive self-improvement loop for AI research, where AI can help researchers, maybe today, write code faster, and eventually do automated research. This is well understood, very much discussed. Relatively little discussed are the hardware implications of this: robots that can build other robots, data centers that can build other data centers, chips that can design their own next generation. There are many hard parts, but maybe a lot of them can get much easier. Maybe the problem of chip design will turn out to be a very good problem for previous generations of chips.
COWEN: The stupidest question possible: Why don’t we just make more GPUs?
ALTMAN: Because we need to make more electrons.
COWEN: What’s stopping that? What’s the ultimate binding constraint?
ALTMAN: We’re working on it really hard.
COWEN: If you could have more of one thing to have more compute, what would the one thing be?
ALTMAN: Electrons.
COWEN: Electrons. Just energy. What’s the most likely short-term solution for that?
ALTMAN: Short-term, natural gas.
COWEN: Easing, not full solution, but easing of the constraint.
ALTMAN: Short-term, natural gas.
COWEN: In the American South?
ALTMAN: Or wherever. Long-term, it will be dominated, I believe, by fusion and by solar. I don’t know what ratio, but I would say those are the two winners.
COWEN: You’re still bullish on fusion?
ALTMAN: Very much, and solar.
COWEN: Do you worry that, as long as it’s called nuclear power, even if it works —
ALTMAN: Did I say the word nuclear?
[laughter]
COWEN: No, you didn't, but other people will. That people just won't want it, getting back to the irrationality point about insurance?
ALTMAN: You're the economist, not me, but I think there is some price point at a given level of safety where the demand for this will be overwhelming. If this is the same price as natural gas, maybe it's unfortunately hard. If it's one-tenth the price, I think we could agree it would happen very fast. I don't know where in between the cut point is.
COWEN: Do you ever worry there’s some scenario where, ultimately, superintelligence doesn’t need that much compute, and in some funny way, by investing in compute, you’re betting against progress over a 30-year time horizon?
ALTMAN: The related thing is, in the same way that people always want more energy if it's cheaper, I think people will always want more compute if it's cheaper. Even if you can make incredibly smart models with much less compute, which I'm sure you can, people will want to consume intelligence in all sorts of new ways and do more stuff with more abundant intelligence. I'll take that bet every day. The related thing I worry about is that there is a huge phase shift in how we do compute, and we're all chasing a dead-end paradigm. That would be bad.
COWEN: What would that look like?
ALTMAN: I don’t know. We all switch to optical compute, like full-on optical compute or something.
COWEN: And just have to spend a lot of money all over again?
ALTMAN: Yes. Well, not on all of it. The energy is the energy, but yes, on everything else.
COWEN: Now, I love Pulse. Why don’t I hear more about Pulse, or do you think there is a lot of chatter out there?
ALTMAN: People love Pulse, but it is only available to our Pro users right now, which is not that many. Also, we’re not giving much per day to users. We will change both of those things. I suspect when we roll it out to Plus, you will hear about it a lot more, but people do love it. It gets great reviews.
COWEN: What do you use Pulse for?
ALTMAN: There are only two things in my life right now. There’s my family and work. Clearly, this is what I talk to ChatGPT about because I get a lot of stuff about that. I get the odd “New hypercar came out” or “Here’s a great hiking trail” or whatever, but it’s mostly those two things. It’s great for both of those.
COWEN: I’d just like to do a brief interlude on your broader view of the world and just see how I should think about how you think. People in California, they have a lot of views on their own health, some of which to me sound nutty. What do you think is your nuttiest view about your own health? That you’re going to live forever, that the seed oils are bad, or what is it, or do you not have any?
ALTMAN: When I was less busy, I was more disciplined on health-related stuff. I didn’t have crazy views, but I was like, I kind of ate healthy. I didn’t drink that much. I worked out a lot. I tried a few things here and there. I once ended up in a hospital for trying semaglutide before it was cool, that kind of stuff. I now do basically nothing.
COWEN: You just live family life.
ALTMAN: I eat junk food. I don’t exercise enough. It’s like a pretty bad situation. I’m feeling bullied into taking this more seriously again.
COWEN: Yes, but why eat junk food? It doesn’t taste good.
ALTMAN: It does taste good.
COWEN: Compared to good sushi? You could afford good sushi. Someone will bring it to you. A robot will bring it.
ALTMAN: Sometimes you just really want that chocolate chip cookie at 11:30 at night, or at least I do.
COWEN: Yes. Do you think there’s any kind of alien life on the moons of Saturn? Because I do. That’s one of my nutty views.
ALTMAN: I have no opinion on the matter.
COWEN: No opinion on the matter.
ALTMAN: I don’t know.
COWEN: Yes, that’s a way of passing the test. What do you think about UAPs? Do you think there’s a thing?
ALTMAN: I think something’s going on there. I don’t know.
COWEN: You think something is going on there.
ALTMAN: Again, I have an opinion that there is something that I would like an explanation for. I kind of doubt it’s little green men. I extremely doubt it’s little green men, but I think someone’s got something.
COWEN: How many conspiracy theories do you believe in? Because I believe in close to zero, at least in the United States. They may be true for Pakistani military coups, but I think mostly they’re just false.
ALTMAN: Like true conspiracy theory, not just an unpopular belief?
COWEN: Correct.
ALTMAN: I have one of those X-Files shirts, the "I want to believe" one. I still have it from when I was in high school. I want to believe in conspiracies. I'm predisposed to believe in conspiracy theories, and I believe in either zero or very few.
COWEN: Yes, I’m the opposite of that. I don’t want to believe, and I believe in very few. Like maybe the White Sox fixed the World Series way back when, right?
ALTMAN: Yes, stuff like that, I don't quite count. A true massive global government cover-up requires a level of competence that I just rarely ascribe to people.
COWEN: Now, some number of years ago, this was before even GPT-4, I asked you if you were directing a fund of money to revitalize St. Louis, which is where you grew up, how would you invest the money? Now it’s a quite different world from when I asked you last time. If I ask you again to revitalize St. Louis, how would you spend the money? Say it’s a billion dollars, which is not actually transformational, but it’s enough that it’s some real money.
ALTMAN: A billion dollars, and I’m willing to go spend personal time on it?
COWEN: You have free time. The universe grants you free time. You don’t take time away from anything else you’re doing. You’re in charge.
ALTMAN: This is not a deeply incisive answer, because I think this is not a generally replicable thing; it's unique to me, what I could do. I think I would try to go start a Y Combinator-like thing in St. Louis and get a ton of startup founders focused on AI to move there and start a bunch of companies.
COWEN: That’s a pretty similar answer to last time.
ALTMAN: I didn’t remember what I said last time, so that’s a good sign.
COWEN: You said the same thing, but you didn’t mention AI. AI to me seems quite clustered where we are in the Bay Area. Is trying to get AI into St. Louis the right way to do that? Isn’t that, in a way, working at cross-purposes?
ALTMAN: This is why I said it’d be like a “unique to me” thing. I think I could do it. Maybe that’s hopelessly naive.
COWEN: Yes. Should it be legal to just release an AI agent into the wild, unowned, untraceable? Do we need some other AI agent to go out there and tackle it, or is there a minimum capitalization? How do you think about that problem?
ALTMAN: I think it's a question of thresholds. I don't think you'd advocate that most systems should have any oversight or regulation or legal questions or whatever, but if we have an agent that is capable, with serious probability, of massively self-replicating over the internet and sweeping all the money out of bank accounts or whatever, you would then say, "Okay, maybe that one needs some oversight." I think it's a question of where you draw that threshold.
COWEN: Say it’s hiring the cloud computing from a semi-rogue nation, so you can’t just turn it off. What actually should we do, or will we be able to do? Just try to ring-fence it somehow, identify it, surveil it, put sanctions on the country that’s sponsoring it?
ALTMAN: What do we do for people that do that today?
COWEN: Well, there are a lot of cyberattacks that come from North Korea, and I think we can’t do that much about them, right?
ALTMAN: I don’t know what the right answer is yet, but my naive take is we should try to solve this problem urgently for people using rogue internet resources, and AI will just be a worse version of that problem.
COWEN: It will have better defense, also.
ALTMAN: For sure.
COWEN: Yes. Now, if I think about social media and AI, here’s one thing I’ve noticed in my own work. I’m so, so keen to read the answers to my own queries to GPT-5, but when people send me the answers to their queries, I’m bored. I don’t blame them. I know it’s super useful for them, but that makes me a little skeptical about blending social media and AI. Am I missing something, or would you try to talk me out of that somehow?
ALTMAN: No, I’ve had the same observation. I don’t want to read your GPT-5 queries, either.
COWEN: Yes, but they’re great for me.
ALTMAN: I’m sure, and I’m sure you don’t want to read mine, but they’re great for me. ChatGPT, I think, is very much like a single-player experience. I don’t think that means there’s not some interesting, new kind of social product to build. In fact, I’m pretty sure there is, but I don’t think it’s the “Share your ChatGPT queries.”
COWEN: Videos or any sense of what that would look like?
ALTMAN: That one's doing well. Clearly, people love making their own, but they also like watching other people's AI-generated videos. No, I think none of this stuff is the really interesting kind of thing you can imagine when you and I and everybody else have really great personal AI agents that can do stuff on our behalf. There are probably entirely new social dynamics to think about.
COWEN: Just the physical form of ChatGPT on my screen or on my smartphone, is that more or less going to stay the same, but the thing will be better, or 13 years from now, it will physically just be an entirely different beast? Because I can talk to it now. Now it does video. Is it just a better version, or somehow it morphs?
ALTMAN: We are going to try to make you a new kind of computer with a completely new kind of interface that is meant for AI, which I think wants something completely different than the computing paradigm we’ve been using for the last 50 years that we’re currently stuck in. AI is a crazy change to the possibility space. A lot of the basic assumptions of how you use a computer and the fact that you should even be having an operating system or opening a window, or sending a query at all, are now called into question.
I realize that the track record of people saying they're going to invent a new kind of computer is very bad. But if there's one person you should bet on to do it, I think Jony Ive is credible, maybe the best bet you could take. We'll see if it works. I'm very excited to try.
COWEN: Haven't you already been surprised by how robust it is that people love typing text into boxes? This sort of shocks me in the bigger picture. People are still texting all the time. It's one of the most robust forms of internet anything. Maybe that will just stick forever, and it's a sign of our own limitations. How do we get past that?
ALTMAN: Texting, command lines, search queries, that’s my favorite interface.
COWEN: You like it, I like it. Maybe we’re just going to keep it.
ALTMAN: Well, a lot of people use it. People love to text. People like ChatGPT. I remember when we were thinking about the interface for ChatGPT, I was very set that this was something people would be familiar with and want to use. I think I grew up as a child of the internet with a lot of conviction that that was the right kind of internet. Texting was my life as a teenager.
COWEN: If you have some kind of ideal arrangement, partnership with an institution of higher education, say, within two to three years, what does that look like? You get to write the whole thing.
ALTMAN: I suspect that the whole model should change, but I don’t know what to. I think the ideal partnership would look like we try 20 different experiments. We see what leads to the best results. I’ve been watching these AI schools pop up with great interest. It seems like a lot of them with very different approaches are all showing positive results. I think the first few years of the ideal partnership would look like we run 20 wildly different experiments.
COWEN: Sometimes I have the fear that these institutions don't have enough internal reputational strength or credibility to make any major change, forget about AI, and that doing a partnership with an institution like that is maybe intrinsically frustrating. For the next 10 years, the actual model is privatized AI use on the side by faculty, by students, by labs, and in a sense there is no partnership other than just marketing your product to these people. Do you ever think that might be true?
ALTMAN: Yes, and it wouldn’t super upset me if that’s what happens.
COWEN: Yes. What do you think will happen to the returns to a college degree? Not Harvard, not Stanford, but like a quite good state school, five, 10 years out.
ALTMAN: What’s the historical rate of decline of the value of that?
COWEN: Recently, it’s gone down, though for a long time it was going up quite a bit.
ALTMAN: Yes, I mean the last decade.
COWEN: Oh, gone down. I don’t know how much.
ALTMAN: I would guess that it goes down at a slightly higher rate than the last decade, but it does not collapse to zero as fast as it should.
COWEN: Then the returns to doing what, other than learning AI, go up? Being on the college football team or what?
ALTMAN: I don’t think the massive returns will accrue to doing AI for a small set of people, but I think the returns to using AI super well will be surprisingly widely distributed. I think the most important thing AI will do is discover new science for all of us, and a lot of people will benefit from that, and people will start companies or get jobs doing that. I’m not a believer that that is the only thing that eventually makes money.
I think people will just use AI for all sorts of new kinds of jobs or to do existing jobs better. Maybe the starkest example of this in 2025 is the day-to-day of how the average programmer in Silicon Valley did their workflow at the beginning of this year versus the end of this year. Extremely different. Extremely different. You don’t really have to know how to program AI to do that, but you can get more done, and you probably have much more value, and the world is going to get much more software. I think we’ll see things like that for a surprising number of industries.
COWEN: Say five years out, there’s a so-called normie person. They’re not a specialist. They want to learn how to use AI much better. What will they actually do that will give them a high return to acquiring that skill?
ALTMAN: To learn how to use AI specifically?
COWEN: Yes. Not program, not the inner guts, just actually in their job.
ALTMAN: I'm smiling because I remember when I was a kid and Google came out, I had a job teaching older people how to use Google, and I just couldn't wrap my head around it. It was like, "You type the thing in, and it does this." A thing that I'm hopeful about for AI, and I think one of the reasons ChatGPT has grown so fast, is that it is so easy to learn how to use it and get real value out of it.
COWEN: We don’t need startups to do that?
ALTMAN: To teach people how to use AI?
COWEN: To teach people, yes. Is there such a startup, or what's the institution? That my school will teach me, that's hard to believe.
ALTMAN: Ten percent of the world will use ChatGPT this week, and it didn't exist three years ago. I suspect a year from now, maybe 30 percent of people will use ChatGPT that week. People, once they start using it, do find more and more sophisticated things to use it for. This is not a top-of-mind problem for me. I believe in human creativity and adoption of new things over some number of years.
COWEN: You might just want to support or invest in the startups that will do this because if you’re bullish on AI, presumably you’re bullish on those startups, and it will help your business in turn. It seems odd not to have a theory of how we’re all going to learn to use AI better. You can go to dog trainer school, and they teach you how to train a dog.
ALTMAN: Okay. Maybe I have a blind spot here, and I promise I’ll go think about this more. If you ask ChatGPT, “Teach me how to use you” —
COWEN: Yes, maybe that’s it.
ALTMAN: — it’s pretty good.
COWEN: Yes, so maybe you’re the school.
ALTMAN: Maybe.
COWEN: Yes. Let's say your kids are old enough that they're grown and can go out on their own. In that future world, which is not so far off, do you think you'll still be reading books, or will you just be interacting with your AI?
ALTMAN: Books have survived a lot of other technological change. I think there is something deep about the format of a book that has persisted. It's very Lindy, whatever the current word for that is. I suspect that there will be a new way to interact with a cluster of ideas that is better than a book for most things. I don't think books will be gone, but I would bet they become a smaller percentage of how people learn about or interact with a new idea.
COWEN: What’s the cultural habit you have that you think will change the most? Like, “Oh, I won’t watch movies anymore, I’ll create my own,” or whatever, for you? AI will obliterate what you did when you were 23.
ALTMAN: This is boring, but the way I work, where I'm doing emails and calls and meetings and writing documents and dealing with Slack, that I expect to change hugely. It has become a real cultural habit, a rhythm of my workday at this point. Spending time with my family, spending time in nature, eating food, my interactions with my friends, that's stuff I expect to change almost not at all, at least not for a very long time.
COWEN: Do you think San Francisco will remain the center for AI? Putting aside China issues, I just mean for the so-called West.
ALTMAN: Yes, I think that’s the default.
COWEN: It’s the default. You think the city is just absolutely making a comeback? It looks much nicer to me. It seems nicer. Am I deluded?
ALTMAN: I love the city. I love the whole Bay Area. I particularly love the city, but I love the Bay Area. I don’t think I’m a fair person to ask because I so want it to be making a comeback and to remain the place. I think so, I hope so, but very biased.
COWEN: AI will improve many things very quickly, but what’s the time horizon for it making rents or home prices cheaper? That seems like a tough one. Not the fault of AI, but land is land, and there’s a lot of legal restrictions.
ALTMAN: Yes, I was going to push back on the "land is land." There are a lot of other problems there that I don't think AI can solve anytime soon. There could be these very strange second-order effects where home prices get much cheaper, but sadly, I don't think AI has a direct attack on housing anytime soon.
COWEN: Food prices?
ALTMAN: I would bet down.
COWEN: In the short run, energy might be a bit more expensive. How long does it take for food prices to go down?
ALTMAN: If they’re not down in a decade, I’d be very disappointed.
COWEN: If we think of healthcare, my sense is we’re going to spend a lot more on healthcare. We’ll get a lot for it because there’ll be new inventions, but a lot of the world will feel more expensive because rent won’t be cheaper. Food, I’m not sure about. Healthcare, you’ll live to age 98, but you’ll have to spend a lot more. You’ll just be alive more while you’re spending, right? Are people just going to think of AI as this very expensive thing, or will it be thought of as a very cheap thing that makes life more affordable?
ALTMAN: I would bet we spend less on healthcare. I bet there are a lot of diseases that we can just cure or come up with a very cheap treatment for, where right now we have nothing but expensive chronic treatments that don't even work that well. I would bet healthcare gets cheaper.
COWEN: Through pharmaceuticals, devices?
ALTMAN: Through pharmaceuticals and devices, and even delivery of actual healthcare services. Look, housing is the one to me that just looks super hard. There will be other categories of things that we want to get more expensive, of course: status goods or whatever. But I would take the bet that healthcare goes down a bit.
COWEN: With all the blizzard of new ideas coming, patent law, copyright law, those are based on earlier technologies and earlier models of how the world would work. Do we need to reexamine or change those radically for an AI-drenched world, or we can just keep what we have and modify it a bit?
ALTMAN: I really have no idea.
COWEN: I’m a big free speech advocate, but I can imagine the world saying, “Well, with all this AI-driven content, we need to reexamine the First Amendment.” Do you have a view on that?
ALTMAN: Without thinking much, I put out a tweet recently about how we’re going to be allowing more freedom of expression in ChatGPT.
COWEN: This is the famous erotica tweet. It’s funny what people get upset about.
ALTMAN: It is funny what animates people.
COWEN: Because all you’re saying is you’re not going to stop people, right?
ALTMAN: Well, we used to, not long ago — well, that's not totally fair. We're going to allow more than we did in the past. A very important principle to me is that we treat our adult users like adults, that people have a very high degree of privacy with their AI, which we need legal change for, and also that people have very broad bounds on how they're able to use it. To me, this should be one of the easiest things to agree on for most people in the tech industry or even most people in the US government.
I kind of dashed this tweet off and closed my computer, and it didn't even hit my mind that it was going to be, really, a firestorm. Over the summer, we made a decision, which I also think was a fair one, that because there were new problems, and particularly because we wanted to protect teenage users, we were going to heavily restrict ChatGPT, which is always a very unpopular thing to do. Along with rolling out age-gating and some of these mental health mitigations, we were going to bring back, and in some cases increase, freedom of use for adults.
I was like, “Yes, I’ll tell people that’s coming because the first model update is shipping soon, but this should be a nonissue.” Boy, did I get that one wrong. I think maybe it’s just, people don’t believe in freedom of expression as much as they say they do.
COWEN: That was my opinion, yes.
ALTMAN: Everyone thinks, “Okay, my own freedom of expression, I can handle it. I need it. My ideas are all right, but yours — ”
COWEN: For greater privacy rights, is it subpoena power that needs to be changed, or something else in addition?
ALTMAN: Subpoena power. Well, let me just say, I believe that when you talk to your AI doctor or AI lawyer, we should apply as much protection as when you talk to your human doctor or your human lawyer.
COWEN: Right now, we don’t have that.
ALTMAN: Correct.
COWEN: Do you think there’s enough trust in America today for people to trust the AI companies the way we sort of trust doctors, lawyers, and therapists?
ALTMAN: By revealed preference, yes.
COWEN: Yes, by how many people talk to AI. LLM psychosis, everyone on Twitter today is saying it's a thing. How much of a thing is it?
ALTMAN: A very tiny thing, but not a zero thing, which is why we pissed off the whole user base, or most of the user base, by putting a bunch of restrictions in place. The “treat adult users like adults” includes an asterisk, which is “treat adults of sound mind like adults.” Society decides that we treat adults that are having a psychiatric crisis differently than other adults.
This is one of these things that you learn as you go. When we saw people do things like put ChatGPT into role-playing mode, or pretend it's writing a book, and have it encourage someone in delusional thoughts: 99 point some big number percent of adults are totally fine with that, but for some tiny percentage of people, just as if they talked to another person who encouraged their delusions, it's bad.
We made a bunch of changes, which are in conflict with the freedom of expression policy. Now that we have those mental health mitigations in place, we'll again allow some of that stuff in creative mode, role-playing mode, writing mode, whatever, of ChatGPT. The thing I worry about is not the few basis points of people who are close to losing their grip on reality, where we could trigger a psychotic break. We can get that right.
The thing I worry about more — it's funny, the things that stick in your mind. Someone said to me once, "Never ever let yourself believe that propaganda doesn't work on you. They just haven't found the right thing for you yet." Again, I have no doubt that we can address the clear cases of people near a psychotic break. For all of the talk about AI safety, I would divide most AI thinkers into these two camps: "a bad guy uses AI to cause a lot of harm," or "the AI itself is misaligned, wakes up, whatever, and intentionally takes over the world."
There's this other category, a third category, that gets very little talk and that I think is much scarier and more interesting, which is that the AI models accidentally take over the world. It's not that they're going to induce psychosis in you, but if you have the whole world talking to this one model, then without any intentionality, just as it learns from the world in this continually coevolving process, it subtly convinces you of something. No intention; it just does. It learned that somehow. That's not as theatrical as chatbot psychosis, obviously, but I do think about that a lot.
COWEN: Maybe I’m not good enough, but as a professor, I find people pretty hard to persuade, actually. I worry about this less than many of my AI-related friends do.
ALTMAN: I hope you’re right.
COWEN: Yes. Last question, on matters where you can speak publicly. At the margin, if you could call in an expert to help you resolve a question in your mind of substance, what would that question be?
ALTMAN: I have an answer to this ready to go, but only because I got asked to — well, maybe I'll tell the story right after. I take this spiritually, not literally. There will come a moment where the superintelligence is built. It is safety-tested. It is ready to go. We'll still be able to supervise it, but it's going to do just vastly incredible things. It's going to be self-improving. It's going to launch the probes to the stars, whatever. You get the opportunity to type in the prompt before you say okay. The question is, what should you type in?
COWEN: Do you have a tentative answer now?
ALTMAN: No, I don't. The reason I had that ready to go is that someone was going to see the Dalai Lama and said, "I'll ask him any question about AI you want." I was like, "What a great opportunity." I thought really hard about it, and that was my question.
COWEN: Sam Altman, thank you very much.
ALTMAN: Thank you.
Photo Credit: Jeremi Rebecca