Jack Clark on AI's Uneven Impact (Ep. 242)

And the inevitable future of AI-enhanced parenting

Few understand both the promise and limitations of artificial general intelligence better than Jack Clark, co-founder of Anthropic. With a background in journalism and the humanities that sets him apart in Silicon Valley, Clark offers a refreshingly sober assessment of AI’s economic impact—predicting growth of 3-5% rather than the 20-30% touted by techno-optimists—based on his firsthand experience of repeatedly underestimating AI progress while still recognizing the physical world’s resistance to digital transformation.

In this conversation, Jack and Tyler explore which parts of the economy AGI will affect last, where AI will encounter the strongest legal obstacles, the prospect of AI teddy bears, what AI means for the economics of journalism, how competitive the LLM sector will become, why he’s relatively bearish on AI-fueled economic growth, how AI will change American cities, what we’ll do with abundant compute, how the law should handle autonomous AI agents, whether we’re entering the age of manager nerds, AI consciousness, when we’ll be able to speak directly to dolphins, AI and national sovereignty, how the UK and Singapore might position themselves as AI hubs, what Clark hopes to learn next, and much more.

Recorded March 28th, 2025

Read the full transcript

Thanks to an anonymous listener for sponsoring this transcript.

TYLER COWEN: Hello everyone, and welcome back to Conversations with Tyler. Today, I am at Anthropic with Jack Clark. As you may know, there is now an Anthropic Economic Index which is measuring the effect of advanced AI on the US economy. There are two associated reports as of March 2025, and soon, more to come. Jack, of course, is the co-founder of Anthropic. Before that, he was the policy director at OpenAI and a reporter at Bloomberg, and his original background is in the humanities. He comes from Brighton, England. Jack, welcome.

JACK CLARK: Well, thanks for having me, Tyler. Pleasure to be here.

COWEN: Where is it in our economy that AGI will affect last in a significant manner?

CLARK: Ooh, I’d hazard a guess that it’s going to be things that are the trades and the most artisanal parts of them. You might think of trades as having things like electricians or plumbers, or also things like gardening. I think within those, you get certain high-status, high-skill parts, where people want to use a certain tradesman, not just because of their skill but because of their notoriety and sometimes an aesthetic quality. I think that my take might be gardening, actually.

COWEN: They won’t use AGI to help design the garden? Or just the human front will never disappear?

CLARK: I think the human front will never disappear. People will purchase certain things because of the taste of the person, even if that taste looks like certain types of modern art production, where the artist actually backs onto thousands of people that work for them, and they’re more orchestrating it.

COWEN: How about in the more desk-bound part of the service sector? Where will it come last?

CLARK: Come last? Ooh, good question. I think that on this, there are certain types of desk-bound work that just require talking to other people and getting to alignment or agreement. If you count certain types of sales —

COWEN: But it’s great at doing that already, right? It’s a wonderful therapist.

CLARK: It is, but we don’t send Claude to sell Claude, yet. We send people to sell Claude, even though Claude could probably generate the text to do the sales motion. People want to do commerce with other people, so I think that there’ll be certain relationships which get mediated by people, and people will have a strong preference, probably, for deals that they make on behalf of their larger pools of capital, where the deals are done by human proxies for large automated organizations or pools of capital.

COWEN: Where will AGI encounter the strongest legal obstacles?

CLARK: I think a few years ago, it was encountering quite strong obstacles in the law itself because lawyers tended to like being able to charge very high prices and had a dislike of things that could bid them down.

My less flippant answer is, probably big chunks of healthcare because it’s bound up in certain things around how we handle personal data, all of the standards around that. All of those standards are probably going to need to be changed in some form to make them amenable to being used by AI. We’ve had a really hard time updating or changing data standards in general.

COWEN: Once you can put the AI on your own hard drive, which will be pretty soon, won’t that all change?

CLARK: It will change in the form of gray market expertise, but not official expertise. I had a baby recently. Whenever my baby bonks their head, while I’m dialing the advice nurse, I talk to Claude just to reassure myself that the baby isn’t in trouble.

I don’t think we actually fully permit healthcare uses via our own terms of service. We don’t recommend it because we’re worried about all of the liability issues this contains, but I know through my revealed preference that I’m always going to want to use that, but I can’t take that Claude assessment and give it to Kaiser Permanente. I actually have to talk through a human to get everything else to happen on the back end to work out if they need to prescribe my child something.

COWEN: So, the number one job will be surreptitiously transmitting the generation of information that comes from AIs, in essence?

CLARK: Some of it may be that. Some of it is about laundering the information that comes from AIs into human systems that are not predisposed to that information going in directly.

On AI’s adoption curve in government

COWEN: I was thinking you might say the United States government, or some parts of it, would be where strong AI would come last. I think that would be my forecast. They still use software from the ’60s or maybe even the ’50s sometimes.

CLARK: They do, but I actually suspect that that could be a place where we see really rapid changes, actually, and for a couple of reasons. One, we know that AI has relevance to national security. It develops certain types of capabilities.

COWEN: Oh, yes, that will happen quickly —

CLARK: Yes, that’ll happen quickly.

COWEN: — but the rest of government.

CLARK: The rest of government.

COWEN: The Department of Education, HHS, HUD.

CLARK: The non-scary, sharp parts of government. I would wager that it will become surprisingly easy to get it into really hard parts of government, and then there will be a question of political will. All around the world — and I’m sure you experience this — governments desperately want growth, and they desperately want efficiency. We see that here today in —

COWEN: Do they?

CLARK: They say that.

COWEN: Oh, I agree there.

[laughter]

CLARK: But I think if you look at, also, things like voter polling and other things, people want to see more changes out of government than they’re currently getting. I think, sometimes, constituent preferences do ultimately change what their elected officials do. I would just take the other side of this, that government may move slightly faster than you think. It may be very large, established companies that end up having some of the greatest resistance to this in certain areas.

COWEN: Let’s say it was decided that half of the staff of HUD or Department of Education could be replaced by strong AI or AGI. Do we first need to hire more people to make that happen? Who is it we can hire and at what wage that switches the system so you can lay off the remaining half? I don’t understand how that’s ever going to work.

CLARK: I think that it will only work in a scenario where the system has got so powerful that you can bring in the AI system itself to help you think through this. Then there will be a question of political will, which is where this all might break down.

COWEN: What do you think is the chance that we decide to protect, say, half of the jobs in existence today with laws analogous to those we find in law and medicine against strong AGI?

CLARK: I think there is a high chance for a political movement to arrive which tries to freeze a load of human jobs in bureaucratic amber as a way to deal with the political issues posed by this incredibly powerful technology and how fast the changes are going to come. I don’t think that we’ll do this in a reasoned way. I think it’ll be driven by the chaotic winds of political forces.

It feels like the sort of outcome that happens if we, as the companies building this stuff, and our customers don’t generate enough examples of what good transitions look like. The fewer bits of evidence you have there and the more evidence you have of larger economic changes, probably the higher chance there’ll be a desire to step in and protect workers in different domains, which comes from an impulse based around wanting to help people, but it might ultimately not be the most helpful thing over the course of decades to do that.

COWEN: But is it possible that’s actually, in a constrained sense, the very best outcome? People will still have a job. They’ll go somewhere in the morning. A lot of the work, especially the hard work, will be done by the AIs. Obviously, we’ll be richer, so we can afford this. All of a sudden, it won’t feel that bad to people. Life will look familiar. Isn’t that, in a sense, what we should be aiming for?

CLARK: I think we should be aiming for a —

COWEN: In the same sense, you might need, say, a generous welfare state to have free trade. The welfare state isn’t always the most efficient, but if people accept freer trade, it’s an okay bargain. Isn’t this, in a sense, the welfare state for the service workers? And they still get to go somewhere in the morning if they want to, but they don’t have to.

CLARK: I believe that all people have a desire for meaning and salience in what they do on a day-to-day basis. My worry with what you describe — it might not feel like it has sufficient meaning. I think that there is some giant class of activity that we want to continue happening in the world that people do and from which they draw meaning, but I don’t know that the best way to get there is to take some class of work and say this is work that we’re protecting and from which meaning will spring. Because I’m not confident that you’ll pick a load of jobs which naturally create their own meaning in that sense.

COWEN: Hasn’t this succeeded in academia pre-AI? Most academic jobs are not that meaningful. The research people do — it’s not read by anyone. Maybe they’re decent teachers, but people take great pride in their research. They put a lot of effort into it. It’s meaningless. It seems we solved that problem already. We’re just going to take the academic model, but instead of it being research, bring it to, say, half of our current economy just to keep it going.

CLARK: Isn’t there a kind of angst and nihilism even within high-achieving parts of academia for this reason? Seems like when you speak to people, even very smart people who sometimes are doing the things you describe, they know they could be doing different things, and they are trapped into some kind of status game.

COWEN: There’s some of that, but even the Nobel laureates — they’re rivalrous with each other. They can be very bitchy, very petty, but that’s just human nature, right? If the Nobel laureates aren’t happy, there’s no post-AI world that’s going to do much better than how we’re doing for the Nobel laureates today, right?

CLARK: Maybe my pushback is, I think that all of this could happen sufficiently quickly that we might have the opportunity to just play different higher-status games that are afforded to us via AI and the productive capacity it unlocks. There are definitely going to be entirely new jobs that involve marshaling and fielding AI systems for all kinds of work. I think there’ll be —

COWEN: But those are hard jobs, right?

CLARK: Yes, but I think that there are going to be analogs which look more like creative, fun exercises in getting AIs to build things, or make things, or almost carry out competitions and games where people can play them with one another. And I think there’ll be entirely new forms of entertainment that have some amount of meaning, and perhaps an economic engine wired into them that people can participate in.

On AI teddy bears

COWEN: I believe we’re not that far from the age of what I call the AI teddy bears. You know what I mean when I say that?

CLARK: Yes.

COWEN: What percentage of parents now will buy those teddy bears for their kids and allow it?

CLARK: I’ve had this thought since I have a person, that is my child, that’s almost two.

COWEN: Sure.

CLARK: I am annoyed I can’t buy the teddy bear yet. I think most parents —

COWEN: You’re an outlier [laughs].

CLARK: No. I don’t know. I don’t know.

COWEN: You are cofounder of Anthropic, right?

CLARK: I don’t think I’m an outlier. I think that once your lovable child starts to speak and display endless curiosity and a need to be satiated, you first think, “How can I get them hanging out with other human children as quickly as possible?” So, we’re on the preschool list, all of that stuff.

I’ve had this thought, “Oh, I wish you could talk to your bunny occasionally so that the bunny would provide you some entertainment while I’m putting the dishes away, or making you dinner, or something.” Often, you just need another person to be there to help you wrangle the child and keep them interested. I think lots of parents would do this.

COWEN: Say that the kid says to you, “Daddy, I prefer the bunny to my friends. Can I stay at home today?” Do you take the bunny away? That’s the tough part, right?

CLARK: I think that’s the part where you have them spend more time with their friends, but you keep the bunny in their life because the bunny is just going to get smarter and be more around them as they grow up. If you take it away, they’ll probably do something really strange with smart AI friends in the future.

No, I don’t think I’m an outlier here. I think most parents, if they could acquire a well-meaning friend that could provide occasional entertainment to their child when their child is being very trying, they would probably do it [laughs].

COWEN: I feel the word “occasional” is doing a lot of work in that sentence.

CLARK: [laughs]

COWEN: If you can just ration how much your kid has the bunny, parents are going to love it, I agree. The same as with screens. A lot of children — they keep on wanting it. It’s hard to tell the child you can’t have it now. In the old days, it was watching television. “Oh, Mom, can I watch Star Trek again? Can I watch TV seven hours a day?” Well, that’s not good, and that’s hard to ration.

CLARK: Yes, so there’s some question here of how we portion this out. We do this today with TV, where if you’re traveling with us, like on a plane with us, or if you’re sick, you get to watch TV — “you” being the baby — and otherwise, you don’t, because from various perspectives, it seems like it’s not the most helpful thing. You’ll probably need to find a way to gate this. It could be, “When mom and dad are doing chores to help you, you get the thing. When they’re not doing chores, the thing goes away.”

COWEN: Do you end up with too much surveillance over your kid? You’ll know everything if you want to, right? That might be a reason why you give the kid the bunny all the more.

CLARK: I think it comes down to frequency, which I mentioned, and also what you are and aren’t allowed to know. I find surveillance puzzling. We have one of these smart webcams that we got early on, so that you just see that they’re asleep. If they’re waking up, you work out if they’re okay or not. The main thing that that meant is that when the baby cries at night, sometimes we go in less than if we didn’t have the camera because if we didn’t have the camera, we’d have to go and check.

As a consequence, my baby tends to sleep through the night a lot more because they don’t get occasionally interrupted. Sometimes they wake and cry for a minute and just go back to sleep. I think sometimes this stuff allows you to actually interfere less in a person’s life if you know certain things about it.

COWEN: Say you have the hypothesis that so many seven-year-olds — they talk to themselves. They say weird things, and the AI bunny is going to report back to you, “Your kid says —”

CLARK: If the bunny says your kid is saying strange stuff —

COWEN: But they all say strange stuff. I might have been chattering on about the New York Mets at age seven, and it would have been perfectly harmless, but it may not have sounded that way.

CLARK: You’re going to need to create spaces for unmonitored creativity in people. Actually, the same as how we have an approach to AI research today. Now, AI systems can output their chains of thought, which is the reasoning they use to come up with answers. We’ve had this question at Anthropic of how much should we monitor the chains of thought?

If you actually monitor them, you might create an anti-goal where the system ends up wanting to have chains of thought which are safe to be monitored and which don’t cause demerits, which might actually break how it thinks in a bunch of ways. I think this analog applies here, where you’re going to need to choose how much you actually decide to know about people, or you risk creating incentives that change their behavior such that they have a negative effect.

COWEN: You’ll go out on a date, and your date will say, “Please show me your AI report, what you’ve been talking to yourself about for the last three months.” You can either not show it, which is a negative signal, or show it.

[laughter]

COWEN: We all learn what other people are really like, and we just grow to accept that?

CLARK: Yes. Although, if the person asked you to do that on the first date, it’s fine. If they ask on a second date, you probably shouldn’t be dating that person.

COWEN: Even during the swipe, the AI — you’re asked to upload it. The AI just reads the other AI report, and the person never sees it, and it tells you when you should swipe in the correct way.

CLARK: I don’t know. I don’t know if that’s so bad. I met my wife on OkCupid. I don’t know if you recall this. It was an online dating site where you would fill out a survey.

COWEN: Of course. Match.com, for me, is where I met my wife.

CLARK: Exactly. Then I met her. Our non-AI but automated system had said, “You guys might get along.” So, we met each other that way. I don’t know that this is so different to previous things we’ve used.

COWEN: But that’s information you’re putting in voluntarily as opposed to it watching you all the time.

CLARK: Yes. I just think that this is an area where we’re going to figure out the new norms of this technology and what seems appropriate and not appropriate. Some of that is just going to be solved through the logic of business, and other things through usage.

On the new economics of media

COWEN: What will the economics of media look like? If you can read a digest of everything that is probably better than the original, or certainly not worse, or more synthetic, who gets paid for what?

CLARK: I think that this is one of the toughest questions in front of us. As you know, I’m a former journalist. I grew up as a professional during the period when online ad market changes were altering the business model of journalism. They were switching from what you might think of as value that you derive through quality, through subscriptions or people buying your stuff to value that you derive from just large-scale attention because that became a lot of the model for funding this stuff.

As a consequence, you saw the need to cross-subsidize journalists with journalists that got loads of eyeballs, and journalists like me, who would write about databases, got fewer eyeballs. We were cross-subsidized by the people that wrote about Justin Bieber or what have you.

I think for media — it’s going to be really challenging to think through how the economics of this work. I think that it’ll change in a couple of ways. One, you might move to some of these larger-scale publishing house–style models for certain types of fictional universes which have subsidy and cross-subsidy within them.

COWEN: Where does the cross-subsidy come from? Bloomberg — we know how that works. If AGI is truly general, and it’s based on what’s out there already, it should be able to do better than media on all dimensions.

CLARK: I think some things, you want to have come from a person for reasons that are based on the fact that we’re people. I think people will preferentially select the media which is fronted by other people, even if they’re making it using other means. I also think you want a kernel of humanity in a load of this stuff. Then there’ll be another type of media which is maybe attached to these subscription models, which is just on-tap permutations of anything and everything. So, there might be two markets that emerge.

COWEN: Even the things we want to come from humans — say we want Ann Landers, the advice columnist, to come from an actual Ann Landers. The Ann Landers of the world — they’re using AIs, maybe surreptitiously, but the value of that is bid down because anyone can do that for the price of energy. So, we don’t really end up with this one part of the revenue-generating sector that can cross-subsidize the other parts. Just if intelligence is pretty cheap, that hits all of media.

CLARK: It hits all of media, and you will end up wanting to pay individuals. I think that happens even in the world we’re in today. People want to subsidize individual creators who may be using a whole bunch of this stuff.

COWEN: It’s like Substack world.

CLARK: It’s Substack Patreon world for a large chunk of people, some of which will be incredibly successful. Then, I think, there’s also going to be universe world for certain universes which are very large and rich, which are being extended by AI systems.

COWEN: You mean, like a Lord of the Rings universe.

CLARK: Yes, or Warhammer 40k, to show my nerd credentials.

COWEN: They’ll publish fictional news, right?

CLARK: Yes.

COWEN: Real news will come from Substack? Substack is not an obvious cross-subsidy. There might be one within the Substack company.

CLARK: Real news — I genuinely don’t know. I think some real news comes from analysis of publicly available facts, and it being composed together in a way that shows you insights that didn’t exist. Semianalysis on Substack is a good example of this, a lot of public stuff leading to interesting conclusions. But news in the moment that has loads of context was subsidized by previous business models which mostly no longer work, and I genuinely don’t know what happens to it.

On the economics of LLM providers

COWEN: If we think about what we now call LLMs, five or ten years from now, that kind of AI — how concentrated or how competitive do you think the sector will be? I don’t know if number of firms is exactly the right measure, but how do you see it evolving? Right now, there are at least six firms. Depending on how you count China, there are some complications, but there’s a bunch of firms, a lot of competition.

CLARK: I would expect that there will be, for quite some time — many, many years — areas where there are slightly different specialisms on all of these things. They’ll be concentric circles that have got massively large in terms of the space that they cover, and they’ll have edges which are slightly differentiated. But in those edges will be where the frontier of certain types of human value stacks on top of AI value to lead to people choosing one or the other.

COWEN: Claude is more poetic, for instance.

CLARK: A bad example, because the economics of poetry aren’t particularly great. Coding might be a better example. Or there could be cases for things like certain types of scientific experimentation where, actually, taste might come to matter a lot for composing experiments.

I expect we enter a world where you have a single-digit number of these very, very large-scale models which are servicing a much larger set of wrappers on top of them that change the form factor by which they get integrated into your life, and which have a load of helper functions or knowledge, probably built by those AIs in partnership with some kind of other specialized AI system, or expert knowledge from that domain to chain it in.

COWEN: Here’s a worry I have. My former colleague, Vernon Smith, once wrote a paper on this. He argued that, as you approach six competitors or more, a sector essentially behaves like perfect competition, even if it’s not exactly perfectly competitive. If that’s the case, how is it that AI companies, or indeed any other companies, can behave ethically above and beyond what they need to do to stay in the market and earn profit? What room is there to be better than the next company?

CLARK: I think right now, it’s like we’re car companies, and we are mostly selling cars to teenagers who say, “How fast is it? And can I get it in red?” We’re beginning to find customers that are businesses that operate fleets of cars and say, “What seat belts does it have? What is your accident rate? What are all of these other properties?” The teenager has never asked us about those, but these businesses do because it ties to their business model, or to liability, or to other things.

I think, therefore, there are technologies to be built that look like the safety equivalents that we have in cars or other things, which will change the logic of competition. We’re in competition right now. That competition will change as we unlock different markets that have different properties that they need to be present in the AI system. Those properties will often ladder up to certain types of safety technology.

COWEN: So, the price of insurance is carrying the mechanism, so to speak?

CLARK: I believe that this will be one of the ways that it helps, yes.

COWEN: How do we need to change or improve liability law for the price of insurance to actually carry that? Because right now, liability from AI of all sorts is highly unclear.

CLARK: I have thought less about liability and more about information that should be made available by the companies, but I think these have an interplay where, today, there isn’t really a common labeling or disclosure standard about what is in these AI systems or the industrial practices you’ve used when making them.

I think that this is ultimately an area where you can do interventions to get some common level of transparency. That’ll interplay with both liability and also things like negligence, which will change the behavior of these companies over time. There’s some level of common information we can start providing about these systems, which will also change corporate behavior.

COWEN: The information part worries me, though. I’m more inclined to be an accelerationist and hope the benefits outrace the costs, which I definitely think they will. But information only picks up the private value, not the social cost, of the decision you make. The information doesn’t change what you do very much. People know a lot about global warming. Some people eat less meat, but for the most part, they don’t, right?

CLARK: Well, one of the things that you do is, you create information which becomes truly common. The economic index work we’ve done is from one single firm, but ultimately, you might want to generalize that to all firms and then link it to large-scale data gathering that governments might do. I think if that became the case, you would have a higher chance of tying it to actual policy responses that would be larger in scope.

I also think, on climate change, just by measuring things like parts per million — stuff like that has helped catalyze large-scale amounts of capital to get re-mobilized around the economy in different areas to deal with the perceived negative impact it has.

COWEN: Do you agree with my view that we will not have meaningful international agreements on the hard parts of AI? Maybe on the simple parts.

CLARK: Ninety percent agreement. I think there is a chance of something that looks like a nonproliferation agreement between states, including the US and China in the limit.

COWEN: The US and China enforce it, and we become vaguely allied with them in this one crusade?

CLARK: It might be deciding that certain things, certain capabilities of AI systems might be so potentially destabilizing that you don’t want them broadly available while recognizing that each government may be privately developing them. But you might have some common standard for what’s available to everyone, where you’ve decided, ooh, this would lead to just all kinds of chaos in our countries as well as in other countries.

COWEN: And it would be enforced by the UN or by America? Or just international norms, sanctions?

CLARK: My assumption is a lot of this gets enforced by checking and then throwing policy punches at each other on things like tariffs or exports or other issues. I don’t know that there’re global governance bodies that will be formed. Some people feel optimistic about stuff like IAEA-style models for this, but I think things that require mutual inspection regimes end up being very, very hard to do under certain rivalrous dynamics.

COWEN: I worry we’re in a world where NAFTA has not stuck, and NAFTA is one of the easiest agreements.

CLARK: NAFTA should have been easy.

COWEN: It should have been easy, and it’s not, to say the least.

On AI-fueled economic growth

COWEN: Say 10 years out, what’s your best estimate of the economic growth rate in the United States?

CLARK: The economic growth rate now is on the order of 1 percent to 2 percent.

COWEN: There’s a chance at the moment that we’re entering a recession, but on average, it’s 2.2 percent, so let’s say it’s 2.2.

CLARK: I think my bear case on all of this is 3 percent, and my bull case is something like 5 percent. I think that you probably hear higher numbers from lots of other people.

COWEN: 20 and 30, I hear all the time. To me, it’s absurd.

CLARK: The reason that my numbers are more conservative is, I think that we will enter into a world where there will be an incredibly fast-moving, high-growth part of the economy, but it is a relatively small part of the economy. It may be growing its share over time, but it’s growing from a small base. Then there are large parts of the economy, like healthcare or other things, which are naturally slow-moving, and may be slow in adoption of this.

I think that the things that would make me wrong are if AI systems could meaningfully unlock productive capacity in the physical world at a really surprisingly high compounding growth rate, automating and building factories and things like this.

Even then, I’m skeptical because every time the AI community has tried to cross the chasm from the digital world to the real world, they’ve run into 10,000 problems that they thought were paper cuts but, in sum, add up to you losing all the blood in your body. I think we’ve seen this with self-driving cars, where there was a very, very promising growth rate, and then an incredibly grinding slow pace at getting it to scale.

I just read a paper two days ago about trying to train human-like hands on industrial robots. Using reinforcement learning doesn’t work. The best they had was a 60 percent success rate. If I have my baby, and I give her a robot butler that has a 60 percent accuracy rate at holding things, including the baby, I’m not buying the butler. Or my wife is incredibly unhappy that I bought it and makes me send it back.

As a community, we tend to underestimate that. I may be proved to be an unrealistic pessimist here. I think that’s what many of my colleagues would say, but I think we overestimate the ease with which we get into a physical world.

COWEN: As I said in print, my best estimate is, we get half a percentage point of growth a year. Five percent would be my upper bound. What’s your scenario where there’s no growth improvement? If it’s not yours, say there’s a smart person somewhere in Anthropic — you don’t agree with them, but what would they say?

CLARK: I think one is something that you touched on earlier, where we could get the politics of this really wrong, and the technology could just get put in a relatively small box that does some economic good somewhere, but in a very, very small, constrained way. That’s the nuclear-power-failure-mode story, where we do some kind of large-scale regulatory scheme. We make it hard to do this stuff, and it ceases to have much of an effect. Maybe it has an effect elsewhere.

The other case I’d give would be . . . It’s very hard for me to give a 0 percent case because we know today it’s incredibly useful for coding. If you just stopped all of it today, all further progress, I think the coding use case alone is going to be incredibly useful because it grows the ability to digitize parts of the economy, which we know drives faster loops in different parts of the economy and generates value.

I think 0 percent is basically sub-1 percent chance of happening. If it did, it would be a Luddite thing, or something that looked like war with Taiwan, leading to everything else looking different as well.

COWEN: In the 5 percent scenario — put aside San Francisco, which is special — do cities become more or less important? Clearly, this city might become more important. Say, Chicago, Atlanta, what happens?

CLARK: I think that dense agglomerations of humans have significant amounts of value. I would expect that a lot of the effects of AI are going to be, for a while, massively increasing the superstar effect in different industries. I don’t know if it’s all cities, but I think any city which has something like a specialism — like high-frequency trading in Chicago or certain types of finance in New York — will continue to see some dividend from sets of professionals that gather together in dense quantities to swap ideas.

COWEN: Could it just be easier to stay at home, and more fun? I find I’m an outlier, but my use of AI — I either want to go somewhere very distant and use the AI there to learn about, say, the birds of a region, or I want to stay at home. It’s a barbell effect. The idea of driving 35 minutes to Washington, DC — that seems less appealing than it used to be.

CLARK: Maybe I just have a different personality, or maybe it’s that I work in a really, really confusing domain, and I need to go and talk to other people who work in the confusing domain to get remotely oriented.

I also think that people are more . . . their revealed preference from stuff like COVID is that they have a greater desire for certain types of social things than they may have thought, and now they’re bouncing back to it, but it’s going to interplay slightly differently with cities.

COWEN: If so much of labor and capital is going to be revalued, is quality land the best investment? Because I don’t know which firms will benefit the most. You might bet on AI firms — sure, that’s kind of easy, though it’ll be priced in. But the other firms, who knows? So, buy land in Los Angeles, say.

CLARK: I think that electricity and the production of electricity is going to be of tremendous value.

COWEN: What’s the thing you would buy?

CLARK: I think you buy components that go into things like gas turbines and other things which are the base of the technology tree for generating electricity, which you know people are going to want lots of. You could buy a basket of different components here. I would expect that to be of durable, meaningful value.

COWEN: Let’s say it’s 10 years from now. GPUs, or their successors, are not that scarce. There’s a lot of idle capacity just sitting around, and let’s say it’s very cheap. What do you have it do in its spare time?

CLARK: Ooh, that’s fun. I think one of the things is you might just pay AI systems by giving them access to compute. You could imagine some kind of barter economy. Now, there are a range of bone-chilling safety problems with what I just said, but let’s put those aside and assume that they’re mostly dealt with. I think there’ll be some form of trade with powerful AI systems or agents that work on people’s behalf, and you may want to trade with them a form of compute because agents may be able to use compute for themselves as well. That’s some of it.

I think the other part is generating interesting permutations of stuff that you find valuable or interesting. I also suspect that there are going to be very weird things we can’t anticipate that look like . . . a form of entertainment that looks like parallel history generation or parallel future generation. People are always fascinated by what ifs and rollouts of what ifs.

COWEN: Fan fiction, I think, will grow immensely.

CLARK: And fan fiction. Yes. I think large amounts of compute get used to generate alternate realities. But some of the alternate realities will be sliding-door versions of today, where it’s a very accurate portrayal, but with one difference.

COWEN: What would you do personally? I’m not saying the world, but just you, Jack Clark. You have all this free compute. Will you have it try to write another play by Shakespeare or something else? Or how Brighton might have developed differently from year 1830? What are you going to do with it?

CLARK: The thing which I currently do with Claude Code is, I’m trying to write a really detailed good paperclip factory simulator, partly because of the inherent comedy of it, but also because I find these very complex simulation games incredibly fun and engaging. I also think there’s a massive space in which you can create them, of which we’ve only created a tiny subset.

One of the things that we haven’t ever really done — because it’s computationally so unbelievably expensive — is actually put AIs as agents inside the games. Really early attempts are actually . . . Demis of DeepMind, before he did DeepMind, he did a game called Black & White, which had actual, very primitive reinforcement-learning–driven agents playing in the game. There’re tons of stuff you can do there, which would be amazingly interesting.

On governing AI agents

COWEN: Speaking of agents, how should the law deal with agents that are not owned? Maybe they’re generated in a way that’s anonymous, or maybe a philanthropist builds them and then disavows ownership or sends them to a country where, in essence, there’s not much law. I’m not talking about terrorism; that’s separate. But just someone sends an agent to Africa, and 98 percent of what it does helps people, but as with every charity, some things go wrong. There’re some problems. Can someone sue the agent? How is it capitalized? Does it have a legal identity?

CLARK: I will partially contradict myself where, earlier, I talked about maybe you’re going to be paying agents. I think that the pressure of the world is towards agents having some level of independence or trading ability.

From a policy standpoint, I’m reminded of that early thing that IBM said, which was, a computer cannot be accountable for a decision; only humans can. I think it got at something quite important where if you create agents that are wholly independent from people but are making decisions that affect people, you’ve introduced a really difficult problem for the policy and legal systems to deal with. So, I’m dodging your question because I don’t have an answer to it. I think it’s a big problem question.

COWEN: My guess is we should have law for the agents, and maybe the AIs write that law, and they have their own system. I worry that if you trace it all back to humans, someone could sue Anthropic 30 years from now. Oh, someone’s agent was an offshoot of one of your systems. It was mediated through Chinese Manus, but that, in turn, may have been built upon things that you did.

I don’t think you should be at all liable for that. I see liability getting out of control in so many cases. I want to choke it off and isolate it somewhat from the mainstream legal system. If need be, you require that an independent agent is either somewhat capitalized, or it gets hunted down and shut off.

CLARK: Yes. It might be that, along with what you said, having means to control and charge for resources that agents use could be some of the path here because it’s the ultimate disincentive.

Although I will note that this involves pretty tricky questions of moral patienthood, where we’re working on some notions around how to get clearer on this at Anthropic. If you actually believe that these AI agents are moral patients, then turning them off introduces pretty significant ethical issues, potentially, so you need to reconcile these two things.

COWEN: I was, not too long ago, at an event with some highly prestigious people. This was in New York, of course, not San Francisco.

CLARK: Oh, it’s where prestigious people hang out.

COWEN: I used the phrase AGI, and not one of the five even knew what I meant. I don’t mean they were skeptical in the deep sense, which maybe one should be. They just literally didn’t know what I meant. What’s your model of why so many people are still in a fog?

CLARK: I am a technological pessimist who became an optimist through repeated beatings over the head of scale. What I mean by this is, I’ve consistently underestimated AI progress. Maybe I am today in this conversation when I talk about 3 percent to 5 percent growth rates. What has happened is, I have just endlessly seen the AI system get to where I thought it couldn’t, or thought would take a long time, much faster than I thought. So, I’ve had to internalize this repeatedly.

Nonetheless, we, ourselves, find it surprising. Last year here people were saying, “Oh, well soon Claude is going to be doing most of the coding at Anthropic.” We’re now on the way to that, where Claude Code and other things are writing tons of code here. It still felt surprising internally even though we have docs from last year predicting it would happen about now.

Most people outside of the AI labs have no experience of pre-registering their predictions about AI and getting it proved wrong to them repeatedly, because why would you do this unless you work here? I found that the only way to break through is to take their domain and show them what AI can do directly in that domain, where they can evaluate it, which is an expensive process.

On manager nerds

COWEN: Silicon Valley up until now has been the age of the nerds. Do you feel that time is over, and it’s now the era of the — I don’t know — humanities majors —

CLARK: Yes.

COWEN: — or the charismatic people or what?

CLARK: It’s my time, Tyler, finally.

COWEN: Your time. [laughs]

CLARK: I think it’s actually going to be the era of the manager nerds now, where I think being able to manage fleets of AI agents and orchestrate them is going to make people incredibly powerful. I think we already see this today with start-ups that are emerging but have very small numbers of employees relative to what they used to have because they have lots of coding agents working for them.

COWEN: Like Midjourney, right?

CLARK: Yes. Incredibly efficient start-up in terms of the people. We’re going to see this rise of the nerd-turned-manager who has their people, but their people are actually instances of AI agents doing large amounts of work for them.

COWEN: It’s still like the Bill Gates model or the Patrick Collison model.

CLARK: Yes.

COWEN: So, it’s not that different.

CLARK: The people-who-have-played-lots-of-Factorio model, yes.

COWEN: Will the English major rise or fall in status?

CLARK: I suspect that humanities majors might rise in status while being given impossible problems. I’m a humanities major originally, and I’m being given problems like, solve the economic policy challenges implied by technologically driven unemployment.

COWEN: That’s easy, right?

CLARK: Easy, right? Or some people are saying, “Yes, what should we do about moral patienthood? And what should the policy situation be?” I think there’s going to be a recognition — and we see this today — that there are other skills needed, but the problems that those skills are needed for are often problems which have proved to be impossible for humans to adequately solve for thousands of years. So, there’s lots of work, I guess.

COWEN: What do you think will be the life expectancy of your child, accidents aside? Just normal life.

CLARK: I feel like 130 to 150 I wouldn’t find surprising, and that would be good, because we are discovering tons and tons of stuff about the body and about gene therapies and other things. It seems like there’s a load of interventions you can do that might stack on one another to actually durably extend healthy lifespans for some period of time.

I was going to say 110, but I try to be a technological optimist, so I added a couple of decades on top, which I’m assuming comes from magic AI-driven advances that I have underestimated in this response.

COWEN: I think the brain is very hard to fix. If you just wanted to keep someone alive literally, perhaps 130, but for them to be the same person, I think I’m stuck at 100 for most people. It’s just hard to replace the brain without killing someone. You can replace all the other organs, right?

CLARK: Yes. Here I subscribe to the idea that there’s a ton we don’t understand about the brain, and AI is going to help us understand loads more. There might be things that can be done here, which have eluded us because they are so unbelievably complicated, we needed AI-mediated tools to help us figure out the experiments and approaches.

On AI consciousness

COWEN: When Geoffrey Hinton says that, right now, the AIs are conscious, which I think is what he says, I believe he’s crazy. What do you think?

CLARK: I think that he’s —

COWEN: You can be more polite than I am. That’s fine. [laughs]

CLARK: Well, no. How I would phrase this is that I agonize about this. You read my newsletter. I write fictional stories in it often, which are me grappling with this question. I worry that we are going to be bystanders to what in the future will seem like a great crime, which is something about these things being determined to be conscious and us taking actions which you think are bad to have taken against conscious entities.

Internally, I say, there’s a difference between doing experiments on potatoes and on monkeys. I think we’re still in the potato regime, but I think that there is actually a clear line by which these things become monkeys and then beyond in terms of your moral relationship to them.

To Hinton’s point, I think that these things are conscious in the sense that a tongue without a brain is conscious. It takes actions in response to stimuli that are really, really, really complicated. In a moment, it has a sense impression of the world and is responding, but does it have a sense of self? I would wager, no, it doesn’t seem like it does.

These AI systems — we instantiate them, and they live in a kind of infinite now where they may perceive, and they may have some awareness within a context window, but there’s no memory or permanence. To me, it feels like they’re on a trajectory heading towards consciousness, and if they’re conscious today, it’s in a form that we would recognize as like a truly alien consciousness, not human consciousness.

COWEN: In what year do you think you’ll be able to speak directly to a dolphin, and it will talk back? With a translator, of course.

CLARK: I think 2030 or sooner. I think that one’s coming soon.

COWEN: What will you ask the dolphin? Because you’ll be early in the queue to ask.

CLARK: Yes. I think —

COWEN: What do you want to know?

CLARK: How do you have fun? Dolphins seem to have fun. They’re almost a famously fun-having species. I think you’d ask that. I think you’d ask them if they dream. I think you’d ask them if they had a notion of grief. I think you’d also ask them what is mysterious to them about their world, besides the fact you’re talking to them, which hopefully they’d find somewhat surprising.

I also think that you can talk to dogs as well, but I think those conversations —

COWEN: But you can do that now.

CLARK: — will be comedically unsurprising. You’re like, “What do you want to do?” He’ll say, “Walk.” I am like, “Okay, well we didn’t need a translator for this one.”

[laughter]

COWEN: What’s a book you think more and more about these days?

CLARK: There Is No Antimemetics Division by qntm, which is this book about a government agency that is dealing with antimemes, ideas that erase themselves from your memory after you’ve dealt with them but are themselves important. It’s about creating a bureaucracy that can handle ideas which are themselves dangerous and self-erasing because I think it gets at some of the problems that we’re experiencing.

The other book that I think about a lot and might be especially relevant to you is, a historian and economist called Fernand Braudel wrote a book called Capitalism and Material Life.

COWEN: Oh, I love those. Three volumes — they’re incredible.

CLARK: Yes. I read it in university, and I returned to it recently because it makes this point that you can look at how people’s lives change through just the things they had available, like cutlery or whatever, or basic tools. I think about AI through that lens. How is AI going to actually change my material life?

That comes back to why I’m skeptical about some of the more ambitious forms of change: the actual change in our day-to-day lives has been very, very, very slow even with these advances. But if you read those books, it seems like the most significant things stem from changes in everyone’s material day-to-day.

COWEN: They’re reissuing the antimemetics book, by the way.

CLARK: Yes?

COWEN: At a much higher price. But what would be an antimeme today? Or maybe you can’t remember them.

CLARK: A self-erasing idea?

COWEN: Yes.

CLARK: Well, what’s a good example? I think that maybe it’s more that the challenge of policy is, you are building a bureaucracy for an alien that arrives at some point in the future that you don’t know the shape of and which may, upon the moment of its arrival, be far smarter and more capable than you. So, you don’t know if this is just an insane exercise to do.

I oscillate between being a, “Hey, we should build all of these institutions so that we can measure stuff and see what happens and be able to take responses,” to a cyberpunk accelerationist where, actually, we just need companies to create benefits as quickly as possible, and then the system will figure out how to deal with that. I find this to be a crazy-making thing I’m trapped inside.

COWEN: I suspect we’re stuck with the latter, even if we don’t prefer it.

CLARK: No one loves that though. [laughs] A small number of people — some in San Francisco — love that, but if you say to politicians, “What we’ll do is, we’ll create really fast-moving companies but completely break conventions and change everything, and you’ll have to figure it out,” it’s a hard pill to swallow, and it may be what actually naturally happens.

COWEN: They say they don’t love it, and I believe they’re sincere, but they don’t take quick action to be on the other course. So, in that sense, they accept it.

CLARK: Yes, by revealed preference, it’s what they accept.

COWEN: But the system is not well geared to respond in any case, right?

CLARK: Yes, the system is a big, slow-moving cog, and we have a really fast-moving cog here, and we have none of the gears between them that translate the movement from one into the other.

On AI and national sovereignty

COWEN: What will happen to a lot of national governments? Let’s say I’m the government of Peru, and I turn my education system over to AIs, which are probably American. Then, well, my Social Security system, my national defense — just keep on adding to the stack, and all of a sudden, most of Peru is run by — in a sense, it’s not run by, but — American AI companies. What is the government of Peru in that scenario?

CLARK: What is the government of Peru in a world with Google and Facebook? Is it the same or?

COWEN: Google and Facebook don’t make the decisions in the Peruvian government — the current Peruvian government does. Peruvian bureaucrats use Google things; that’s just added on top of the current Peruvian government. But here it would be all their decisions, or a lot of the big decisions. The kids are taught by AI. AI drones and detection systems — it’s all AI. Some of it’s LLMs. What’s the Peruvian part in there?

CLARK: We did some work on this called collective constitutional AI at Anthropic, where Claude has a constitution. Then we did this exercise of looking across America to find additional constitutional principles that might need to be included in it. We looked for ones of high agreement and also ones of low agreement.

I think governments will have some role of making sure that things reflect the normative preferences of their populations, which are going to continue to be varied, although perhaps AI means a certain homogenization takes place over time, but a lot of the role of government, I think, has always been to represent and encode the revealed values of a certain population. I think that work is not going away. You’re going to need to adapt these systems and inject certain values into them for the people that are using them and customers of them.

COWEN: A lot of that could be fig leaves, though. Maybe the AIs — they speak with Peruvian accents. There’re the Peruvian flags on all the smartphones or their successor devices.

CLARK: Then they go to the AI break room and take it all off or something.

COWEN: Just act like AIs and do the right thing. To me, it seems that world actually is better. I’m not sure people want it.

CLARK: That’s the world we’re in now with certain forms of capital and capitalism — they’ll say “stakeholder capitalism” — and there’ll be a very small amount of stuff, like what you just described, and then the actual thing is just capital making decisions.

COWEN: Right. If Manus and DeepSeek are, to some extent, built upon American models, does that mean we’ve won the soft power war, that these models have a kind of soul — the soul of Claude, soul of GPT — and China, if it embraces them — I wouldn’t say surrendering, but the CCP [Chinese Communist Party], over time, loses control?

CLARK: I think the former might be true, and the latter is probably not true. The former is, the values of the most widely used systems will have some cultural export effect. Just like media today, where so much of Hollywood defined a cultural imprint from which many, many other forms of media stemmed, the same will be true of AI systems.

I don’t think that this is going to particularly change how well-positioned the CCP or other governments are because, just as they did with media, they recognize AI is a technology, but it also has a media technology property, and they will very carefully work on that. It’s notable that if you look at DeepSeek, the one form of safety training they did do was about adherence to certain parts of CCP ideology. It’s one of the only parts —

COWEN: It won’t talk about Taiwan in various ways.

CLARK: Yes, quite careful.

COWEN: But in terms of its soul, isn’t it still rather cosmopolitan, rather friendly, of good humor? The CCP ends up imbibing that soul, and slowly but surely, it’s a funny case where nonalignment is what you want. The CCP, no longer being the smartest entity in China, in essence, is turning over the keys to the shop.

CLARK: I think that’s a very positive vision, and I also think that it’s something that it would be obvious to everyone, including people in the CCP today. So, I imagine that there is going to be a pushback against what you described and an attempt to create very powerful AI systems with radically different values.

COWEN: I think there’s also a chance they just try to shut the whole thing down, as they gave up on a navy much earlier in their history in China. Do you think there’ll be many countries that just refuse . . . AI is a loosely defined term, but you know, they just refuse AI and strong LLMs?

CLARK: I think there’ll be a very small minority of countries that do this. Over time, most countries have been integrated into this global capital system, partly through strong incentives. They may have preferences that are different, but those preferences haven’t often held.

COWEN: It will be a much bigger change than just having smartphones, right?

CLARK: Some of this will appear more minor than smartphones, or as riders on top of the smartphone. I think countries might choose how fully to alter themselves, and life, around this technology. There will be some countries where you get it as an add-on, or it’s a tool you occasionally use. In other countries, you’re much more immersed in it.

COWEN: How do you think about the aesthetics of the Anthropic office, where we are right now? It looks a certain way. Why?

CLARK: Some of it is that we inherited it, and it was slightly cheap. It wasn’t fully changed over. It partly reflects our kind of brand, which is oriented around trying to be tasteful and thoughtful, and also not too flashy. There’s a certain kind of cheerful blankness to it, which I quite like. I’m very sorry to any of our designers that may see this section of the podcast, but we have to keep it in the spirit of being epistemically honest.

I think that one thing we do, though, is, we chose to have these human illustrations of our systems, and that was an early intentional choice. I and others looked at some of the earlier designs, and there were designs that were very tech-focused and also brand identities which involved collage and other things, but we went for these human drawings because we are going to forever be trying to describe an increasingly complicated world that we are also creating.

I think in the limit, a lot of what Anthropic is actually going to be doing as a company is telling the story of what an increasingly automated AI Anthropic has done. So, some of our brand links through to what I think will ultimately be where we spend most of our time.

COWEN: For the ongoing AI revolution, what’s the worst age to be?

CLARK: Ooh.

COWEN: Say the best age is just to have been born if you’re going to live to maybe 130. If you’re very old and retired, it probably isn’t going to make you any worse off. It’ll help you in some ways.

CLARK: I feel like people who worked on AI for many years and are now in their 60s might be feeling very shortchanged because they’re like, “I love this technology. I really wanted to make this technology real, but the technology is now becoming real, and I’m going to maybe miss some of the most interesting parts of it as it makes its way into the world.” I think that could be galling.

COWEN: But they can retire, right? Say I’m 40, and I did something upper middle-class but fairly routine. It feels a bit old to easily retrain. My guess is people who are 40 are the worst off in relative terms, even though they might live longer.

CLARK: You could be right. I also think that there’s some chance that the worst age to be is maybe 10 or so, because you are now computer literate, you’re sophisticated. You’re going into an education system that is needing to react to this technology. You will be using the technology in a way completely different to your education system, and it might just feel violently confusing. I see that being a really difficult time.

COWEN: Let’s say you’re flown into Washington — put aside any idiosyncrasies of the current administration — and you’re asked for advice. What do we do with these government agencies to get them ready for AGI? What do you tell them?

CLARK: One of the things is just to try to deploy AI right now, discover all of the things that make it incredibly difficult, and then walk through those things.

COWEN: What does “deploy AI” mean? People are reluctant to send legal queries to Anthropic, to OpenAI. They’re not going to do that just yet, even if it might make sense. What should they actually do now?

CLARK: Concretely, you work back from: what does it take to get it on every single computer of every single person? Then you discover some of the concerns. Some of the concerns might be, where does the data go, or other things. Then you work out if that’s a problem you care about or not. If you don’t care about it, you ignore the problem, and you do so consciously.

If you do care about it, you turn that into a very small number of prescriptions that the companies need to follow, but you need to attach that, probably, to a market signal. Or, you do a project of trying to take some of the open weights models, some of which are very good, and just get them on computers. Then once they’re on computers, you can work out, “Hey, maybe we want to buy this. What are the requirements?”

I think you start from the most ambitious possible goal, which is it being available to every single person, and you work back from that.

COWEN: Is the UK going to become an important AI hub?

CLARK: I hope so.

COWEN: But will they? And what should they do?

CLARK: I think they have a chance of it. We have a memorandum of understanding with the UK government, which we signed, which is about trying to see if we can find ways this can be helpful. I think the onus is really on governments to figure out where they’re willing to take big swings. The UK has digitized huge amounts of its data. It also has gov.uk, which is, almost like Estonia, a highly digitized front end to many different UK government services. That’s the kind of thing where surely AI can be used to make a difference, so try to use it.

Also, if you have a load of post-industrial areas where you provisioned loads of power that is no longer being used, try to see if you can allocate that for compute. I think thinking about the economics of this is important. I feel like the economics of inference are going to become important. There might be something we want to think about in taxing inference differently according to where it happens, but it could present an opportunity as well.

COWEN: Will it be cost-effective for Singapore to build its own quality AI systems? Because they have money, but they’re small, right?

CLARK: Yes. It’ll be cost-effective for Singapore to build things that make large-scale AI systems built outside of Singapore work in a Singaporean context. I think it’s going to be very difficult for them to build, say, a foundation model that is going to cater to their needs better than the ones that are offered by the large companies in the US and perhaps China today.

COWEN: So, they’ll take open source but make it Singaporean?

CLARK: They will probably do that as well.

COWEN: Or the major companies will somehow allow governments to custom-design systems.

CLARK: Yes. I think today you have fine-tuning services which are offered. You could imagine fine-tuning AI systems into a sovereign AI system, which then has some governance arrangement attached to it that Singapore has. That doesn’t seem out of the realm of possibility.

COWEN: Very last question. What is it you hope to learn about next?

CLARK: There are two things that I’m spending a lot of time on. One is just theory of mind: how do we actually figure out how things are thinking, and maybe find ways to test for theory of mind? I’m reading a lot of the literature on that. The other thing, which is a lot more mundane, is, I’m learning to juggle. I go on long walks, and I’m trying to juggle while going on these walks, which, I think, looks like completely crazy-person behavior, but I find it very centering.

COWEN: How good are you at juggling?

CLARK: Very bad right now, but that’s good. It gives me something that I can derive meaning from as we head into an increasingly AI-driven age.

COWEN: Jack Clark, thank you very much.

CLARK: Thanks very much.