William MacAskill on Effective Altruism, Moral Progress, and Cultural Innovation (Ep. 156)

If moral philosophy is a train to crazy town, at what stop should we disembark?

When Tyler is reviewing grants for Emergent Ventures, he is struck by how the ideas of effective altruism have so clearly influenced many of the smartest applicants, particularly the younger ones. And William MacAskill, whom Tyler considers one of the world’s most influential philosophers, is a leading light of the community.

William joined Tyler to discuss why the movement has gained so much traction and more, including his favorite inefficient charity, what form of utilitarianism should apply to the care of animals, the limits of expected value, whether effective altruists should be anti-abortion, whether he would side with aliens over humans, whether he should give up having kids, why donating to a university isn’t so bad, whether we are living in “hingey” times, why buildering is overrated, the sociology of the effective altruism movement, why cultural innovation matters, and whether starting a new university might be next on his slate.

Watch the full conversation

Recorded July 7th, 2022

Read the full transcript

TYLER COWEN: Hello, everyone. Welcome back to Conversations with Tyler. Today I’m here with Will MacAskill, who is one of the most important and influential philosophers, period. August 16 is the publication date of his new and excellent book, What We Owe the Future. Will, most of all, is known for being a leader, perhaps the intellectual leader, of the effective altruism movement. Will, welcome.

WILLIAM MACASKILL: Thanks so much for having me on.

COWEN: Of all the inefficient things, which is the one you love most?

MACASKILL: Of all the inefficient things? There are some amazing examples of inefficient charities that I love, in the sense that they give me pleasure to think about. One of my favorites is a charity called ScotsCare, which was set up in the early 17th century. It’s called the Charity for Scots in London, and it’s dedicated to helping Scottish people in poverty in London.

You might think that’s a pretty strange aim, but it made more sense when the charity was set up in the early 17th century. The personal union of England and Scotland had just recently happened. There was migration from Scotland to London, so Scottish people were poor immigrants. It made sense for it to be a charity. Four hundred years on, perhaps it doesn’t make quite as much sense.

I like it partly — especially given that London is pretty affluent, especially compared to many areas of Scotland — I like it as an example of what gets called the “dead hand problem” in philanthropy, where nonprofits can be founded with a very specific mission, and that mission can become increasingly absurd over time. I think it’s a good and heartening lesson for people who are trying to do good, looking into the future, too. You want to have aims that are sufficiently flexible that, as the environment or things change, they still keep making sense.

COWEN: Does liking the inefficient mean that you are a pluralist and not a utilitarian?

MACASKILL: Well, I think I’m neither a pluralist nor a utilitarian. A pluralist is someone who thinks that there are many sources of moral reasons. A utilitarian thinks there’s only one. I say I’m not a utilitarian because — though it’s the view I’m most inclined to argue for in seminar rooms because I think it’s most underappreciated by the academy — I think we should have some degree of belief in a variety of moral views and take a compromise between them.

COWEN: But that is pluralism, right?

MACASKILL: No. Pluralism would be saying, “There is one true moral view, and that view says there are multiple competing sources of reasons.” Whereas —

COWEN: The true moral view can be a probabilistic assemblage of the different things you think might matter. If you go just a little bit more meta with pluralism, it encompasses what you think.

MACASKILL: Yes, that is my view, and it ends up looking very similar to what is known as pluralism because I end up paying attention to different sorts of moral reasons. But I think that the actual moral truth might be quite simple, whereas the pluralist, in terms of first-order moral theorizing rather than the meta theorizing that I’m proposing, says that moral reality might be really very complex or messy.

COWEN: If we’re assessing the well-being of nonhuman animals, should we use preference utilitarianism or hedonistic utilitarianism? Because it will make a big difference. We’re not sure all these animals are happy. They may live lives of terror, but we’re pretty sure they want to stay alive.

MACASKILL: It makes a huge difference. I think the arguments for hedonism as a theory of well-being — where that’s saying that well-being consists only in conscious experiences: positive ones contribute positively, negative conscious experiences contribute negatively — I think the arguments for that as a theory of well-being and the theory of what’s good are very strong. It does mean that when you look to the lives of animals in the wild, my view is it’s just very nonobvious whether those lives are good or not.

That’s me being a little bit more optimistic than other people who have looked into this, but the optimism is mainly drawing from lack of knowledge — I think we know very little about the conscious lives of fish, let alone invertebrates. But yes, if you have a preference-satisfaction view, then I think the world looks a lot better because beings, in general, want to keep living.

Actually, when we look to the future as well, I think if you assess how good the future is going to be on a hedonist view, well, maybe it’s quite fragile. You could imagine lots of ways the future of civilization could go, where they just don’t care about consciousness at all, or perhaps the beings that will exist are not conscious. But probably, beings in the future will have preferences, and those preferences will be satisfied. So, in general, moral reality looks a lot more rosy, I think, if you’re a preference satisfactionist.

COWEN: But it’s possible, say, in your view, that human beings should spend a lot of their time and resources going around destroying nature, since it might have negative net expected utility value.

MACASKILL: I think it’s a possible implication. I think it’d be very unlikely to be the best thing we could be doing because once —

COWEN: But there’s a lot of nature. We have very effective bombs, weapons. We could develop animal-killing weapons if we set our minds to it.

MACASKILL: That’s right. There’s a lot of nature, but there’s far more future. If we’re willing to take philosophical reasoning far enough, where we’d be seriously considering removing nature, then you should be taking much more seriously the fact that we can have this enormous impact over our long-run future. I will caveat and say I really don’t know whether animals in the wild have lives that are good or bad.

When you look at the world as a whole, it gets incredibly determined by where the cut-off for conscious experiences is. Are ants conscious? They have an awful lot of the total neuron count of animals in the wild. If they are, do they have lives that are good or not? I’m like, “I have no idea.” [laughs]

COWEN: I worry a bit this is verging into the absurd, and I’m aware that word is a bit question-begging. But if we think about the individual level — like what do you, Will, value? — you value, in part, the inefficient. It’s very hard to give people just pure utilitarian advice, because they’re necessarily partial.

At the big macro level — like the whole world of nature versus humans, ethics of the infinite, and so on — it also seems to me utilitarianism doesn’t perform that well. The utilitarian part of our calculations — isn’t that only a mid-scale theory? You can ask, does rent control work? Are tariffs good? Utilitarianism is fine there, but otherwise, it just doesn’t make sense.

MACASKILL: Okay. There is what we might call the train to crazy town. We have all these starting moral intuitions. What I see as the project of moral philosophy is reconciling them, using theory and careful reasoning to make moral progress, which often involves creating simpler and explanatorily powerful theories that move away from your commonsense intuitions. Then the question is, how far are we willing to move? Very difficult methodological question.

You brought up infinite ethics. That is something where I certainly, in practice, do not bite that bullet or follow that implication — where, for the listeners, the argument is, “Okay, the utilitarian wants to maximize the good,” understood as total well-being. Now, in my book, I argue there are enormous amounts of value at stake when we consider the long term, so we should reduce the risk of extinction and promote good values so that we make the most of that, make the most of all value in the long term.

However, someone could expand the pie and say, “Well, that’s just piddling finite amounts of value. What about the possibility of creating infinite amounts of value? Because religious traditions say that one can create infinite amounts of value, that heaven is infinitely good. And you’re a good Bayesian. You don’t have credence zero in the idea of there being a God that could produce infinite amounts of value.”

I would say, “No, I don’t.” If so, well — even if I put it at one in a trillion that there’s such a God, multiply one in a trillion by infinite positive value, and that overall expectation is infinitely great, and that’s what we should be focusing on. I will acknowledge, I get off the train to crazy town before I’m at that point. There is —

COWEN: But why not get off the train a bit earlier and just say, well, the utilitarian part of our calculations — it’s embedded within a particular social context, like, how do we arrange certain affairs of society? But if you try to shrink it down to too small — how should you live your life — or to too large — how do we deal with infinite ethics on all of nature — that it just doesn’t work. It has to stay embedded in this context. Universal domain as an assumption doesn’t really work anywhere, so why should it work for the utilitarian part of our ethics? Get off the train at stop 2, not stop 17.

MACASKILL: [laughs] Stop 17. I agree, there’s a hard choice there, and certainly, as someone who takes action in the real world as well, it’s very notable to me how much you end up just infusing your action with commonsense moral reasoning. It’s always unclear — is that on sophisticated consequentialist grounds? Or is it just that one is acting pluralistically? I think you should take it on a case-by-case basis.

I think that, actually, the issue of wild-animal suffering sounds — no pun intended — completely wild when you first hear it. But I think it’s not that many steps away from commonsense moral reasoning. I don’t have a pet; my friends have pets. They care greatly about the lives of their pets, about their well-being. That’s just a very standard, commonsense moral view.

Then next — does the moral worth of a creature change depending on whether it happens to be your pet or is born in the wild? I think there’s a good argument for thinking “no.” Then there’s the question, “Well, if you think it’s good to invest resources in improving the well-being of your pets, then yes, maybe it’s good to invest resources in improving the well-being of animals in the wild.”

Then, I think the reaction that people have, which is like, “This is just so crazy” — partly, it’s not really thinking about it, but partly also, it’s just worries about interfering with nature having negative backfiring consequences, and I think those arguments are just good. Maybe you think, “Oh, predation is bad, so we’re going to stop predators.” But then that leads to other, worse consequences.

I think it is true that you’re dealing with an environment that we don’t fully understand. From the wild-animal–suffering perspective, it may be very pro more research, more thinking about this. I’d be pretty wary of just paving over the jungle because, on the basis of our very nonrobust evaluation, we think that animal lives are, on average, negative.

COWEN: Let me ask you the question I asked Sam Bankman-Fried. Let’s say we take the known world of living beings, however large that may be, and the demon offers us a bet. We can double that world with probability 51 percent, but with 49 percent, it all goes away and disappears, and everything’s gone. Now, in expected value terms, that’s a good bet, right? Should we do that? Sam said yes. He’s like, “I’m going to bite that bullet. I want to bite this bullet,” he said. What’s your view?

MACASKILL: One first thing is, we’ve got to carefully state the question. If you’re just giving me a doubling of the world as it is, well, I think, again, almost all value is in the future. It’s to come. Instead, the thought experiment should perhaps be —

COWEN: Well, that doubles too, right? The future is going to double, everything.

MACASKILL: Okay, good.

COWEN: Spell it out all carefully, but it’s a double-or-nothing bet at 51 percent odds.

MACASKILL: Yes.

COWEN: I say, no way should you do it.

MACASKILL: Yes. I also admit, intuitively, I have very rapidly diminishing returns to value. So, intuitively: take one galaxy that’s full of bliss, the best possible galaxy, for certain — versus 50/50 that all accessible galaxies are flourishing, and so on. There are 20 billion of them, and I’m like, “Nope, don’t want to take that bet.”

I also think that there are issues for expected value theory, in general. It comes in, in particular, with low probabilities of large amounts of value.

COWEN: Sure. Pascal’s wager.

MACASKILL: Pascal’s wager.

COWEN: St. Petersburg paradox.

MACASKILL: Exactly, yes. We’re getting into all sorts of messes there. But in this case, it’s not an example of very low probabilities and very large amounts of value. Your view would have to argue that, “Well, the future, as it is, is close to the upper bound of value,” in order to make sense of the idea that you shouldn’t flip 50/50. I think, actually, that position would be pretty hard to defend, is my guess. My thought is that, probably, we’re in a situation where any view you take ends up having pretty bad, implausible consequences.
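[Editor’s note: for readers who want the arithmetic behind the demon’s bet spelled out, here is a minimal, purely illustrative sketch in Python. The 51/49 odds and the doubling payoff are simply the figures from the exchange above; nothing in the code comes from the speakers.]

```python
# Illustrative sketch (editorial, not from the conversation): the naive
# expected-value case for the demon's double-or-nothing bet discussed above.
# Treat the current value of the world, including its whole future, as 1 unit.

p_win = 0.51                 # probability the world doubles
p_lose = 0.49                # probability everything is lost

current_value = 1.0
ev_if_bet = p_win * (2 * current_value) + p_lose * 0.0

print(ev_if_bet)             # 1.02 > 1.0, so naive expected value says "take the bet"
```

On pure expected value the bet looks favorable (1.02 versus 1.0), which is exactly why Cowen uses it as a stress test for that style of reasoning rather than as a live proposal.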

COWEN: Your response sounds very ad hoc to me. Why not just say, in matters of the very large, utilitarian kinds of moral reasoning just don’t apply. They’re always embedded in some degree of partiality. The 51 percent–49 percent bet is not great for our partiality toward ourselves, and we just can’t go there.

It’s not that there’s some other theory that’s going to tie up all the conundrums in a nice bundle, but simply that there are limits to moral reasoning, and we cannot fully transcend the notion of being partial because moral reasoning is embedded in that context of being partial about some things.

MACASKILL: I think we should be more ambitious than that with our moral reasoning. Think of how we’ve done moral reasoning many times in the past. We could simply say, “Look, there are many of these different considerations. It’s all pluralist.” But at some point, even though I can’t give it a good argument, the utilitarian-esque reasoning seems so compelling when you’re talking about saving one life versus ten. And we think, “Oh, clearly the ten is more important,” including a 50/50 chance of saving ten lives versus one.

It’s like, okay, you should go with the math. Over time that would save more lives. Then you say, “Oh, no, at some scale that sort of reasoning breaks.” That’s what seems ad hoc to me. If you’re saying, “Oh, well, these arguments are pushing you in a certain direction,” but then at some scale — what exactly is the scale? Is it a thousand lives? A million lives? A billion lives? It seems like nothing qualitatively different has happened.

Whereas the thing that I want to say is that there’s a qualitative difference, something to do with when we’re juggling probability against value. That’s where maybe the pure, just multiplication . . . Or there’s something going wrong with expected value theory, and I can constrain the issues to that. Whereas if I’m just saying, in general, when the scales get big, drop utilitarian-esque reasoning — that seems unmotivated to me.

COWEN: I don’t think it’s just probabilistic questions. You’re very familiar with the repugnant conclusion.

MACASKILL: Yes.

COWEN: You know we haven’t solved it. There’s nothing probabilistic there. It just seems to be another case where, when you stretch the limits far enough, nothing works, and that you have Pascal’s wager, the 51/49 gamble, the repugnant conclusion, many other paradoxes in moral philosophy — they all seem to kick in. In my view, that’s not an accident. There’s no reason to, ad hoc, try to address every one. We just need to downgrade where we think a certain kind of consequentialist reasoning could apply.

MACASKILL: Okay. I think these paradoxes show something much more thoroughgoing than an issue for consequentialism.

Also, just briefly on the 51/49: Because of the pluralism that I talked about — although, again, it’s meta pluralism — of putting weight on many different moral views, I would at least need the probabilities to be quite a bit more favorable in order to take the gamble because, again, the kind of best problem was —

COWEN: I can give you 90/10.

MACASKILL: You can give me 90/10.

COWEN: I’ll give you 90/10, but we play it 200 times, right?

MACASKILL: Yes, exactly.

COWEN: We’re still in a lot of trouble.

MACASKILL: Yes, I wanted to clarify that for the listeners. I didn’t say it earlier because I was quite aware that you could pull me back with just, “Okay, give me some probability or something.” Then I think it starts to get more defensible.

Anyway, so there are these paradoxes, so take the paradoxes of population ethics. Again, for the listeners, any view that you have in population ethics has extremely unintuitive implications. That’s actually been formally proven.

The repugnant conclusion is the idea that a world consisting of a very, very, very, very large number of beings, all with lives just barely above zero, just barely worth living — because it has more aggregate well-being, is better than 10 trillion lives of wonderful bliss. It turns out, actually, that’s like, in my view, the least bad of the bullets that you have to bite within population ethics. Sometimes this is taken as a problem for consequentialism. That’s what you’re suggesting.

But every moral view has to have a view on population ethics. Every moral view has to decide under what conditions should we think it’s a good thing to bring a new flourishing life into existence, not just consequentialist moral views. The view you’d have to be promoting is something much more thoroughgoing, which is just that there are limits to moral reasoning, perhaps that we should be okay with just inconsistent moral views.

I’m not quite sure exactly what your view would be there, but it’s not just that we have to throw consequentialism out of the way. It’s like we actually have to throw moral consistency out the window, or something.
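[Editor’s note: the “90/10, but played 200 times” exchange above is also easy to make concrete. The sketch below is illustrative only; the 0.9 survival probability and the 200 repetitions are the figures from the conversation.]

```python
# Illustrative sketch (editorial, not from the conversation): what happens when a
# favorable-looking double-or-nothing gamble is repeated many times.

p_survive_once = 0.90        # each round: 90% the world doubles, 10% it is lost
rounds = 200

p_survive_all = p_survive_once ** rounds
print(p_survive_all)         # ~7e-10: near-certain ruin, despite each round looking favorable
```

Each individual round has positive expected value, but the chance of surviving all 200 rounds collapses to roughly seven in ten billion, which is the point behind “we’re still in a lot of trouble.”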

COWEN: Should the EA [effective altruism] movement be anti-abortion?

MACASKILL: I don’t think so.

COWEN: Why not? If you look at hedonistic utility, if you have more people who are not at repugnant conclusion margins, you’d have somewhat more people, not that many more, right?

MACASKILL: I think a few things. The first is, if you think that it’s good to have more happy, flourishing people — and I think that if people have sufficiently good lives, then that’s true; I argue for that in What We Owe the Future — then, by far, by overwhelming amounts, the focus should be on how many people might exist in the future rather than now. Perhaps you have a really good fertility program, and you can increase the world population by 10 percent. That’s like an extra billion people or so.

But the loss of future life and future very good life if we go extinct — that’s being measured in the trillions upon trillions of lives. The question of just how many people should be alive today is really driven by, how would that impact the long-term flourishing of humanity?

That being said, all things considered, I think there’s this norm, this idea at the moment that it’s bad to have kids because of the carbon footprint. I think that only looks at one side of the ledger. Yes, people emit carbon dioxide, and that has negative effects, but they also do a lot of good things. They innovate, and there’s an intrinsic benefit too: they have happy lives. If you can bring up people to live good lives, then they will flourish, and that’s making the world better. They also might be moral changemakers, and so on.

But then, even supposing you think, “Okay, yes, larger family sizes are good,” it would seem very unlikely to me that banning abortion, or more broadly very heavily restricting women’s reproductive rights, is the best way of going about that.

COWEN: It doesn’t have to be the best way of achieving that. It might be this 37th-best way, but if it were still positive expected utility value — at least in your framework, you’re fine with subsidizing births, right?

MACASKILL: That seems right.

COWEN: So, taxing nonbirth just seems to be the opposite of that. It’s like the dual.

MACASKILL: Again, you’ve got to fully take into account different moral perspectives, where, in the same way, I think it’s good for people to donate to charity. I think that makes the world a better place. Having that view is a far cry from saying, therefore, we should go and lock people up who don’t donate to charity. That could easily be very bad, counterproductive. I think that, probably, very similar could be said about early-stage abortion, for example.

COWEN: If there are smart sentient space aliens out there, say, in pretty large numbers, should we then worry much less about existential risk on earth? Someone will continue the tradition. Maybe they don’t love Beethoven, but eh, 400 years from now maybe people won’t anyway.

MACASKILL: It’s a great question. Among people I know, views are divided on, “Should you think that a human-originating future is going to be better than an alien-originating civilization?” My honest view is more like it’s a toss-up. I don’t see a particular reason for thinking that a civilization that comes from human beings is going to be much greater in value than one that comes from aliens.

Whether this undermines concern about existential risk, though, depends very crucially on whether we actually expect aliens to come and build a flourishing civilization. I think the best guess from the Fermi paradox — that is the paradox that we don’t, in fact, see advanced intelligent life — is that, probably, we’re just alone, at least in a very, very large section of space. So, as an empirical fact, I think it’s really quite likely that if humanity dies off, no one else will take our place to build some flourishing civilization.

COWEN: Just contingent on the space aliens being out there, if someone asked me to bet, “Well, which side in the war would Will MacAskill fight on?” I would bet 1,000 to 1 you’d fight for the humans. But in your moral theory, the humans being better than the aliens — it’s a toss-up.

This notion that you can’t actually escape some preexisting degree of partiality in the normative framework seems to resurface. I think you want to have it both ways, unless you feel my bet on you to fight for the humans is wrong. Is there really a 50 percent chance you’ll fight for the aliens?

MACASKILL: Here’s the argument. I have two moral perspectives that I’m putting some weight on. One says aliens have as good a chance of producing a great civilization as humans do. The second is the more partial view, which would weight humans above aliens. If I’m putting weight on both of them, which again I think we should — I don’t think you should be super confident in any one moral worldview — then that will favor, to some extent, the humans.

I think it would be a mistake to favor the humans by 10,000 to 1, supposing you could do some very risky thing that could wipe out both. It’s a 50/50 chance of wiping out both aliens and humans, but 50 percent chance of saving the humans, and that increases your odds of humanity surviving. Then I’m like, “No, don’t do that thing.” But would I give some extra weight to human-originating civilization, all things considered? Then, yes.

COWEN: Now, you’re super influential. I’d say you’re one of the five most influential philosophers in the world, which is great. Does that mean you should personally give up having children?

MACASKILL: Wow. What a great question.

COWEN: I want you to, to be clear, but I’m asking what you think.

MACASKILL: It’s obviously something I’ve thought deeply about, and I do want to say that people, in general, should make their own reproductive choices. In my own case, it is pretty striking that I am now engaged in all of these projects that bring me very large amounts of meaning. Then, when I think about whether I’d have the reason that draws many people, I think, to having kids — to have additional meaning in their lives — it’s not something that appeals to me or really motivates me.

I think I do have this extra responsibility when thinking about major decisions in my life, like, if I have kids, what is the impact of that on the world? On the one hand, it would take time away from other things I could be doing. On the other hand, perhaps it’s good. I do think it’s good to have a family. Perhaps that’s a good signaling thing. I do think those are relevant considerations.

In my own case, at least, having a family is never something I’ve been particularly drawn to or excited about, so it’s currently not my plan. I think the fact that that will help me do more good in the world is a benefit, too.

COWEN: Here’s a very simple, practical question. Let’s say I’m a skilled lawyer, and I’m more or less a generalist. I could do a lot of different things, and I want to do some pro bono work for effective altruism. What should I actually do?

MACASKILL: If you’re a skilled lawyer?

COWEN: Skilled lawyer in the United States.

MACASKILL: Then I think there are two obvious options. There are volunteering opportunities at high-impact nonprofits, both within effective altruism organizations and at organizations we recommend, like Malaria Consortium or Deworm the World. The alternative is to work overtime and donate the earnings as well, and that can be —

COWEN: I could sue someone. I don’t have to do malaria work, or GiveWell, or bed nets. I’m a lawyer. I could try to change laws by suing people. I have this special leverage.

MACASKILL: Oh, yes.

COWEN: What should I target?

MACASKILL: I think people potentially doing dangerous biotechnology research, things that could have large negative externalities. I don’t know about the law there. Patent trolls seem particularly harmful to me, too — slowing down innovation. Perhaps legal work there could be very helpful as well.

I’m curious — this is a little bit more theoretical and depends on the nature of the lawyer. It’s plausible to me that at some point in our lifetimes, there will be a world government set up. That world government will have a constitution. The forming of the Constitution of the United States was enormously impactful from a very long-term perspective, and yet it was done over the course of about four months.

We can think in terms of these plastic moments that have a real impact over the future. I think that whoever’s writing the constitution of the world government — that is going to be a very influential moment. You could be one of the weird lawyers working on something that no one is currently working on, but that will turn out to be very impactful if it does occur over the next century.

COWEN: You have a PhD from Oxford, right?

MACASKILL: That’s right.

COWEN: Given how much innovation comes out of top schools, why is it crazy to make big donations to them? I see EA people criticize this fairly often, like, “Oh, don’t give your money to Harvard. Give it to bed nets.” But given the power of innovation, including your own — Peter Singer has been connected to all these schools. Why not make big donations to top universities?

MACASKILL: I think two things. One is on the standard line of criticism of donations to big universities: among charitable gifts, I don’t really think those are the ones we should be criticizing for being enormously ineffective, compared to ScotsCare, or things that are promoting the opera, or the US Golf Society.

On the other hand, if I’m going to promote research, I think a generic gift to Harvard is going to look pretty unlevered. I don’t think universities are, in general, in a great state in terms of how they can promote research compared to, say, independent research-focused organizations. In particular, when you’re donating to these universities with enormous existing endowments, it’s unclear what, in practice, your donation changes on the margin — even if you’re trying to target the donation to some focused thing. Now, sometimes that can work, and then it’s good.

We have funded a bunch of things — research institutes at major universities, including Oxford. But if you’re just giving a generic gift, then probably you’re just giving to Harvard as a whole. That’s fine. I do think universities produce enormous amounts of value, but probably you’re missing out on an opportunity to do something more focused that pays off sooner, as well.

COWEN: Well, take gifts to the opera, which you mentioned. Why should we not build monuments to what have been our greatest and most profound creations, just to show people, “We did this. This is really important. We still think it’s important”? It’s a kind of elitism, but nonetheless, isn’t it important to keep those traditions alive and highly visible?

MACASKILL: Yes, it could be important. Is it going to pass the benefit-cost test? I’m open to anything. You’ve got to just show me the numbers, ultimately. I would guess —

COWEN: But there are not going to be numbers, right? We’re just guessing. If you hear Beethoven at the symphony, do you do something great 30 years later? We’re not going to have an RCT [randomized controlled trial] on that, right?

MACASKILL: We’re not going to have an RCT, but you can still at least say, “At best, this message will reach this many people. At best, this message reaching people will, let’s say, increase the impact of their lives by a certain percentage.” Then you could at least get an upper bound, where you think, “With the most optimistic assumptions, how much benefit would be created by this extra run of the opera?” My strong guess would be that, even with those optimistic assumptions, it would not look comparable to other good things that one could be doing.

COWEN: But it’s like [Derek] Parfit’s paradoxes in moral arithmetic. The single action doesn’t seem that important. If you’re a single marksman in a firing squad, well, you didn’t kill the person, but in a way you still did. No single performance of a great opera is really going to matter much in my view. But the fact that we have a network of operas performing The Magic Flute, Fidelio, keeping alive these 18th- and 19th-century ideals of liberty, freedom, the Masonic temple, glorious music, the importance of the exalted and the divine — that seems to me intuitively a super-high return, though I don’t ever think I’ll be able to measure it.

MACASKILL: I actually think that if you think that even in expectation, your additional project — let’s say one more run of the opera — is not making a difference, then that actually suggests that this class of projects is being overfunded. You should just take that at face value. The value doesn’t get inherited from the fact that it has already done a lot of good.

Take another example of voting, let’s say. There’s an evil candidate and a good candidate, we’ll suppose. Should I vote in the election? If I think, “Oh, maybe, actually, it could go either way,” then I think often the answer is “yes” because there’s some chance that your vote will be decisive. That’s worth enormous amounts of value.

If, however, it’s already 95 percent towards the good candidate in terms of votes, and you’re just absolutely sure that voting for the good candidate will not make a difference, then I think the main argument, and by far the main argument, is undermined because it’s already over-determined that this good thing is going to happen, and so you adding your extra weight is not making the world any better.

COWEN: How should it matter for our moral calculations if we think we might be living in a simulation?

MACASKILL: I think it potentially matters in a lot of ways. It gets into what seem like esoteric topics in decision theory. There are two different views of decision theory: causal decision theory and non-causal decision theory.

Causal decision theory says, “I should care about what I cause.” If so, then, if I’m living in a simulation, the argument for taking the very long-term future seriously gets a massive penalty, at least, because those people in the future who are simulating us, who are interested in how did things go down at this crucial moment in history when human-level artificial intelligence gets built, and so on — once they’ve got that information, it’s much less likely that they’re going to keep simulating things.

Things would get a lot more boring, and computation is expensive. So, if we’re living in a simulation, the future is probably going to be a lot shorter, and therefore, the causal impact of my actions is much lower.

If, however, you’ve got non-causal decision theory, where I take into account not just the causal effects of my actions but also what evidence I get about how other people will behave, then I should think, “Well, even if I’m in a simulation, if I do such-and-such a thing, that also gives me evidence that the Will who is in the real world, the non-simulated Will, with all of these hugely important consequences in front of him, will do such-and-such an action too.” So, for non-causal decision theory, it makes much less of a difference.

Now, I’m someone who tends to prefer causal decision theory, so I guess I think two things. One, if we’re in a simulation, all bets are off because who knows now what implications you’re having. Secondly, maybe you also are much more likely to favor near-term actions rather than long-term actions because helping the simulated suffering person now, well, that’s a good thing that you’re doing. Trying to positively impact the long-term future is not something that will actually occur because the simulation is likely to get shut off.

COWEN: Couldn’t there be convex returns to time — the simulation might be likely to run for much longer, at least in terms of subjective time? Then, if all we have is the so-called real physical universe, you should care about the long run much more. But there’s this insuperable epistemic problem — you don’t know what the simulators want, or even the people in other simulations. There are quite possibly lots and lots and lots of them, so you’re paralyzed for this other reason: you just don’t know anything. What you want seems now to be smaller than if it’s just us, Mars, and Venus.

MACASKILL: Yes. I think that’s pretty plausible. If you’re in a simulation, then, like I said, all bets are off and we don’t really know. Maybe that means that no matter how confident you are that you’re in a simulation, you should act as if you’re not, because the 99 percent where you’re in a simulation — it’s like nihilism. It’s like, “Well, who knows what the impact of any of our actions is?” There’s 1 percent that you’re not in a simulation, and then you just do the kinds of things that seem best.

On this issue of, “Oh, maybe it’s convex,” maybe the simulation goes even longer — that’s in this category of things that, again, feel to me like low but also extremely speculative probabilities, the kind that feel like crazy town. It’s not just the simulation ones. There are other thoughts you might have. We’ve mentioned infinite ethics as well — other thoughts you might have that would lead to even more value in the future but seem extremely implausible. Here’s another one: you are in favor of speeding up economic growth because that has many benefits not just for now, but for many centuries to come.

My response to that would be, well, at some point, economic growth would plateau, maybe not in a few centuries, but certainly by 10,000 years’ time. We can’t just keep growing. The more important thing is to either change the values that guide the future or ensure that we have a future at all because that’s a difference that really persists for all time.

Here’s a response you could make, Tyler, which is, we shouldn’t be confident, we shouldn’t be certain, not 100 percent certain that economic growth will plateau. Maybe it just keeps going forever and ever and ever until 100 trillion years, when the last stars burn out. What’s more, that’s where all the value is, because if economic growth can keep going for so long, then that’s huge amounts of value, way more than if we merely get a few thousand years of growth.

My response to that is, man, this just seems super brittle, low probability, because it seems so implausible to me that we could get hundreds of trillions of years of technological progress and improving well-being. So, I have to admit —

COWEN: What about the simple response that higher economic growth today gives you better institutions, and that also serves to minimize existential risk? Look at the countries with poor growth records. None of them seem to have the institutions to fight off a real threat to humanity, right?

MACASKILL: Yes.

COWEN: Isn’t that just a simple argument for growth being a priority?

MACASKILL: Yes. That’s a different argument. Then I would focus less on growth per se. But there is something that I do buy, and at some margin, I think it is what we should be doing, and I’ve done a bit of it so far, which is just, like, okay, it’s hard to predict the future. We’re going to get lots of unexpected events. There are some things where we know we’re onto a good thing, like technological progress delivering growth, good institutions, democracy, liberalism, more cooperation, higher trust in societies, and innovation — these are just generically good.

From the sheer track record of how helpful these have been over the last 200 years, let’s just keep pushing on that. That’s the kind of view that I think I’m most sympathetic to in terms of the progress studies worldview, because I do think that’s good for the long-term future. It’s like, can you beat the market? I think probably we can, actually, but at some margin, that’s what I think long-termism turns into. It looks more common sense-y, like building a flourishing society.

COWEN: Now, I’m going to use the word hinge-y to describe the quality of living in a time that is highly influential, where that influence may very well persist for a long period of time. Do people in their own eras know when they are living in especially hinge-y eras? Or are they clueless?

MACASKILL: I think we know a lot more now than we did in the past. I think people in the past would’ve been pretty clueless. They didn’t have a good sense of how long . . . The fact that the universe is so truly enormous, so big, and yet uninhabited, is actually a very recent idea; it’s only a little over 100 years that we’ve really appreciated that. So, people in previous times may have thought that they were living in extraordinarily hinge-y times. The early Christians — extraordinarily hinge-y time. The kingdom was going to come within one or two generations. [laughs]

COWEN: It was, I would say.

MACASKILL: Oh, you think they were right and we’ve just been lucky?

COWEN: No, but Christianity has proven extremely important, and it’s still with us, right?

MACASKILL: Yes.

COWEN: It’s the foundation for Western prosperity.

MACASKILL: Yes, actually, I do agree they were a very hinge-y time, just not for the reasons they thought.

COWEN: That gets to the epistemic problem. Do people ever know? Like people in 1720 — how many of them were sitting around saying, “We’re on the cusp of an Industrial Revolution”? That was a hinge-y event. I’m not saying no one knew, but —

MACASKILL: The Founding Fathers were aware. John Adams has this great quote about the lasting importance of building the institutions of America correctly, because they may well not wear out for thousands of years, and if they’re built incorrectly, they will not return — except by accident — to the right path. So, I think they actually did seem pretty aware of the importance of what they were doing.

I think two things. One, I think this should give us a lot of humbleness or humility in terms of taking actions today.

It’s perfectly plausible to me, maybe even more likely than not, that in a hundred years’ time, people will look back and say, “Oh, wow, these are the people who cared about AI and worried about bioweapons,” in the same way as I look back at John Stuart Mill in the 19th century, who was fighting for future generations by trying to keep coal in the ground because he thought that we were going to run out of coal very quickly and that would impoverish future generations.

I think that’s actually quite likely. I think that gives a good argument for trying to do much more robustly good actions or trying to build up the resources that will be very useful in a hundred years’ time. Increasing the number of impartially concerned and altruistically motivated and carefully reasoning thinkers, for example.

But I also think we have much better evidence than those people in the past. We have a much better understanding of physics, a better understanding of social science, of probability, even of ethics, and I think that gives us —

COWEN: But say, foreign policy. We can’t predict anything. I don’t know anyone good at predicting foreign policy outcomes. Those are maybe the most important issues in the world. If we can’t predict foreign policy two years out, how well can we understand our own hinge-yness?

MACASKILL: Here’s a general argument for thinking that we’re at least plausibly at a very influential time, and I think this argument works. I don’t think it means we’re at the most influential time. That’s a substantially harder argument to make. It’s just that the rate of technological progress is very high compared to history, and also very high compared to what must happen in the future.

The argument is just that if we had economic growth of 2 percent per year for 10,000 years, then we would be producing 10 to the 87 times the current world’s worth of economic output. That doesn’t seem possible. There are 10 to the 67 atoms within 10,000 light-years. So, we would be producing about a trillion trillion current civilizations’ worth of economic output for every atom within 10,000 light-years. That just seems like, okay, that can’t happen. Technologically delivered economic growth is going to have to decrease when we look to the future.

That actually suggests we’re living at a time of unusually high technological change. This is actually a very tiny window. It’s only been like 200 years that we’ve gotten anything close to this level of tech progress. Ten thousand years is also a very tiny window compared to hundreds of thousands of years that we’ve been around so far, and the millions, billions, or trillions of years we could be around in the future.

That just seems like a really pretty good reason for thinking, okay, there’s at least a decent probability for thinking we’re at an unusually hinge-y time. Again, maybe not the most influential time, but something that’s pretty distinctive if you tell the story of the whole of civilization, not just the past, also the future.
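[Editor’s note: MacAskill’s growth arithmetic can be checked directly. The sketch below is illustrative only; the 2 percent rate, the 10,000-year horizon, and the rough 10^67 atom count are the figures he uses above.]

```python
# Illustrative check (editorial, not from the conversation) of the growth argument above.
growth_rate = 0.02
years = 10_000

# Total output relative to today's world economy after sustained 2% annual growth.
multiple_of_today = (1 + growth_rate) ** years
print(f"{multiple_of_today:.2e}")          # ~1e86 times today's output

atoms_within_10k_light_years = 1e67        # rough estimate quoted above
output_per_atom = multiple_of_today / atoms_within_10k_light_years
print(f"{output_per_atom:.2e}")            # ~1e19 of today's world economies per atom
```

On these inputs the multiple comes out around 10^86 and the per-atom figure nearer 10^19 than the off-the-cuff “trillion trillion,” but the qualitative conclusion is unchanged: growth at today’s frontier rates cannot continue for anything like 10,000 years.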

COWEN: Now, on this question, I’m looking for a sociological answer. It’s really striking to me how many very smart young people right now are attracted to the effective altruism movement, I think way more than a lot of outsiders realize. I’m sure you’ve seen this. Why is that the case? The mere fact that you all might be correct, I do not consider a satisfactory answer, to be clear, because in general, it’s not the case that the correct movements are always attracting the smartest people. Why is this happening now?

MACASKILL: I was absolutely going to say, well, maybe we’ve just got the best arguments. Okay, fine, I’ve got to give an entirely sociological explanation.

One is, I think there just was an untapped market of altruistically minded people. Effective altruism is much broader than consequentialism, but consequentialism-flavored ethical views correlate with very high educational performance.

I think it’s also the case that something that correlates with high educational performance is just being secure about your material needs, whether that’s because you come from a better-off family or because your career prospects are looking pretty good. I think of altruism like a luxury good. The more secure you are, the more you can focus on that.

One thing is, we’ve tapped into this market that I think wasn’t otherwise being tapped into. Then a second thing I could say is that the topics are just very intellectually interesting and a very unusual intersection of intellectually interesting and extremely impactful and important for one’s own life, and in fact, how the world should be. You’re making arguments about paradoxes in population ethics and moral philosophy, and the resolution there is really going to make a difference to what you should do. Perhaps that’s more attractive to the nerds of the world, too.

COWEN: Let me make a sociological observation of my own. If I think about making the world a better place, I think so much about so many things being downstream from culture, that we need to think about culture. This is quite a messy topic. It’s not easily amenable to what you might call optimization kinds of reasoning. Then, when I hear EA discussions, they seem very often to be about optimization — so many chats online or in person, like how many chickens are worth a cow, the bed net versus the anti-malaria program.

I often think that this is maybe my biggest difference with EA — that EA has the wrong emphasis, pushing people into the optimization discussions when it should be more about improving the quality of institutions and management everywhere in a way that depends on culture, which is this harder thing to manage. This may even get back to subsidizing Mozart’s Magic Flute. There’s something about the sociology of EA that strongly encourages, especially online, what I would call the optimization mindset. What’s your response to that?

MACASKILL: I think I’m going to surprise you and agree with you, Tyler. I’m not sure it’s about optimization, but here’s a certain critique that one could make of EA, in general or traditionally. It’s like, hey, you have a bunch of nerds. You have a bunch of STEM people. The way your brains work will incline you to focus on technology or technological fixes and not on mushy things, like institutions and culture, but those are super important. I, at least, think that that criticism has a lot going for it.

I don’t want to wholesale endorse it because often you can have technological fixes to what are even sociological problems. Take the risk of an engineered pandemic killing hundreds of millions of people. That is, in part, a sociological or political problem because it’s going to be an individual that builds it and does it. We could just solve it with technology, though — early warning detection systems, far-UVC lighting that sterilizes rooms. There doesn’t need to be a match between political or sociological problems and political or cultural responses.

But I do think that culture is just enormously important. That’s something I’ve changed my view on and appreciated a lot over the last few years, just as I started to learn more about history, about the cultural evolution literature, about Joseph Henrich’s work and our understanding of humanity as a species. Actually, one of my favorite and most underrated articles is by Nathan Nunn. It’s called “History as Evolution,” which I think is extremely good.

COWEN: Yes.

MACASKILL: Actually, my understanding of human beings — rather than homo economicus, mainly motivated by self-interest, understood in terms of income — at least when you’re looking at a much broader scale, I think we’re much more like homo culturalis, where people have a view of how the world should be, and they go out and try to make that vision happen.

I think that can have hard-to-measure and very long-run but important effects. I actually see effective altruism, as a whole, as cultural innovation. It’s creating this new subculture, a culture of people who are impartial and altruistically motivated, extremely concerned about the truth and having accurate beliefs.

That is a way in which I think effective altruism could have a big impact, in the same way as the scientific revolution was primarily a cultural revolution — I shouldn’t use that term — primarily a revolution in culture, where people suddenly started innovating, and they started to think in a certain way. It was like, “Oh, we can do experiments, and we can test things, and we can tinker.” I actually see effective altruism as a cultural innovation that could drive great moral progress in the future.

Then, should we be doing more in terms of cultural change? One thing I’ll say is, people are doing quite a lot of it — my promoting concern for future generations in this book, What We Owe the Future, is doing that. An awful lot of people are working to promote cultural change around attitudes to non-human animals.

It is hard to measure, but I think there’s a very big difference between having an optimization mindset — do the best — and having a mindset that’s like, “Therefore, we always need to be able to measure what we’re doing and have some metric that we’re optimizing towards.” That latter thing, I think, is a bit of a straw man against EA.

COWEN: Will the EA movement avoid Conquest’s Second Law, namely that institutions not explicitly designed to be right-wing end up becoming left-wing? It’s happened to all these major foundations: Rockefeller, Ford, Pew. You can go all the way down the list. Whether you like that or not, it seems to be an empirical regularity. Will it happen to EA?

MACASKILL: Yes, I’d be curious about what the underlying mechanism is for those other foundations. It’s not something I know about. It’s interesting that if you look at the demographics and political views of people in effective altruism, even though we’ve really not been selecting for that at all, we’ve been selecting for people who care about things like, does it make sense to spend your money to pay for bed nets to save lives in poor countries? That’s certainly not a politically hot-button issue.

There does tend to be a pretty systematic tendency towards being very socially liberal and being economically moderate or something. There’s still obviously a range on both of those cases, but there certainly is a particular tendency. My guess is that that’s the bigger factor. Like inertia would keep effective altruism broadly in that category. But perhaps you could convince me otherwise if I understood what’s the mechanism by which these other foundations are shifting left-wing.

COWEN: For our final segment, do you have time for a quick round of underrated versus overrated?

MACASKILL: Of course.

COWEN: Okay. Bishop Berkeley, the philosopher — overrated or underrated?

MACASKILL: Underrated because idealism, in general, I think, is underrated as a metaphysical view.

COWEN: And that’s related to thinking we might be living in a simulation or not.

MACASKILL: Yes. In general, the fact that it’s our experiences that we have direct awareness of, and the idea that maybe there’s no external world — I think there’s more on the table there than philosophers give it credit for.

COWEN: You’re from Scotland. Adam Smith’s Theory of Moral Sentiments as a book — over- or underrated?

MACASKILL: I’ll have to confess, I haven’t read it. My guess is that it’s underrated because, from people I know and respect — they think of it very highly.

COWEN: Quine, the philosopher — over- or underrated?

MACASKILL: Overrated, I’m afraid.

COWEN: Why?

MACASKILL: “Two Dogmas of Empiricism,” for example, is his most famous article. There’s the analytic–synthetic distinction: statements that are true in virtue of meaning versus statements that are true empirically. He’s like, “I don’t believe in this distinction.” His argument is just, “Well, can you define what it means for something to be true in virtue of meaning?”

He’s like, “This definition is circular. This definition is circular.” I don’t think it’s a very good argument. I think you can clearly have positions that involve primitive concepts without being able to define them in non-circular terms. That’s regarded as one of the great papers of analytic philosophy over the last century. I think the arguments are pretty weak.

Then, more generally, he has this tendency of writing these articles, where the arguments aren’t very good, but he ends with some vivid picture or metaphor, and people don’t really understand the arguments because they’re often quite technical, but then really like the metaphor and people think, “Oh, he’s great.”

COWEN: Buildering — overrated or underrated? And you need to tell us what it is.

MACASKILL: Buildering is also known as urban climbing. It’s where you just basically climb buildings in urban environments. It’s something I used to do as a younger man. It is very dangerous, so I’m going to say it’s overrated. In the book, I talk about how I nearly killed myself doing that. There’s a lesson from that, too, for the long-term future of humanity.

COWEN: Thus, we need to worry about existential risk.

MACASKILL: Exactly.

COWEN: Last question to close this out. What is it you will do next? Just to remind our readers, What We Owe the Future, Will’s new book — excellent. One of the most important books of the year. Will is one of the most influential and important philosophers in the world. Please, do buy it and read it.

But tell us also, what will you do next?

MACASKILL: Thanks so much, Tyler. I have a few options on the table. I’ve been helping Sam Bankman-Fried launch his foundation, the Future Fund, which has been going well. We’ve been able to move a lot of money, about $140 million this year. Possibly, I will keep working more on that. That’s one option. Second is just doubling down on books and promotion of ideas. That’s what I truly love. I enjoy having back and forth with people like yourself.

There’s plenty more. I’d be interested in writing another book that’s kind of a follow-up to Doing Good Better, one that really explains what the effective altruism community is, and actually takes an introduction to it that’s less from abstract principles and arguments, and more just via what the people in that community are actually doing.

Then a final option that I’m considering is some new college or university — really trying to take some of the brightest people from all around the world, especially in countries where very bright, promising, morally motivated people are being missed. You could be extremely intellectually talented in rural India, and maybe you can make your way out, but it’s a challenge, at least.

Then, just trying to give the very best all-round education possible, hiring people who are dedicated as teachers rather than having their attention split between research and teaching, which is the standard university model. Also, using certain techniques that we have discovered that aren’t very widely used within education, to try and accelerate people’s learning as fast as possible. If that worked well in this one instance, then perhaps it could become a much wider idea.

COWEN: Will MacAskill, congratulations again on the book, and thank you very much.

MACASKILL: Thank you so much, Tyler.