Sam Harris and Will MacAskill: Podcast transcript (2020)

By Aaron Gertler 🔸 @ 2021-07-03T21:48 (+30)

This is the transcript of probably the most impactful EA podcast episode ever. I highly recommend reading or listening to it to absorb what makes the content so good.

I've added my own subheadings.

Transcript

Introduction + GWWC "pitch"

Sam Harris: Welcome to the Making Sense podcast. This is Sam Harris. Today I'm bringing you a conversation that I originally recorded for the Waking Up app. We released it there as a series of separate lessons a few weeks ago. The response has been such that I wanted to share it here on the podcast, and put it outside the paywall. It seems like a better holiday message than most. 

As I think many of you know, Waking Up isn't just a meditation app. At this point, it's really the place where I do most of my thinking about what it means to live a good life. 

This conversation is about generosity and how we should think about doing good in the world. Increasingly, I'm looking to use this podcast and the Waking Up app to do more than merely spread what I consider to be good ideas. That's their primary purpose, obviously, but I want to help solve some of the worst problems we face more directly than by just talking about them. I want to do this systematically, really thinking through what it takes to save the most lives, reduce the worst suffering, or mitigate the most catastrophic risks. 

To this end, I've taken the pledge over at Giving What We Can, which is the foundation upon which effective altruism is based. (Effective altruism is a movement that was started by the philosophers Will MacAskill and Toby Ord, both of whom have been on the podcast.) For the pledge, one agrees to give a minimum of 10% of one's pre-tax income to the most effective charities. I've also taken the Founders Pledge, which amounts to the same thing, and Waking Up has become one of the first corporations to pledge a minimum of 10% of its profits to charity.

The thinking behind all of this is the subject of today's podcast. Of course, there is a bias against speaking about this sort of thing in public, or even in private. It's often believed that it's better to practice one's generosity anonymously, because then you can be sure you're doing it for the right reasons. You're not trying to just burnish your reputation. 

As you'll hear in today's conversation, there are very good reasons to believe that this is just not true, and that the imagined moral virtue of anonymity is something that we really need to rethink. In fact, I've just learned of the knock-on effects of the few times I have discussed my giving to charity on this podcast, and they're surprisingly substantial. To give you a sense of it, last year I released an episode titled “Knowledge and Redemption,” in which we discussed the Bard Prison Initiative, based on the PBS documentary that Lynn Novick and Ken Burns did. Lynn was on that podcast.

At the end, I think I asked [listeners] to consider supporting that work, too. Together we donated $150,000 based on that one episode alone. I've also occasionally mentioned on the podcast that I donate each month to the Against Malaria Foundation, and it was actually my first podcast conversation with Will MacAskill that convinced me to do that. I do it through the charity evaluator GiveWell.org. And the good people at GiveWell just told me that they've received over $500,000 in donations from [my listeners]. And they expect another $500,000 over the next year from podcast listeners who have set up their donations on a recurring basis. So, that's $1 million, and many lives saved, just as a result of some passing comments that I've made on the podcast. 

I've also heard from Will MacAskill's people over at Giving What We Can, where I took their 10% pledge — and which I haven't even spoken about much — that hundreds of you have taken that pledge, unsolicited by me, but specifically attributing this podcast and the Waking Up app as the reason. That's hundreds of people, some of whom may be quite wealthy, or will become wealthy, who have now publicly pledged to give a minimum of 10% of their pre-tax income to the most effective charities every year for the rest of their lives. That is awesome. 

All of this inspired me to share this conversation from the Waking Up app. Again, this is a fairly structured conversation with the philosopher Will MacAskill. Some of you may remember the conversation I had with Will four years ago on the podcast. That was Episode 44. And that's a great companion piece to today's episode, because it gets into some of the fundamental issues of ethics. 

Today's conversation is much more focused on the actions we can all take to make the world better, and how we should think about doing that. Will and I challenge some old ideas around giving, and we discuss why they're really not very good ideas in the end. You'll also hear that there's still a lot of moral philosophy to be done in this area. I don't think these issues are fully worked out at all, and that's really exciting. There's a lot to talk about here. There's [plenty] for moral philosophers to actually do that might really matter to the future of our species. 

In particular, I think there's a lot of work to be done on the ethics of wealth inequality, both globally and within the wealthiest societies themselves, and I'm sure I will do many more podcasts on this topic. I suspect that wealth inequality is producing much, if not most, of our political conflict at this point, and it certainly determines what we do with our resources. I think it's one of the most important topics of our time.

Anyway, Will and I cover a lot here, including how to choose causes to support, and how best to think about choosing a career so as to do the most good over the course of one's life. The question that underlies all of this, really, is: How can we live a morally beautiful life? That is more and more what I care about, and what the young Will MacAskill is certainly doing, as you will hear. 

Finally, I want to recognize all of you who have made these donations and pledges, as well as the many of you who have been supporting my work these many years, and the many of you who have become subscribers to the podcast in the last year. I couldn't be doing any of these things without you, and I certainly look forward to what we're going to do next. 2021 should be an interesting year. My deep thanks to all of you. 

And now I bring you Will MacAskill. Will, thanks for joining me again.

Will: Thanks so much for having me on.

Sam: I just posted a conversation that you and I had four years ago on my podcast onto Waking Up as well, because I thought it was such a useful introduction to many of the issues we're going to talk about. It was a different conversation because we got into very interesting questions of moral philosophy that I think we probably won't focus on here, so it seems like a great background for the series of lessons we're now going to sketch out in our conversation. 

For those who have not taken the time to listen to that just yet, maybe we should summarize your background here. Who are you, Will? And how have you come to have an opinion about altruism, generosity, and what it means to live a good life? Give us your potted bio.

Will's life story + the origins of EA

Will: I grew up in Glasgow and I was always interested in two things. One was ideas — and then, in particular, philosophy, when I discovered that. The second was helping people. As a teenager, I volunteered running summer camps for children who were impoverished and had disabilities, and I worked at an old folks' home. Then I came across the arguments of Peter Singer — in particular his argument that we have a moral obligation to give away most of our income to help people in very poor countries, simply because such a move would not be a great burden on us. It would be a financial sacrifice, but not an enormous sacrifice in terms of our quality of life, and it could make an enormous difference for hundreds of people around the world. 

That moved me very much, but being human I didn't really do very much on the basis of those arguments for many years, until I came to Oxford, where I met another philosopher named Toby Ord. He had very similar ideas and was planning to give away most of his income over the course of his life. Together, we set up an organization called Giving What We Can, which encouraged people to give at least 10% of their income to those organizations that we think can do the most good. Sam, I know that you have now taken that 10% pledge, and I'm delighted that's the case. 

[I certainly didn’t expect the Giving What We Can pledge] to be that big of a deal. I was just doing it because I thought it was morally very important. But it turns out that a lot of people had similar ideas, and Giving What We Can acted a bit like a lightning rod for people around the world who were motivated to try to do good, but also to do it as effectively as possible. 

At the time, we had a set of recommended charities. There was also the organization GiveWell, whose work we leaned extremely heavily on when making recommendations about which charities we thought would do the most good. Effective altruism at the time focused on doing good for people in extreme poverty. Since then, [the movement has broadened significantly]. Now most people in the effective altruism community who are trying to do good are doing so via their career, and there's a much broader range of cause areas. Animal welfare is a big focus, as, increasingly, are issues that might affect future generations in a really big way — the risks to the future of civilization that Toby Ord talked about when he was on your podcast.

Sam: I have a factoid in my memory, which I think is from your original interview with Tim Ferriss on his podcast. Am I correct in thinking that you were the youngest philosophy professor at Oxford?

Will: Yes, the precise fact is when I joined the faculty at Oxford, which was at age 28, I'm pretty confident that I was the youngest associate professor of philosophy in the world at the time.

Sam: Oh, nice. Well, no doubt you're quickly aging out of that distinction. Have you lost your record yet?

Will: Yeah. I'm an old man at 33 now, and I definitely lost that distinction a few years ago.

Sam: It's great to talk to you about these things, because as you know, you've greatly influenced my thinking. You directly inspired me to start giving a minimum of 10% of my income to charity and also to commit Waking Up, as a company, to give a minimum of 10% of its profits to charity. 

I'm very eager to have this conversation because it still seems to me that there's a lot of thinking yet to do about how to approach doing good in the world. There may be some principles that you and I either disagree about, or maybe we'll agree that we just don't have good enough intuitions to have a strong opinion one way or another. But it seems to me to be territory that can benefit from new ideas and new intuition pumps. There's just a lot to be sorted out here.

As I said, we will have a structured conversation here, which we'll break into a series of lessons. This is an introduction to the conversation that's coming. All of this relates specifically to this movement that you started: effective altruism. We will get very clear about what that means and what it may yet mean, but this does connect to deeper and broader questions like “How should we think about doing good in the world in general?”, “What would it mean to do as much good as possible?”, and “How do those questions connect to questions around what sort of person I should be, or what it means to live a truly good life?” 

These are questions that lie at the core of moral philosophy, and at the core of any person's individual attempt to live an examined life, develop an ethical code, and form a vision of what a good society might be. We're all personally attempting to improve our lives, but we're also trying to converge on a common picture of what it would mean for us to be building a world that is making it more and more likely that humanity is moving in the right direction. We have to have a concept of what the goal is, or what a range of suitable goals might be, and we have to have a concept of when we're wandering into moral error personally and collectively. 

There's a lot to talk about here. Talking about the specific act of trying to help people, trying to do good in the world, really sharpens our sense of the stakes and the opportunities. I'm really happy to be getting into this with you. 

Addressing charity skeptics

Before we cover what effective altruism is, I think we should address a basic skepticism that people have — even very rich people, or perhaps especially rich people. It's a skepticism about altruism itself, and in particular, a skepticism about charity. I think there are some good reasons to be skeptical about charity, at least in a local context. And then there are some very bad reasons. 

I want to lob some of these reasons to you, and then we can talk about them. I meet (and would imagine you've encountered) some very fortunate people who have immense resources and can do a lot of good in the world, and who are fundamentally skeptical about giving to charity. 

One bad reason for this skepticism that I always encounter is something that we might call “the myth of the self-made man.” The idea is that it's somehow an ethically impregnable position to notice all of the ways in which you are responsible for all of your good luck, no matter how distorted this appraisal might be. You weren't born into wealth; you made it all yourself; you don't owe anyone anything. [People who subscribe to this myth believe that] giving any of the resources you’ve acquired to people who are less fortunate than you is not helping them in the end. You want to teach people to fish, but you don't want to give them fish. There's an Ayn Randian ethic of radical selfishness combined with a vision of capitalism wherein free markets can account for every human problem simply by all of us behaving like atomized selves, seeking our own happiness. 

People who have listened to me will not be surprised to hear that I think there's something deeply flawed in this analysis. But what do you do when someone hits you with the ethical argument that they're self-made, that everyone should aspire to also pull themselves up by their own bootstraps, and that we falsify something about the project of living a good life by even thinking in terms of altruism and charity?

Will: I think there are a few things to say here. First, I do disagree with the premise of someone being a self-made man. Roughly 80% of the variation in a person's income can be predicted just from their place of birth. You could be the hardest-working Bangladeshi in the world, but if you're born into extreme poverty in Bangladesh, it's going to be very difficult indeed to become a billionaire. I agree with you that that's a myth. 

But even if we accept that idea, the fact that you have rightly earned your money yourself doesn't mean that you don't have any obligations to help other people. Peter Singer’s very famous thought experiment is this: You walk past a pond. It's a fairly shallow pond. You could easily wade in as deep as you like, and you can see that there's a child drowning there. Now, perhaps it's the case that you’re an entirely self-made man; perhaps it's the case that the suit that you’re wearing is one that you justly bought yourself. That seems neither here nor there with respect to whether you ought to try to wade in and save this child. I think that's quite an intuitive position. 

In fact, this ideal of self-actualization — of being the best version of yourself that you can be — is an admirable version of this otherwise quite dark perspective on the world. I think that part of being a self-actualized, authentically-living person is living up to your ideals and principles. Most people in the world want to be helpful, altruistic people acting in accordance with their deepest values. That means living an authentic and self-actualized life.

The second point is about whether charity is actually harmful because it makes people rely on handouts. In the case of public goods or externalities, there is market failure; markets don't necessarily do what they ought to do. Perhaps you want government to step in and provide police, or defense, or street lights, or taxes against climate change. Even the most hardcore libertarian free-market proponent should accept that's a good thing to do sometimes.

But there are also cases of democratic failure. What if the people are not protected by functioning democratic governments? That's true for people in poor countries. That's true for non-human animals. That's true for people who are yet to be born and don't have a vote. Future people are disenfranchised. We shouldn't expect markets or the government to be taking appropriate care of those individuals who are disenfranchised by both the market, and even by democratic institutions. What else is there apart from philanthropy?

Sam: Yeah. I've spoken a lot about the myth of the self-made man. Whenever I criticize the notion of free will, it's just obvious that however self-made you are, you didn't create the tools by which you made yourself, right? If you are incredibly intelligent or have an immense capacity for effort, you didn't create any of that about yourself, obviously. You didn't pick your parents, you didn't pick your genes, you didn't pick the environmental influences that determined every subsequent state of your brain, right? You didn't create yourself. You won some sort of lottery. 

But as you point out, Will, where you were born also was a major variable in your success. You didn't create the good luck of not being born in the middle of a civil war in a place like Congo or Syria, or anywhere else that would be hostile to many of the things you now take for granted.

Frankly, there's something obscene about not being sensitive to those disparities. As you point out, living a good life and being the sort of person you rightly want to be has to entail some basic awareness of those facts and a compassionate impulse to make life better for people who are much less fortunate. If your vision of who you want to be doesn't include being connected to the rest of humanity and having compassion — not even when the need becomes proximate, when you're walking past Singer's shallow pond and see someone drowning — then we have a word for it: it's sociopathy or psychopathy. It's a false ethic to be so inured to the suffering of other people that you can just decide to close your accounts without even having to pay attention to it, all under the rubric of being self-made. 

None of this is to deny that in many cases, things are better accomplished by business than by charity, or by government than by charity. We're not denying any of that. I happen to think that building electric cars that people actually want to drive may be the biggest contribution to fighting climate change, or is certainly one of them, and may be better than many environmental charities have managed to muster. There are different levers to pull to effect change in the world. 

But what also can't be denied is that there are cases where giving some of our resources to people or to causes that need them more than we do is the very essence of what it means to do good in the world. That can't be disputed. Singer's shallow pond sharpens it with a cartoon example, but it's really not such a cartoon when you think about the world we're living in, and how much information we now have, and how much agency we now have to affect the lives of other people.

We're not isolated the way people were 200 years ago. It is uncontroversial to say that anyone who would walk past a pond and decline to save a drowning child out of concern for his new shoes or his new suit is a moral monster. None of us want to be that sort of person, and what's more, we're right to not want to be that sort of person. Given our interconnectedness, and given how much information we now have about the disparities in luck in this world, we have to recognize that although we're conditioned to act as though people at a distance from us — both in space and in time — matter less than people who are near at hand, if it was ever morally defensible, it's becoming less defensible because the distance is shrinking. We simply have too much information. There are just so many ponds that are in view right now, and to which a response is morally important. 

Obligation vs. opportunity

But in our last conversation, Will, you made a distinction that I think is very significant. It provides a much better framing for thinking about doing good. It was a distinction between obligation and opportunity. The obligation is Singer's shallow pond argument. You see a child drowning, and you really do have a moral obligation to save that child. There's just no way to maintain the sense that you're a good person if you don't. Then he forces us to recognize that we stand in that same relation to many other causes, no matter how distant we imagine them to be. But you favor the “opportunity” framing of racing to save children from a burning house. Imagine how good you would feel doing that successfully. Let's just put that into play here, because I think it's a better way to think about this whole project.

Will: Yeah, exactly. As I was suggesting earlier, for most people around the world — certainly in rich countries — if you look at your own values, one of those is being a good person. You can see this if you think about examples. You see a building on fire. There's a young girl at the window. You kick the door down, run in, and rescue that child. That moment would stay with you for your entire life. You would reflect on that in your elderly years and think, “Wow, I actually did something that was pretty cool.”

Sam: It's worth lingering [on that point], because everyone listening to us knows, down to their toes, that that would be, if not the defining moment in their life, in the top five. You could live to be 150 years old and that would still be one of the top five most satisfying experiences of your life. It's amazing to consider how opaque this is to most of us, most of the time, when we think about the opportunities to do good in the world.

Will: Exactly. Continuing this [line of thought], imagine if you did a similar thing several times. One week you save someone from a burning building, the next week you save someone from drowning, the month after that you see someone having a heart attack and you perform CPR, saving their life, too. You'd think, “Wow, this is a really special life that I'm living.” 

The truth is that we have the opportunity to be that kind of moral hero — in fact, much more of a moral hero — every single year of our lives. We can do that just by targeting our donations to the most effective charities, to help those people who are poorest in the world. We can also do it by choosing a career that's going to have a really big impact on the lives of others.

It seems very unintuitive because we're in a very unusual place in the world. It's only over the last few hundred years that there has been such a wild discrepancy between rich countries and poor countries, where people in rich countries have 100 times the income of the poorest people in the world, and where we have the technology to be able to change the lives of people on the other side of the world — let alone the kind of technologies to imperil the entire future of the human race, such as through nuclear weapons or climate change. 

Our moral instincts are just not attuned to that at all. They are just not sensitive to the sheer scale of what an individual is able to achieve if he or she is trying to make a really positive difference in the world.

When we look at the heroes of history, like the famous abolitionists William Wilberforce or Frederick Douglass, who campaigned for the end of slavery, and the amount of good they (or other great leaders) did, we think, “Wow, these are really special people because of the amount they accomplished.” I actually think that's just as attainable for many people around the world.

Perhaps you're not going to do as much as those who contributed to the abolition of slavery. But you are someone who can potentially save hundreds of thousands of lives, or make a very significant difference in the course of the future to come.

The basic definition of EA

Sam: That's a great place to start. Now we will get into the details. Let's get into effective altruism. How do you define it at this point?

Will: The way I define effective altruism is this: It's about using evidence and careful reasoning to try to figure out how to do as much good as possible, and then taking action on that basis. The real focus is on the most good. That's so important. People don't appreciate just how great the difference in impact between organizations is. When we've surveyed people, they seem to think that the best organizations are maybe 50% better than typical charities, but that's not really how things are. Instead, the best charities are more like hundreds or thousands of times better than typical organizations. 

We see this across the board when comparing charities and different sorts of actions. For global health, you will save hundreds of times as many lives by focusing on anti-malaria bednets and distributing them, as you will by focusing on cancer treatment. In the case of improving the lives of animals, you'll help thousands of times more animals by focusing on factory farms than if you help animals by focusing on pet shelters. If you look at the risks to the future of civilization, man-made risks like novel pandemics are plausibly a thousand times greater in magnitude than natural risks like asteroids. 

That means we can focus not just on doing some amount of good, but on doing the [greatest amount of good]. This is so important. It's easy to [ignore] how wild this fact is. Imagine if this [concept were applied to] consumer goods. At one store, a beer costs $100, and at another, it costs $0.10. That would be completely mad, but that's the way things are in the world of trying to do good. It's like there’s a 99.9% off sale, or a 100,000% extra fee, depending on which organizations you focus on. [Donating to the most effective ones is] the best deal you'll ever see in your life. And that's why it's so important for us to highlight this.

Sam: Okay. I’ll summarize effective altruism for myself now. This is a working definition, but it captures a few of the areas of focus and the difference between solving problems with money and solving problems with your time or your choice of career. 

In your response to my question, you illustrated a few different areas of focus. You could be talking about the poorest people in the world, or you could also be talking about long-term risk to all of humanity. The way I'm thinking about it now is that it's a question of using our time and/or money to address one or more of those areas of focus.

The question of effectiveness [entails], as you point out, many different levels of competence and clarity around goals. There may be very effective charities that are targeting the wrong goals, and ineffective charities targeting the right ones. This does lend some credence to the skepticism about charity itself that I referenced earlier.

There's one example here, which does a lot of work in illustrating the problem. This is something that you discuss in your book Doing Good Better, which I recommend that people read. Remind me of the ill-fated PlayPump.

The PlayPump story; how businesses and nonprofits fail differently

Will: Yeah. The now infamous PlayPump was a program that got a lot of media coverage in the 2000s and even won the World Bank Development Marketplace Award. The idea was based on identifying a true problem — that many villages in Sub-Saharan Africa do not have access to clean drinking water. The idea was to install a kind of children's merry-go-round or roundabout, for children to push, jump on, and spin around. That would harness the power of children's play in order to provide clean water for the world. By pushing this merry-go-round, you would pump up water from the ground. It would act like a hand pump, providing clean water to the village. 

Some people loved this idea. The media loved it, saying things like, "Providing clean water is child's play," or making some other pun on it. It was a media hit. But this intervention was a disaster. None of the local communities were consulted about whether they wanted a pump. They liked the much cheaper, more productive, easier-to-use Zimbabwe hand pumps, which were sometimes, in fact, replaced by these PlayPumps. Moreover, the PlayPumps were so inefficient that one journalist estimated the children would have to play on the pump for 25 hours per day in order to provide enough water for the local community. But obviously children don't want to play on a merry-go-round all of the time. So, it would be left to the elderly women of the village to push this brightly colored PlayPump around and around.

Sam: One of the problems was that it didn't actually function like a merry-go-round, where you gather momentum and keep spinning. It actually was work to push.

Will: Yes, exactly. The point of a children's merry-go-round is that you push it, and then you spin. If it's very well greased [and operating well], it spins freely. But you need to be providing energy into the system in order to pump water up from the ground. It wouldn't spin freely in the same way. It was enormous amounts of work, and children would find it very tiring.

Sam: So there was a fundamental engineering misconception behind delivering this pump in the first place?

Will: Yeah, absolutely. Then there's the question of why you would think you can just go in and replace something that has already been quite well-optimized to the needs of the local people. If this was such a good idea, you must ask, “Why wasn't it already invented? Why wasn't it already popular?” If there's not a compelling story about it being a public good or something, then there's a reason why it wouldn't have already been developed. 

There’s also the fact that the main issue, in terms of water scarcity for people in the poorest countries, is access to clean water (more so than access to water). That’s why programs like Chlorine Dispensers for Safe Water install chlorine at the point of source. So, at these hand pumps, there are chlorine dispensers, allowing people to easily put chlorine into the jerrycans that they use to carry water. That sanitizes the water. These [dispensers] are much more effective, because the issue is really dirty water, rather than access to water, most of the time.

Sam: This functions as a clear example of the kind of things that can happen when the story is better than the reality of a charity. If I recall correctly, there were celebrities who got behind this and I think they raised tens of millions of dollars for the PlayPump. Even after the fault in the very concept was revealed, they persisted. They got locked into this project. I can't imagine that it persists to this day, but they kept doubling down in the face of the obvious reasons to abandon it, including kids getting injured on these things and having to be paid to run them. It was a disaster any way you look at it. 

This is what happens in various charitable enterprises, and this is what you want to avoid if you're going to be effective as an altruist.

Will: Yeah, absolutely. As to whether the PlayPump [persists], I haven't checked recently. But when I did a few years ago, they were still going. They were funded mainly by corporations like Colgate Palmolive, and obviously in a much diminished capacity, because many of these failures were brought to light (that was a good part of the story). 

But what it does illustrate is a difference between the world of nonprofits and the business world. In the business world if you make a really bad product, and if the market's functioning well, then the company will go out of business. It just won't be able to sell the product, because the beneficiaries of the product are also the people paying for it. In the case of nonprofits, the beneficiaries are different from the people paying for the goods. So there's a disconnect between how well you can fundraise and how good the program is that you're implementing. The sad fact is that bad charities don't die — or not nearly enough of them do.

Sam: Actually, that brings me to a question about perverse incentives that I think animates the more intelligent skepticism [about charities]. It is on precisely this point that charities, good and bad, can be incentivized to merely keep going. Just imagine a charity that solves a problem — for example, let’s say your charity is trying to eradicate malaria. You raise hundreds of millions of dollars to that end. What happens to your charity when you actually eradicate malaria? 

We're obviously not in that position with respect to malaria, unfortunately. But there are many problems for which charities are never incentivized to acknowledge that significant progress has been made, and the progress is such that it calls into question whether the charity should exist for much longer. I'm unaware of charities that are explicit about their aspiration to put themselves out of business because they're so effective (although there may be some).

Will: Yeah. I have a great example of this going wrong. One charity I know of is called ScotsCare. It was set up in the 17th century after the union of England and Scotland. Many Scots migrated to London, and they were among the indigent there, so it made sense for it to be founded as a nonprofit ensuring that poor Scots had a livelihood, a way to feed themselves, and so on. 

Is it the case in the 21st century that poor Scots in London are the biggest global problem? No, it's not. Nonetheless, ScotsCare continues to this day, over 300 years later. 

Are there examples of charities that explicitly would want to put themselves out of business? I mean, Giving What We Can, which you joined, is one. Our ideal scenario is a situation where the idea that you would join a community because you're donating 10% is just weird. If you become vegetarian, it’s very rare that you join a vegetarian society. Or if you decide not to be a racist or a liar, it’s not like you join the “no liars society” or the “no racists society.” 

That is what we're aiming for: a world where it's utterly common-sense that if you're born into a rich country you should use a significant proportion of your resources to try and help other people, impartially considered. [In fact, it’s so common-sense that] the idea of needing to be part of a community or club [devoted to making a donation pledge] wouldn't even cross your mind. The day that Giving What We Can is not needed will be a very happy day, from my perspective.

Misconceptions about EA

Sam: Let's talk about any misconceptions that people might have about effective altruism, because the truth is I've had some myself when preparing to have conversations with you and your colleague, Toby Ord (he has also been on the podcast). 

My first notion of effective altruism was very much inspired by Peter Singer's “shallow pond” story, in that it really was, almost by definition, just a matter of focusing on the poorest of the poor in the developing world. That's the long and the short of it. You're giving as much as you possibly can sacrifice, but the minimum bar would be 10% of your income. What doesn't that capture about effective altruism?

Will: Thanks for bringing that up, because it is a challenge we face. The ideas that have spread are the most memetic, and not necessarily those that most accurately capture where the effective altruism movement is, especially today. 

As you say, many people think that effective altruism is just about earning as much money as possible to give to GiveWell-recommended global health and development charities. But I think there are at least three ways in which that misconstrues things:

  1. There are a wide variety of causes that we're focused on now. In fact, among the most engaged people in effective altruism, the biggest focus now is on making sure that things go well for the very many future generations to come, such as by focusing on the existential risks that Toby Ord talks about, such as man-made pandemics and AI. Animal welfare is another cause area. It's by no means the main focus, but is a significant minority focus. There are just a lot of people trying to get better evidence and understanding of these and a variety of other issues, too. Voting reform is something that I have funded and championed to an extent. And I’d be really interested in more people working on the risk of war over the coming century.
  2. The large majority of people within the effective altruism community are trying to make a difference not primarily via their donations — although often they donate, too — but through their career choice. They work in areas like research, policy, and activism.
  3. We really don't think of effective altruism as a set of recommendations, but rather as a research project and methodology. It's more like aspiring towards the scientific revolution than any particular theory. What we're really trying to do is do for the pursuit of good what the scientific revolution did for the pursuit of truth. It's an ambitious goal, but we’re trying to make the pursuit of good a more rigorous and scientific enterprise, and for that reason we don't see the movement as a set of claims, but rather as a living, breathing, and evolving set of ideas.

Sam: Yeah. I think it's useful to distinguish at least two levels here. One is the specific question of whether an individual cause or charity is a good one. By what metric would you even make that judgment, and how do we rank-order our priorities? All of that is getting into the weeds of what we should do with our resources. And obviously that has to be done, and I think the jury is very much out on many of those questions. We'll get into those details [later in this podcast]. 

The problem with moral intuitions

But the profound effect that your work has had on me thus far [involves] this other level, which is simply the stark recognition that I want to do good in the world by default, and I want to engineer my life such that that happens whether I'm inspired or not.

The crucial distinction for me has been to see that there's the good feeling we get from philanthropy and doing good, and then there are the actual results in the world. Those two things are only loosely coupled. One of the worst things about us that we need to navigate around as we live our lives, or at least be aware of, is that we human beings tend not to be the most disturbed by the most harmful things we do, and we tend not to be the most gratified by the most beneficial things we do. We tend not to be the most frightened by the most dangerous risks we run. 

We're very easily distracted by good stories and other bright, shiny objects, and the framing of a problem radically changes our perception of it. 

When you came on my podcast four years ago, the effect was for me to realize that the Against Malaria Foundation is among the top of GiveWell’s most effective charities. I recognize in myself that I'm just not very excited about malaria or bednets. The problem isn't the sexiest for me. The remedy isn't the sexiest for me. Yet I rationally understand that if I want to save human lives, this is, dollar for dollar, the cheapest way to do so. The epiphany for me was that I just want to automate this. You can just give every month to this charity without having to think about it. 

That is gratifying to me to some degree, but the truth is I almost never think about malaria, the Against Malaria Foundation, or anything related to this project. But I'm doing good anyway, because I just decided to not rely on my moral intuitions day to day. I just decided to automate it. I recognized the value of committing in a way such that you no longer have to keep being your better self on [a particular topic like malaria] every day of the week. It's just wiser and more effective to decide in your clearest moment of deliberation what you want to do, and then just build the structure to actually do that thing. 

That's just one of several distinctions that you have brought into my understanding of how to do good.

Will: Yeah, absolutely. We must recognize that we are these fallible, imperfect creatures. Donating is much like paying your pension or something. You might think, “Oh, I really ought to do that, but it's just hard to get motivated.” We need to exploit our own irrationality, and I think that comes in two stages. 

The first is building up the initial motivation so that you can sustain it. Perhaps you feel moral outrage, or just real yearning to start to do something. In my own case, when I was deciding how much I should try to commit to giving away over the course of my life, I looked up images of children suffering from horrific tropical diseases. That really stayed with me and gave me the initial motivation. 

I still get that [feeling] if I read about the many close calls we had — for example, times when we’ve almost had a nuclear holocaust over the course of the 20th century, or what the world would have been like if the Nazis had won the Second World War and created a global totalitarian state. I was recently reading 1984, and again, just thinking about how bad and different the world could be can really create a sense of urgency. Or [consider] the news, and the moral outrages we see all of the time. 

The second stage is how we direct our motivation. In your own case, you say, “Every time I produce a podcast, I donate $3,500, and it saves a life.” That’s a good approach. Similarly, you can have a system where every time a paycheck comes in, 10% of it doesn't even enter your bank account. It immediately goes to an effective charity that you've carefully thought about. 

There are other hacks, too. Public commitments are a really powerful one. There's no way I'm backing out of my altruism now. Too much of my identity is wrapped up in it. Even if someone offered me a million pounds, and I could skip town, I wouldn't want to do it. It's part of who I am. It's part of my social relationships, and that's very powerful too.

San Francisco vs. Bangladesh

Sam: Actually, I want to push back a little bit on how you are personally approaching giving, because I think I have some rival intuitions here. I want to see how they survive contact with your sense of how you should live. 

When we think of causes that meet the test of effective altruism, they seem to be weighted toward some obvious extremes. For example, when you look at the value of a marginal dollar in Sub-Saharan Africa or Bangladesh, you get so much more of a lift in human well-being for your money than you seem to get in a place like the United States or the UK, that by default you generally have an argument for doing good elsewhere rather than locally. 

But I'm wondering if this breaks down for a few reasons. I might just take an example, like the problem of homelessness in San Francisco right now — leaving aside the fact that we don't seem to know what to do about homelessness. It appears to be a very hard problem to solve; you can't just build shelters for the mentally ill and substance abusers and call it a day, because people quickly find that they don't want to be in those shelters, and they're back out on the streets. And so you have to figure out what services you're going to provide. There are all kinds of bad incentives and moral hazards: if you're the one city that does it well, then you're the city that attracts the world's homeless.

But let's just assume for the sake of argument that we knew how to spend money so that we could solve this problem. Would solving the problem of homelessness in San Francisco stand a chance of rising to near the top of our priorities in your view?

Will: Yes. It would all depend on how the cost of solving homelessness compared with other opportunities. In general, it's going to be the case that the very best opportunities to improve lives are going to be in the poorest countries, because the very best ways of helping others have not yet been [applied there]. Malaria was wiped out in the US by the early 20th century; it's an easy and fairly cheap problem to solve. 

When we look at rich countries, the problems that are still left are the comparatively harder ones to solve (for whatever reason). In the case of homelessness — and I'm not sure about the original source of this fact — for those who haven't ever lived in the Bay Area: the problem of homelessness is horrific there. There are people with severe mental health issues and clear substance abuse issues everywhere on the streets. It's so prevalent. It just amazes me that one of the richest countries in the world, and one of the richest places within that country, is unable to solve this problem. 

But I believe that, at least in terms of funding at the local level, there's about $50,000 spent per homeless person in the Bay Area. What this suggests is that the problem is not to do with a lack of finances. Perhaps [this is due to] some perverse incentives effect, perhaps it's government bureaucracy, perhaps it’s some piece of legislation. I don't know — it's not an issue that I know enough about. But I can safely believe that because the US and the San Francisco Bay Area are so rich, if this was something where we could turn money into a solution to the problem, it would probably have happened already. 

That's not to say we'll never find issues in rich countries where you can do an enormous amount of good. At Open Philanthropy, which is a core effective altruist foundation, one program area is criminal justice. I believe Open Philanthropy started [researching that program area] about five years ago. They found that funding changes to legislation that could end the absurd rates of over-incarceration in the US could be beneficial to Americans. (For context, the US incarcerates five times as many people as the UK does, on a per-person basis.) There's a lot of evidence suggesting you could reduce that rate significantly without changing rates of crime. It seemed to be comparable with the best interventions in the poorest countries. 

Of course, this issue has now received more focus. I believe that they're finding it harder to make a difference by funding organizations that wouldn't have otherwise been funded. But this is at least one example of an opportunity that, for whatever reason, has not yet been funded. You can do as much good. It's just that I think those opportunities are, comparatively speaking, much harder to find.

Sam: I think that this gets complicated for me when you look at targeting a reduction in suffering. It is very easy to count dead people. If we're just talking about saving lives, that's a pretty easy thing to calculate. If we can save more lives in Country X than in Country Y, then that seems like it's a net good to be spending our dollars in Country X. 

But when you think about human suffering — and when you think about how so much of it is comparative — the despair of being someone who has fallen through the cracks in a city like San Francisco could well be much worse. I don't know what data we have on this. But there's certainly a fair amount of anecdotal testimony that, while it's obviously terrible to be poor in Bangladesh and there are many reasons to want to solve that problem, by comparison, homeless people on the streets of San Francisco [suffer more]. 

They're not nearly as poor as the poorest people in Bangladesh, of course. Nor are they politically oppressed in the same way; by global standards, they're barely oppressed at all. But it wouldn't surprise me if, were we able to do a complete psychological evaluation or just trade places with people in each condition, we discovered that a person who is living in one of the richest cities in the world, and is homeless, drug-addicted, and mentally ill (to pick from that menu of despair), is actually experiencing the worst suffering on earth. (Again, we have to stipulate that we would need to be able to solve this problem, dollar for dollar, in a way that we admit we don't know about at the moment.) 

It seems like simply tracking the GDP in each place, the amount of money it would take to deliver a meal or get someone clothing or shelter, and the power of the marginal dollar calculation doesn't necessarily capture the deeper facts of the case. Or at least that's my concern.

Will: I'd actually agree with you [when considering the case of] someone who is mentally unwell, has drug addictions, and is homeless in the San Francisco Bay Area. How bad is their typical day [relative to] someone living in extreme poverty in India, or Sub-Saharan Africa? I wouldn't want to make a claim that a homeless person in the US has a better life than the extremely poor. I think it's not so hard to hit rock-bottom in terms of human suffering. I do think that the homeless in the Bay Area seem to have really terrible lives. 

The question, in terms of the difference of how promising it is as a cause, is much more to do with whether the low-hanging fruit has already been taken.

Just think about the sickest you've ever been and how horrible that was. And now think about having malaria for months, and that you could have avoided it for a few dollars. That's an incredible fact. And that's where the real difference is, I think: in the cost to solve a problem rather than necessarily in the per-person suffering. Because while rich countries are, in general, happier than poor countries, the lives of the worst-off people — especially in the US, which has such a high variance in life outcomes — can easily be much the same [as those of people who suffer in poor countries].

Sam: Yeah. There are some other concerns that I have. One speaks to a deeper problem with consequentialism, which is our orientation here. People can mean many things by that term. But there's a problem in how you keep score, because obviously there are bad things that can happen which have massive silver linings — i.e., good consequences in the end. And there are good things that happen which actually have bad consequences elsewhere or in the fullness of time. It's hard to know when you can assess the net outcomes, or how you get to the bottom line of the consequences of any action. 

For example, [consider] the knock-on effects of letting a place like San Francisco become, effectively, a slum. Think of the exodus in tech from California at this moment. I don't know how deep or sustained it'll be, but I've lost count of the number of people in Silicon Valley who I've heard are leaving. And the homelessness in San Francisco is very high on the list of reasons why. That strikes me as a bad outcome that has far-reaching significance for society. 

Again, this kind of thing is not captured by just counting bodies or looking at how cheap it is to buy bednets. And I'm struggling to find a way of framing this that is fundamentally different from Singer's shallow pond story, that allows for some of the moral intuitions that I think many people have here, one of which is that there's an intrinsic good in having a civilization that is producing the most abundance possible. We want a highly technological, creative, beautiful civilization. We want gleaming cities with beautiful architecture. We want institutions that are massively well-funded producing cures for diseases, rather than just things like bednets. We want beautiful art. 

There are things that we were right to want, and that are only compatible with the accumulation of wealth in certain respects. From Singer's framing, those intuitions are just wrong, or at least they're premature. And on some level, we have to save the last child in the last pond before we can think about funding the Metropolitan Museum of Art. 

Many people are allergic to that intuition for reasons that I understand, and I'm not sure that I can defeat Singer's argument. But I have this image: we have a lifeboat problem. You and I are in the boat. We're safe. The question is: How many people can we pull into the boat and save as well? As with any lifeboat, there's a problem of capacity. We can't save everyone all at once. But we can save many more people than we've saved thus far. 

But the thing is, we have a fancy lifeboat. Civilization itself is a fancy lifeboat. There are obviously people drowning, and we're saving some of them. And you and I are now arguing that we can save many more — and we should save many more. Anyone listening to us is lucky to be here safely in this lifeboat with us. And the boat is not as crowded as it might be, but we do have finite resources.

The truth is, because it's a fancy lifeboat, we are spending some of those resources on things other than reaching over the side and pulling in the next drowning person. There's a bar that serves very good drinks, and we've got a good internet connection so that we can stream movies. While this may seem perverse, again, if you extrapolate from here, you realize that we’re talking about civilization, which is a fancy lifeboat. And there's obviously an argument for spending a lot of time and money saving people and pulling them in. But I think there's also an argument for making the lifeboat better and better, so that we have more smart, creative people incentivized to spend some time at the edge, pulling people in, with better tools — tools that they only could have made had they spent time elsewhere in the boat. 

This moves to the larger topic of just how we envision building a good society, even while there are moral emergencies right now, somewhere, that we need to figure out how to respond to.

Will: Yeah. This is a crucially important set of questions. The focus on knock-on effects is important. Again, let's just take the example of saving a life. You don't just save a life, because that person goes on and does stuff. Have they made the country richer? Perhaps they have kids. Perhaps they will emit CO2 (that's a negative consequence). They'll innovate or invent things, or maybe [create or look at] art. There's this huge stream of consequences from now until the end of time. And it's quite possible that the knock-on effects, while much harder to predict, have much bigger effects than the short-term benefits of saving the person’s life.

In the case of homelessness in the Bay Area versus extreme poverty in a poor country, I'd want to say that if we're looking at the knock-on effects of one, we want to do the same for the other. One thing I worry about over the course of the coming years is the possibility of a war between India and Pakistan. But it's a fact that rich democratic countries seem to not go to war with each other. So one knock-on effect of saving lives or helping development in India is perhaps that we get to the point where India is rich enough to not want to go to war, because the cost benefit doesn't pay out in the same way. That would be another potentially good knock-on effect. 

That's not to say that the knock-on effects favor an extreme-poverty intervention compared to a homelessness intervention. It's just that there are so many of them. It's very hard to understand how these play out. 

You also mentioned that we want to achieve great things. We want to achieve the highest apogees of art and of development. Personally, I'm sad that I will never get to see the point in time when we truly understand science and have figured out its fundamental laws (especially its fundamental physical laws). But [I’m also sad that I will miss] the great experiences and peaks of happiness that make the very greatest achievements of the present day — today’s very greatest peaks of joy and ecstasy — seem insignificant in comparison. That’s something that I do think is important. 

I think that once you [decide] to take these knock-on effects seriously, that's the sort of reasoning that leads you to start thinking about what I call longtermism, which is the idea that the most important aspect of our actions is the impact we have over the very long run. Longtermism makes us want to prioritize things like ensuring that we don't have some truly massive catastrophe as a result of a nuclear war or a man-made pandemic that could derail this process of continued economic and technological growth that we seem to be undergoing. 

Or, longtermism could make us want to avoid certain bad-value states like the lock-in of a global totalitarian regime, which is another thing that I'm particularly worried about in terms of the future of humanity. Or perhaps it is just that we're worried that technological and economic growth will slow down, and what we want to do is spur continued innovation into the future. I think there are really good arguments for that. 

But if that’s your aim, I would be surprised if the best route is via focusing on homelessness in the Bay Area, rather than by aiming at those ends more directly.

Sam: Okay. I think we're going to return to this concept of the fancy lifeboat at some point. I do want to talk about your personal implementation of effective altruism in a subsequent lesson, but for the moment let's get into the details of how we think about choosing a cause. 

So how do we? I've had my own adventures and misadventures with this since I took your pledge. Before we get into the specifics, I want to point out that it has had a really wonderful effect on my psychology, which is this: I think I've always been, by real world standards, fairly charitable; giving to organizations that inspire me, or that I think are doing good work, is not a foreign experience for me. But since connecting with you — and taking the pledge — I'm now aggressively charitable. 

This has created a feeling of pure pleasure. There's a kind of virtuous greed that is kindled when you help others. And rather than seeing giving as an obligation, it really feels like an opportunity. You want to run into that building and save the girl at the window. But across the street there's a boy at the window, and you want to run in there, too. So, this is actually a basis for psychological well-being. It makes me happy to put my attention in this direction. It's the antithesis of feeling like an onerous obligation.

Anyway, I'm increasingly sensitive to causes that catch my eye and that I want to support, but I'm aware that I am a malfunctioning robot with respect to my own moral compass. As I said, I know that I'm not as excited as I should be about bednets that stave off malaria. I'm giving to that cause nonetheless, because I recognize that the analysis is almost certainly sound.

For me, what's interesting here is that when I think about giving to a cause that really doesn't quite pass the test [in terms of being a highly effective charity], then that achieves the status for me of a guilty pleasure. I feel a little guilty that I gave that much money to a homeless charity, because Will just told me that that's not going to pass the test.

So, that donation must be above and beyond the 10% I pledged to the most effective charities. Having to differentiate between charitable donations that meet the test and those that don't is an interesting project, psychologically. I don't know. It's just very different territory than I've ever been in with respect to philanthropy.

Funding new organizations

This raises the issue of charities that are newly formed, and therefore don’t yet have a long track record. I happen to know some of the people who created [this particular charity focused on homelessness]. How can you fund a new organization, with all of these other established organizations that have track records that you can assess competing for your attention?

Will: The first thing I want to address is the question of whether this counts towards the pledge [of donating 10%]. I definitely want to disabuse people of the notion that we think of ourselves as the authority on what is effective. These are our best guesses.

GiveWell and other organizations have put enormous amounts of research into this, but they're still estimates. There are plenty of things you can disagree with. It's actually quite exciting to have someone come in and start disagreeing with us, because maybe we're wrong, and that's great. We can change our minds and form better beliefs.

The second thing I’d like to say is that early-stage charities absolutely can compete with charities with a more established track record. It’s similar to how you might think about financial investments. Investing in bonds or the stock market is a way of making a return, but so is investing in startups. And if you had the view that you should never invest in startups, then that would definitely be a mistake. 

Actually, quite a significant proportion of GiveWell's expenditure each year is on early-stage nonprofits that have the potential in the future to become top recommended charities. So there's a set of questions that I would ask about any organization I'm looking at.

There are some things that we know do enormous amounts of good, and have an enormous amount of evidence supporting them. And so, we want to focus on things where either there's very promising evidence and we could potentially do more, or it’s the nature of [the intervention or cause] that we cannot get very high-quality evidence, but we have good, compelling arguments for thinking that this might be super important.

For example, [there are strong reasons to consider funding] clean energy innovation, new developments in carbon capture and storage, or nuclear power. It's not like you can do a randomized controlled trial for that, but I think there are good theoretical arguments for believing that might be an extremely good way of combating climate change. 

It's worth bearing in mind that saying something is the very best thing you can do with your money is an extremely high bar. If there are tens of thousands of possible organizations, there can only be one or two that have the biggest bang for the buck.

Giving as a guilty pleasure

Sam: All right. Well, it sounds like I'm opening a “guilty pleasures fund” to run alongside the “Waking Up Foundation.”

Will: I'm very glad that you think of them as pleasures. It’s a good instinct to find out about bad problems in the world and feel motivated to want to help solve them. I don't think you should be beating yourself up, even if it doesn't seem like the optimal way to donate.

Sam: No, I'm not. In fact, I have an even guiltier pleasure to report. It was not through a charity; it was just a personal gift. This does connect back to the kind of lives we want to live, and how that informs this whole conversation.

I was listening to the New York Times Daily podcast. This was when the COVID pandemic was really peaking in the US, and everything seemed to be in free fall. They profiled a couple who had a restaurant, I think it was in New Orleans, and they had an autistic child. Everyone knows that restaurants were among the first businesses crushed by the pandemic, for obvious reasons, and it was just a very affecting portrait of this family trying to figure out how they were going to survive, and get their child the help she — I think it was a girl — needed.

It was a “little girl fell down the well” sort of story, compared to the genocide that no one can pay attention to because genocides are just boring. I was completely aware of the dynamics of this: helping these people could not survive comparison with simply buying yet more bednets. Yet, the truth is I really wanted to help these people, so I just sent them money out of the blue. 

There are two things that arise in defense of this kind of behavior. It feels like an orientation that I want to support in myself, because it does seem like a truly virtuous source of mental pleasure. I mean, it's better than almost anything else I do when I spend money selfishly. And psychologically, it's both born of a felt connection and it ramifies that connection. There's something about honoring that bug in my moral hardware, rather than merely avoiding it, that seems like it's leading to greater happiness — whether that comes from helping people in the most effective ways, or in only middlingly effective ones.

Feeling what I felt doing that is part of why I'm talking to you now and trying to truly get my philanthropic house in order. So, it seems all of a piece here. And I do think we need to figure out how to leverage the salience of connection to other people, and the pleasure of doing good. If we lose sight of that, if we just keep saying that you can spend $2,000 here, which is better than spending $3,000 over there — completely disregarding the experience people are having when they engage with the suffering of others — I feel like something is lost.

Another variable [I’ll mention is that while] this wasn't an example of a local problem I was helping to solve, had it been one, and had I been offered the opportunity to help my neighbor at greater-than-rational expense, that might have been the right thing to do. Again, it's falling into the “guilty pleasure” bin here, compared to the absolutely optimized, most effective way of relieving suffering. 

I feel like there's something lost if we're not ever in a position to honor a variable like locality. We're not only building or affecting the world. We're building our own minds. We're building the very basis by which we would continue to do good in the world in the coming days, weeks, months and years.

Will: Yeah. Essentially, I completely agree with you, and I think it's really good that you supported that family. It reminds me of my own case and something that has stayed with me. I lived in Oakland, California for a while, in a very poor, predominantly Black neighborhood. I was out on a run, and a woman came up to me and asked if I could stop and help for a second. I thought she was going to want help carrying groceries or something. 

It turned out that she wanted me to move her couch all the way down the street. It took two hours out of my working day, and I don't regret the use of that time at all. Why is that? I think it's because, most of the time, we're not facing the big questions that moral philosophy has typically focused on, like what career to pursue. We also face the question of what kind of person to be: What motivations and dispositions do I want to have? 

I think the idea of me becoming this utility-maximizing robot that is utterly cold and calculating all the time is certainly not possible for me, given that I am an embodied human being. It’s probably not desirable, either. I don't think that the effective altruism movement would have started had we all been these cold, utility-maximizing robots.

So, I think cultivating a personality such that you do get joy, reward, and motivation from being able to help people and get that feedback — and make that a part of what you do — can be the best way of living, when you consider your life as a whole. 

Doing those things does not necessarily trade off very much at all. It can perhaps even help [foster] the other things that you do. So, in your case, you get a psychological reward from supporting this poverty-stricken family with a disabled child, or get a reward from helping people in your local community. I presume you can channel that. It can help maintain the motivation to do things that might seem much more alien, or just harder to empathize with. And I think that's okay. I think we should accept and, in fact, encourage that.

I think it's very important that once we take these ideas outside of the philosophy seminar room and actually try to live them, we appreciate the instrumental benefits of doing these kinds of everyday actions. As long as it ultimately helps you stand by your commitment, at least in part, to try to do what we rationally think (all things considered) is going to be best for the world.

Sam: Yeah. You mentioned the variable of time here. This is another misconception about effective altruism — that it's only a matter of giving money to the most effective causes. In fact, you've spent a lot of time thinking about how people should prioritize their time, and how they can do good over the course of their lives through the way they spend it. So, in our next chapter, let's talk about how a person could think about having a career that helps the world.

Okay, we're going to speak more about the question of giving to various causes, and how to do good in the world in terms of sharing the specific resource of money. But we haven't yet talked about one's time. How do you think about time versus money? I know you've done a lot of work on the topic of how people can think about having rewarding careers that are net positive; and you have a website, 80,000 Hours, that you might want to point people to. 

Helping others with time instead of money

So, let's talk about the variable of time, and how people can spend it to the benefit of others.

Will: Great. The organization is called 80,000 Hours, because that's the typical number of hours that you work over the course of your life (approximately a 40-year career, working 40 hours a week, 50 weeks a year). We use that to illustrate the fact that your choice of career is probably, altruistically speaking, the biggest decision you’ll ever make. It's absolutely enormous, yet people spend very little of their time really thinking through that question. 

If you go out for dinner, you might spend 1% of the time that you would spend at dinner thinking about where to eat — maybe a few minutes. But spending 1% of 80,000 hours on your career decision — on what you should do — would be 800 hours. That’s an enormous amount of time.

Take my own case: Why did I do philosophy? I liked it at school. I could have done maths, but my dad did maths and I wanted to differentiate myself from him. I didn't have a very good reasoning process at all. Generally, we just don't pay nearly enough attention to this decision. 

Certainly, when it comes to doing good, you have an enormous opportunity to have a huge impact through your career. And so, what 80,000 Hours does via its website, podcasts, and a small amount of one-on-one advising, is to try to help people figure out which careers will allow them to have the biggest impact.

The question of which charities to donate to is exceptionally hard. But this is an even harder question, because, first of all, you'll probably work at many different organizations over the course of your life, not just one. Second, there’s the question of personal fit. Some people are good at some things and not others. It's a truism. So, how should you think about this?

The most important question, I think, is the question of which cause to focus on, and that involves big-picture, worldview judgments, as well as philosophical questions. (And by “cause,” I mean a big problem in the world like climate change, gender inequality, poverty, factory farming, the possibility of pandemics, or AI lock-in of values.)

We look at those causes in terms of three heuristics: how important they are (the scale of the problem), how neglected they are (how many resources are already devoted to them), and how tractable they are (how easy it is to make progress on them).

Those heuristics explain why effective altruism has chosen the focus areas it has. Those areas include pandemic preparedness, artificial intelligence, climate change, poverty, farm animal welfare, and potentially some others as well, like improving institutional decision-making and some areas of scientific research.

So, that's by far the biggest question, I think, because that really shapes the entire direction of your career — and, depending on the philosophical assumptions you make, can result in enormous differences in impact. For example, do you think animals count at all, or a lot? That would make an enormous difference in terms of whether you ought to be focusing on animal welfare. Similarly, what weight do you give to future generations versus present generations? Potentially, you can do hundreds of times as much good in one cause area as you can in another.

And then, within the question of where to focus, much will depend on the particular cause area. Different causes have different bottlenecks. We tend to find that at the best nonprofits there's often great research being done. This is especially true in more nascent causes like the safe development of artificial intelligence or pandemic preparedness (where more research needs to be done). Policy is often a very good thing to focus on, as well. And, in some areas, especially where money is the real bottleneck, then planning to do good primarily through your donations (and therefore planning to take a job that's more lucrative) can be the way to go.

Sam: Yeah. That's a wrinkle that is kind of counterintuitive to people: the idea that the best way for you to contribute might, in fact, be to pursue a lucrative career that you might be especially well-placed to pursue. And it may have no obvious connection to doing good in the world, apart from the fact that you're now giving a lot of your resources to the most effective charities.

So, if you're a rock star, or a professional soccer player, or just doing something that you love to do, and have other reasons for doing it — but you're also making a lot of money that you can then give to great organizations — it's hard to argue that your time would be better spent working in the nonprofit sector yourself, or doing something where you wouldn't be laying claim to those kinds of resources.

Will: Yeah, that's right. So, a minority of people within the effective altruism community are now trying to do good in their career via the path of what's called “earning to give.” And again, it depends a lot on the cause area. How much money is there relative to the size of the cause already? 

In the case of things like scientific research, AI, or pandemic preparedness, there's clearly a lot more demand for altruistically-minded, sensible, competent people working in these fields than there is money. However, in the case of global health and development, there are interventions and programs that we know work very well, and that we could scale up with hundreds of millions, or even billions, of dollars. And there, money is more of the bottleneck. 

So, going back to these misconceptions about effective altruism: this idea of earning to give is highly memetic. People love how counterintuitive it is, and it is one of the things we believe. But it's a path that only a minority of people in the movement take, and [their decision is] based more on how many people are already working on these causes.

Incentivizing talented people to do nonprofit work

Sam: This raises another point: The whole culture around charity is not optimized for attracting the greatest talent. We have a double standard here, which many people are aware of. I think it's most clearly [articulated] by Dan Pallotta. I don't know if you know him. He gave a TED Talk on this topic. He organized some bike rides across America in support of various causes; I think the main one was AIDS. He might have organized one for cancer as well. 

These are ventures that raised, I think, hundreds of millions of dollars, and I think he was criticized for spending too much on overhead. But it's a choice where you can spend less than 5% on overhead and raise $10 million, or you could spend 30% on overhead and raise $400 million. What should you do?

It's pretty obvious that you should do the latter if you're going to use those resources well. And yet there's a culture that prioritizes having the lowest possible overhead. Also, there's this sense that if you're going to make millions of dollars personally by starting a software company, or by becoming an actor in Hollywood or whatever, there's nothing wrong with that. But if you're making millions of dollars a year running a charity, well then, you're a greedy bastard. We wouldn't fault someone for pursuing a comparatively frivolous, and even narcissistic, career and getting rich in the meantime; but we would fault someone who's trying to cure cancer or save the most vulnerable people on earth for getting rich while doing that. 

That seems like a bizarre double standard, with respect to how we want to incentivize people. Because what we're really demanding is that someone come out of the most competitive school and, when faced with the choice of working for a hedge fund or working for a charity doing good in the world, also be someone who doesn't care about earning much money. So, we're sort of filtering for sainthood, or something like sainthood, among the most competent students at that stage, and that seems less than optimal. I don't know how you view that.

Will: Yeah. I think it's a real shame. Newspapers, every year, publish rankings of the top-paid charity CEOs, and it's regarded as kind of a scandal. The charity is therefore deemed ineffective. But what we should really care about, if we actually care about the potential beneficiaries — the people we're trying to help — is how much money we’re giving this organization [relative to] how much good comes out at the other end.

If it's the case that they can achieve more because they can attract a more experienced and able person to lead the organization by paying more, then we should encourage the charity to do that. Maybe that’s a sad fact about the world; it would be nice if everyone were able to be maximally motivated purely by altruism, but we know that's not the case. 

Some argue that this could create an arms-race dynamic, where if one organization starts paying more, then other organizations will need to pay more too, creating bloat in the system. I think that's the strongest case for the idea of low overhead when it comes to fundraising. If one organization is fundraising, perhaps, in part, they're increasing the total amount of charitable giving that happens, but they're also probably taking money away from other organizations. And so, it can be the case that when it comes to fundraising, a general norm of lower overheads is a good one.

But when it comes to charity pay, we're obviously radically far away from that. It also shows that people are thinking about charity in a fundamentally wrong way, at least [from the perspective of] an effective altruist; many people aren’t thinking about charity in terms of outcomes, but in terms of the virtues you demonstrate, or how much you sacrifice or something.

Ultimately, when it comes to these problems that we're facing — these terrible injustices, this horrific suffering — I don't really care whether the person who helps is virtuous or not. I just want the suffering to stop. I just want people to be helped. And as long as [the people who are helping] don’t do harm along the way, I don't think it really matters whether they’re paid a lot, or a little.

Sam: I think we should say something about the other side of this equation, which tends to get emphasized in most people's thinking about being good: the consumer-facing side. [This side is about] not contributing to obvious harms in a way that is egregious, or dialing down one's complicity in the unacceptable status quo as much as possible. It includes things like becoming a vegetarian or a vegan, or avoiding certain kinds of consumerism based on concern about climate change.

How effective is ethical consumerism?

There's a long list of causes that people get committed to, more in the spirit of negating certain bad or polluting behavior, rather than focusing on what they're in fact doing to solve problems, or in fact giving to specific organizations. Is there any general lesson to be drawn from the results of these efforts on both fronts? How much does harm avoidance, as a consumer, add to the scale of merit here? What's the longest lever we can pull, personally?

Will: I think there are a few things to say. Right at the start [of this conversation], I mentioned that one of the key insights of effective altruism is this idea that different activities can vary by a factor of 100 or 1,000 in terms of how much impact they have. Even within ethical consumerism, I think that happens. 

So, if you want to cut out most animal suffering from your diet, I think you should cut out eggs, chicken, and pigs. Maybe fish. Whereas beef and milk, I think, are comparatively small factors. If you want to reduce your carbon footprint, then giving up beef and lamb, reducing transatlantic flights, and reducing how much you drive make significant differences. Those actions have dozens of times as much impact as things like recycling, upgrading light bulbs, or reusing plastic bags.

From a purely consequentialist, outcome-based perspective, I think it is systematically the case that these ethical consumer behaviors are small in terms of their impact, compared to the impact that you can have via your donations or your career. The reason is that there's just a very limited range of things that you can do by changing your consumption behavior: you're limited to altering the things you were buying anyway. Whereas, if you're donating or you're choosing a career, then you can choose the very most effective things to be doing.

Take the case of being vegetarian. I've been a vegetarian for 15 years now. I have no plans of stopping that. But if I think about how many animals I'm helping in the course of a year as a result of being vegetarian, it doesn’t compare to the effectiveness of the very most effective animal welfare charities (which typically run what are called “corporate campaigns”). It turns out that the most effective way we know of to reduce the number of hens in factory farms, laying eggs in the most atrocious, terrible conditions of suffering, seems to be via campaigns that push large retailers to change the eggs they purchase in their supply chains. You can actually get a lot of leverage there, and the figures are just astonishing: for every dollar you spend on these campaigns, you prevent the significant torture of something like 50 animals. 

So, if you just do the maths, the amount of good you do by becoming vegetarian is equivalent to the amount of good you do by donating a few dollars to the very most effective campaigns.

I think [the outcome is] similar for reducing your carbon footprint. My current favorite climate change charity, Clean Air Task Force — which lobbies the US government to improve its regulations around fossil fuels, and promotes energy innovation as well — probably reduces CO2 emissions by about one ton per dollar. An average US citizen emits about 16 tons of carbon dioxide equivalent per year. If you did all of the most effective things, like cutting out meat and all of your transatlantic flights, getting rid of your car, and so on, you might be able to reduce that to six tons. Saving those 10 tons accomplishes about the same as giving $10 to these most effective charities. So, it just does seem that [donating to certain charities is] much more powerful, in terms of outcomes.

The next question, philosophically, is whether you have some non-consequentialist reason to do these things. There, I think, it differs. I think the case is much stronger for becoming vegetarian than for climate change, because if I buy factory-farmed chicken and then donate to a corporate campaign, I probably harmed different chickens, and you can't offset the harm to one individual by a benefit to another individual. But if I have a lifetime of emissions, but at the same time donate a sufficient amount to climate change charities, I've probably just reduced the total amount of CO2 going into the atmosphere over the course of my lifetime. And there isn't anyone who's harmed, in expectation at least, by the entire course of my life. It's not like I'm trading a harm to one person for a benefit to another. 

But these are quite subtle issues, when we get onto these non-consequentialist reasons.

Technological innovation as an EA cause area

Sam: Yeah. There are also ways in which the business community, and innovation in general, can come to the rescue here. For instance, there's a company called Memphis Meats (I believe the name is going to be changed) that is spearheading a revolution in what's called “cultured meat” or “clean meat.” They take a single cell from an animal and amplify it so that no animals are killed in the process of making steaks, meatballs, or chicken cutlets. They're trying to bring this to scale. I had the CEO, Uma Valeti, on my podcast a few years ago, and actually invested in the company, along with many other people. And hopefully, this will bear fruit.

That's an example of something that was unthinkable some years ago. But we might suddenly find ourselves living in a world where you can buy steak, and hamburger meat, and pork and chicken, without harming any animals. It may also have other significant benefits, like cutting down on viruses, and that connects with the pandemic risk issue. Our factory farms are wet markets of another sort.

And so it is with climate change. On some level, we're waiting and expecting technology to come to the rescue here, such that we’re bringing down the cost of renewable energy to the point where there is literally no reason to be using fossil fuels, or [building] a new generation of nuclear reactors that don't have any of the downsides of the old ones. 

This connects to the concern I had around the fancy lifeboat. We have to do the necessary things in our lifeboat that allow for those kinds of breakthroughs, because those are, in many cases, the solutions that fundamentally take away the problem, rather than merely mitigate it.

Will: I totally agree. In the case of trying to alleviate animal suffering, I think that funding research into clean meats is plausibly the best thing you can do. It's hard to make the comparison with the more direct corporate campaigns, but it is plausibly the best.

In the case of climate change, I've recently been convinced that the most effective thing we can be doing is to promote clean energy innovation. This is another example of importance versus neglectedness. You mentioned renewables, and they're a really key part of the solution. But other areas are notably more neglected. For example, carbon capture and storage, where you're capturing CO2 as it emerges from fossil fuel power plants, and nuclear power get quite a small amount of funding compared to solar and wind, even though the Intergovernmental Panel on Climate Change thinks that they're also a very large part of the solution.

But here, I think, the distinction is between focusing on issues in rich countries in order to benefit people in those countries, versus focusing on them as a means to some broader benefit. You might be donating money to support something in a rich country like the US, but not because you're trying to benefit people in the US. You do it because you're trying to benefit the world.

So maybe you’re funding a clean meat startup, or research on low-carbon forms of energy. And that research might happen in the US, which is still the world's research leader. That's very justified, even though the US partly benefits. But it’s also global, and it affects future generations, too. You're influencing, as it were, the people who are in positions of power, who have the most influence over how things will go in the future.

The case for Giving What We Can

Sam: Okay. Next, let's talk about how we build effective altruism into our lives, and how to make this as personally actionable for people as we can. 

So, we've sketched out the basic framework of effective altruism and how we think about systematically evaluating various causes — how we think about priorities with respect to things like actual outcomes versus a good story. And we've referenced a few things that are now in the “effective altruism canon,” like giving a minimum of 10% of one's income a year. And if I'm not mistaken, you just [chose 10% because it’s] a nice, round number that people have some traditional associations with. In religious communities, there's a notion of tithing that amount. And it seems neither so large as to be impossible to contemplate, nor so small as to be ineffectual. 

Maybe let's start there: Am I right in thinking that 10% was kind of pulled out of a hat, but seemed like a good starting point — and that there's nothing about it that's carved in stone, from your point of view?

Will: Exactly. It's not a magic number, but it's in a “Goldilocks zone” [it’s neither too big nor too small]. Toby Ord [one of effective altruism’s co-founders] originally had the thought that he would be promoting what he calls “the further pledge,” which is where you set a cap on your income and give everything above that. But it seems pretty clear that if you'd been promoting that, very few people would have joined him. We do have a number of people who've taken the further pledge, but it's a very small minority of the 5,000 members we have. 

On the other hand, if we were promoting a 1% pledge, we're probably not changing people's behavior compared to how much they donate anyway. In the UK, people donate, on average, 0.7% of their income. In the US, if you include educational donations and church donations, people donate about 2% of their income.

So if I was saying, “Oh, we should donate 1%,” probably those people would have been giving 1% anyway. Therefore, we thought 10% is in this Goldilocks zone. And as you say, it has a long history where, for religious reasons, people much poorer than us in earlier historical epochs have been able to donate 10%. We also have 10 fingers. It's a nice, round number. But you know, many people who are a part of the effective altruism community donate much more than that. And many people who are firm proponents don't donate that much; they do good in other ways instead.

Sam: It's interesting to consider the psychology of this, because I can imagine many people entertaining the prospect of giving 10% of their money away and thinking, “Well, I could easily do that if I were rich, but I can't do that now.” And I can imagine many rich people thinking, “Well, that's a lot of money! I'm making a lot of money, and you're telling me that year after year after year, I'm going to give 10% away. That's millions of dollars a year.”

So it could be that there's no point on the continuum of earning where, if you're of a certain frame of mind, it's going to seem like a Goldilocks value. You either feel too poor or too rich, and there's no sweet spot. 

Or, to flip that around, you can recognize that however much money you're making, you can always give 10% to the most effective ways of alleviating suffering. Once you have this epiphany, you can always find that 10%. And if you're not making much money, obviously 10% will be a small amount. And if you're making a lot of money, it'll be a large amount. But it's almost always the case that there's 10% of fat to be found. 

So, when you came up with that percentage, did you have thoughts about the psychology of someone who doesn’t feel immediately comfortable with the idea of making such a commitment?

Will: Yeah. I think there are two things I'd like to say to that person. One is a somewhat direct argument. The second is more pragmatic. 

The direct argument is this: Even if you feel like you could only donate that amount if you were rich, if you're listening to this, you probably are rich. If you're single and you earn $66,000 a year, then you're in the richest 1% of the world's income distribution. And what's more, even after donating 10% of your income, you would still be in the richest 1% of the world's population. If you earn $35,000, which we would not think of as a rich person's income, even after donating 10%, you’d still be in the richest 5% of the world's population. Learning those facts was very motivating for me when I first started thinking about my giving.

The more pragmatic argument is to think that during most stages of your life, you will be earning more in the future than you are now. People's incomes tend to increase over time. You might just reflect and ask yourself, “How do I feel about money at the moment?” Perhaps you're in a situation where you're actually fairly worried — there are serious health issues or something. In that case, take care of that first. But if you're in a position where you don't think additional money will make that much of a difference, then you can think, “Okay, maybe I'm not going to give a full 10% now, but I'll give a very significant proportion of the additional money I make — for example, any future raises.” Maybe you decide you’ll give 50% of that amount. And after that, you’ll probably still increase the amount you're earning over time. 

And if you do that, then in a few years, you'll probably end up giving 10% of your overall income. So at no point in this plan do you ever have to go backwards, as it were, living on less. In fact, you're always earning more, yet you're giving more at the same time. I've certainly found that in my own life: I started thinking about giving as a graduate student, and I now live on more than twice as much as I did when I first started giving, but I'm also able to give a significant amount of my income.

Sam: Remind me: How have you approached this personally? Because you haven't just taken a minimum 10% pledge. You think of it differently. So what have you done over the years?

Will: Yeah. So I have taken the Giving What We Can pledge, which [entails giving at least] 10%. I also made a plan to donate everything above the equivalent of £20,000 per year in 2009, which is now about £27,000 per year. I've never written this down as a formal pledge, the reason being that there were just too many possible exceptions. If I had kids, I’d want to decrease that; or if there were situations where I thought my ability to do good in the world would be severely hindered, I'd want to avoid that. But that is the amount that I'm giving at the moment, and it's the amount I plan to give for the rest of my life.

Sam: Just so I understand that: You're giving anything you make above £27,000 a year to charity.

Will: Yeah, that's post-tax. My income is a bit complicated in terms of how you evaluate it because it includes my university income, as well as book sales and so on. And there are things like speaking engagements that I don’t take. So, I give a little over 50%.

Sam: Okay. So I want to explore that with you a little bit, because I'm returning to our fancy lifeboat and wondering just how fancy it can be in a way that's compatible with the project of doing the most good in the world. And what I detect in myself and in most of the people I meet — and I'm sure this is an intuition that is shared by many of our listeners — is that many people are reluctant to give up on the aspiration to be wealthy (with everything that that implies). 

Obviously, they want to work hard and make their money in a way that is good for the world, or at least benign. They can follow all of the ethical arguments that would say choosing the right livelihood, in some sense, is important. But if people really start to succeed in life, I think there's something that will strike many people, if not most, as too abstemious and monkish about the lifestyle you're advertising, in choosing to live on that amount of money and give away everything above it, or even give away 50% of one's income.

And again, I think this does connect with the question of effectiveness. It's at least possible that you would be more effective if you were wealthy, and living with all that entails. Take someone like Bill Gates. He is obviously the most extreme example I could find, because he's still one of the wealthiest people on earth. I think he's the second-wealthiest, perhaps. And it’s been well-established that he's probably the biggest benefactor of charity in human history. He has funded the Gates Foundation to, perhaps, the tune of tens of billions of dollars at this point. And I'm sure he has spent a ton of money on himself and his family. His life is probably filled to the brim with luxury, but his indulgence in luxury is still just a rounding error relative to the amount of money he's giving away.

It’s hard to run a counterfactual, but I'd be willing to bet that Gates would be less effective and less wealthy — and have less money to give away if he were living like a monk. And I think, maybe more importantly, his life would be less inspiring to many other wealthy people. If Bill Gates said, “Listen, I'm living on $50,000 a year and giving all my money away to charity,” that wouldn't have the same kind of kindling effect that I think his life, to this point, has had. You really can have your cake and eat it too. You can be a billionaire who lives in a massive smart house with all of the sexy technology, and even fly around on a private jet, and be the most charitable person in human history.

And think of the value of his time. If he were living a more abstemious life, just imagine Bill Gates spending an hour bargain hunting and trying to save $50 on a new toaster oven. It would be such a colossal waste, given the value of his time. Again, I don't have any specifics around how to think about this counterfactual. But this [gets at] a point that you actually made in our first conversation, I believe: You don't want to be an antihero in any sense. If you can inspire only one other person to give at the level that you're giving, you have doubled the good you can do in the world.

So on some level, you want your life to be the most compelling advertisement for this whole project. And I'm just wondering what changes we would want to make to Bill Gates' life at this point to make him an even more inspiring advertisement for effective altruism to other very, very wealthy people. It might be dialing down certain things, but given how much good he's able to do, him buying a fancy car doesn't even register in terms of actual allocation of resources. 

Will: Yeah. Terrific. I think there are three different strands [of your thinking that] I'd like to pick apart. The first is whether everyone should be like me. I really don't want to make that claim. I certainly don't want to say, “Well, I can do this thing, so everyone else can,” because I'm in a position of such utter privilege. I was born into a middle-class family in a rich country, was privately educated, and studied at Cambridge, then Oxford. I’m tall and male and white. I have inexpensive tastes; my ideal day involves sitting on a couch, drinking tea, reading some interesting new research, and perhaps going swimming. Also, I have amazing benefits by virtue of the work that I do. I meet incredibly varied, interesting people. 

So, I don't think I could stand here and say, “Well, everyone should do the same as me,” because I think I've had it so easy. If I think about the sacrifices I have made, or the things I’ve found hard over the course of 10 years, it has been things like being on the Sam Harris podcast, doing a TED Talk, or meeting wealthy, important people — things that might cause anxiety — versus financial sacrifices. But I recognize there are other people for whom money really matters. And I think that in part, you're born with a set of preferences, or perhaps they're molded early on in childhood, and you don't necessarily have control over them. I’m sort of an outlier.

Second, there’s the time value of money. This is something I've really wrestled with, because it is simply the case that in terms of my personal impact, my donations have played a very small part. We’ve been successful. Giving What We Can is moving $200 million this year, and has over $1.5 billion of pledged donations. The effective altruism movement as a whole has over $10 billion of assets that will be distributed. And then I'm donating my thousands of pounds per year. My donation is clearly small on a [relative basis]. That's definitely something I've wrestled with.

I don't think I lose enormous amounts of time. My guess is that it's maybe a few days a year. For my work, I have an assistant. Business trips count as expenses; I keep my personal money separate. There are some things you can't do. For example, if you live close to your office, you can’t count that as a business expense, but it would shorten your commute. So it's not a perfect approach. And I do think there's an argument against it, and there’s definitely reason for caution around making a very large commitment. 

And then the final aspect to consider is what sort of message you want to send. My guess is that you’d want a bit of market segmentation, where some people could perhaps show what can be done [on a small income]. Others can show that actually, you can have this amazing life without having to wear a hair shirt, and so on. Perhaps you could convince me that I'm sending the wrong message and would do more good if I’d taken another pledge. Maybe you would be right about that. When I made my plans, I wasn't thinking things through quite as carefully as I am now. But I did want to show a proof of concept.

How to engage the ultra-wealthy without stigma

Sam: Okay. I guess I'm wondering if there's a path through this wilderness that doesn't stigmatize wealth at all. I mean, the end game for me in the presence of absolute abundance is everyone gets to live like Bill Gates on some level. If we make it to the 22nd century and we've solved the AI alignment problem, we’ll be pulling wealth out of the ether. Essentially, if we could have Deutsch's universal constructors building every machine, atom by atom, and we could more or less do anything we want, then this can't be based on an ethic where wealth is stigmatized. 

What should have opprobrium attached to it is a total disconnection from the suffering of other people and comfort with the more shocking disparities in wealth that we see all around us. Once a reasonably successful person signs on to the effective altruism ethic and begins thinking about his or her life in terms of earning to give, on some level, we could see a flywheel effect, where one's desire to be wealthy actually amplifies one's commitment to giving. In part, the reason why you would continue working is because you have an opportunity to give so much money away and do so much good. It kind of purifies one’s earning in the first place. 

I can imagine most wealthy people get to a point where they're making enough money so that they don't have to worry about money anymore. And then there's this question: Why am I making all of this money? Why am I still working? And the moment they decide to give a certain amount of money away a year, just algorithmically, then they might feel that if that amount keeps going up, that is a good thing. So, I can get out of bed in the morning and know that today if I’m donating 10%, one day out of 10 is given over wholly to solving the worst suffering, saving the most lives, or mitigating the worst long-term risk. And if it's 20%, it's two days out of 10; and if it's 30%, it's three days out of 10. They could even dial it up. 

Let's say somebody is making $10 million a year and thinks, “Okay, I can sign on and give 10% of my income away to charity. That sounds like the right thing to do.” And he's persuaded that this should be the minimum. But he then aspires to scale it up as he earns more money. Maybe this would be the algorithm: For each additional million he makes in a year, he adds a percentage point. So if he's making $14 million one year, he'll give 14% of his income away. And if it's $50 million, he'll give 50% away. And obviously, if the minimum he wants to keep is, say, $9 million a year, then he can give up to 91% of $100 million a year.

I can imagine being a very wealthy person who, as you're scaling one of these outlier careers, would find it fairly thrilling to be the person who's making $100 million that year, knowing that you're going to give 91% of that away to the most effective charities. And you might not be the person who would have seen any other logic in driving to that kind of wealth when you were making $10 million a year, because $10 million a year was good enough. Obviously, you can live on that! You know that nothing materially is going to change for you as you make more money, but because you plugged into the concept of earning to give, in some ways the greater commitment to earning is leveraged by a desire to maintain a wealthy lifestyle. I guess this person does want $9 million a year, but now they're much wealthier than that — and giving away much more money. 

I'm just trying to figure out how we can capture the imagination of people who would see the example of Bill Gates and say, “Okay, that's the sweet spot,” as opposed to any kind of example that, however subtly, stigmatizes being wealthy in the first place.

Will: Hmm. Yeah. I think these are good points and it's true, I think, that the stigma around wealth, per se, is not a good thing. If you build a company that's doing good stuff, and people like the product and get value from it, and you get wealthy as a result of that, that's a good thing. Obviously there are some people who make enormous amounts of money doing bad things, like selling opioids or building factory farms. But I don't think that's the majority. 

It's kind of like optimal taxation theory, but the weird thing is that you're imposing the tax on yourself. Depending on your psychology, if you say, “I'm going to give 100% as the highest tax rate,” you're not incentivized to earn any more. And so, the precise amount that you want to give is quite sensitive to this question of how motivated you’ll be to earn more. 

In my own case, it's very clear that the way I'm going to do good is not primarily via my donations. So perhaps this disincentive effect is not very important. But if my aim were to get as rich as possible, then I would need to look inside my own psychology to figure out how much, especially over the entire course of my life, I can be motivated by pure altruism versus self-interest. And I strongly doubt that the optimal tax rate, via my donations, would be 100%. It would be something in between.

Sam: That's what I'm fishing for here. And by no means am I convinced that I'm right. But I'm wondering if, in addition to all of the other things you want for yourself and the world, as revealed by this conversation, your primary contribution to doing good in the world might in fact be your ideas and your ability to get them out there. You've had an effect on me and I'm going to have my effect on my audience, and other conversations like this have an effect. And so there's no question that you are inspiring people to marshal their resources in these directions and think more clearly about these issues. 

But what if it were also the case that you secretly really wanted to own a Ferrari? You would actually make different decisions, such that in addition to all of the messaging, you would become a very wealthy person who could give away a lot of money.

Will: Yeah. That could be the case if I were planning to earn to give. I think a fairly common figure for people who are going to earn to give via entrepreneurship, or through other high-end careers, is 50%. They plan to give half of what they earn, at least once they start earning a significant amount. That has seemed to work pretty well, based on the people I know. It's also, notably, the figure that Bill Gates uses for his Giving Pledge, where billionaires join the Giving Pledge if they commit to give at least 50% of their wealth.

Sam: Most of that pledge, if I'm not mistaken, is pushed off to the end of their life. They're just imagining that they're going to give it to charity upon their death.

Will: You are allowed to do that. I don't know the proportions. It varies. Tech founders tend to give earlier than other sorts of people. I'm actually a bit confused about what pledging 50% of your wealth means. If I am a billionaire one year, and then I lose half my money, and then I have $500 million the next year, do I have to give half of that? Or do I have to give half of the amount when I pledged, which would have been all of my money? The details of it confuse me a bit.

Anyway, it is the case that you can fulfill your pledge completely by donating entirely after your death. And there are questions about how often people actually fulfill these pledges. But I really do want to say that that's also quite reasonable. Different people have different attitudes toward money.

I think it's a very rare person indeed who can be entirely motivated by pure altruism at all times, because we're talking about motivation over decades. We're talking about every single day. I think that's very hard. And if someone instead wants to pick a percentage to give, that seems like a sensible way to go. And you want to sustain [your giving]. If moving from, say, 50% to 60% means that your desire to give burns out and you go do something else, that's fairly bad indeed. I think you want to avoid having an attitude towards giving that makes you feel or say, “Oh yeah, I'm giving this amount, but it's just so hard. And I really don't like my life.” Giving shouldn’t be unpleasant. That is not an inspiring message. 

Julia Wise, who is a wonderful member of the effective altruism community, has a wonderful blog post called Cheerfully where she talks about having kids and [proposes that] what you want is to model feeling that your life is great, and you can say, “I'm able to [donate] and I'm still having a really wonderful life.” That's certainly how I feel about my life. And for many people who are going into these higher-earning careers, [the goal may be to] say, “I'm donating 50%, and my life is still absolutely awesome. In fact, it's better as a result of the amount that I'm donating.” That's the sweet spot that I think you want to hit.

The case for public giving

Sam: There's another issue around how public to be about one's giving. You and I are having a public conversation about all of this, and we’re violating a norm (or a pseudo-norm) that we've all inherited around generosity and altruism. That norm suggests that the highest form of generosity is to give anonymously. There's a Bible verse about how you don't want to wear your virtue on your sleeve. You don't want to advertise your generosity, because that conveys that you're doing it for reasons of self-aggrandizement — to enhance your reputation, or because you want your name on the side of a building. If you were really just connected to the cause of doing good, [the common narrative goes,] you would do all of it silently. People would find out after your death (or maybe never) that you were the one who had secretly donated millions of dollars to cure some terrible disease, or to buy bednets.

And yet you and I have flipped that ethic on its head, because it seems to be important to change people's thinking, and the only way to do that is to really discuss these issues. And what's more, we're leveraging a concern about reputation from the opposite side, by recognizing that taking a pledge has psychological consequences. When you publicly commit to do something, it not only advertises to people that this is the sort of project a human being can become enamored of, but also gives you a reputational cost to worry about should you renege on your pledge. 

So talk for a few minutes about the significance of [openly discussing giving].

Will: Yeah. I think the public aspect is very important for the reason that you mentioned earlier: Take the amount of good that you're going to do in your life via donations, and then ask, “Can I convince one other person to do the same?” If so, you've doubled your impact and you've done your life's work over again. And I think people can possibly do that many times over, at least in the world today, by being an inspirational role model for others. 

And so I think this religious tradition of keeping your generosity a secret looks pretty bad from an outcome-oriented perspective. But I think you need to be careful about how you're doing it. You want to be effective in your communication, as well as your giving. 

It is notable that Peter Singer made these arguments around giving for almost four decades with comparatively little uptake, certainly compared to the last 10 years of the effective altruism movement. And my best hypothesis is that a framing that appeals to guilt lowers motivation. You don't often start doing things on the basis of guilt. We’ve moved to [messaging centered on] inspiration and say, “No, this is an amazing opportunity we have.” 

This is a norm that I really want to change in the long run. I would like it to become common sense that you use a significant part of your resources to help other people. And we will only have that sort of cultural change if people are public about what they're doing, and are able to say, “Yes, this is something I'm doing. I'm proud of it. I think you should consider doing it too.” This is the world I want to see.

Sam: Well, you have certainly gotten the ball rolling in my life, and it's something that I'm immensely grateful for. And I think this is a good place to leave it. Perhaps we can build out further lessons just based on frequently asked questions that come in in response to what we've said here. I think that will be the right way to proceed.

In the meantime, thank you for doing this. I think you're aware of how many people you're affecting, but it's still early days. And I think it will be very interesting to see where all of this goes. I know what it's like to experience a tipping point around these issues. And I have to think that many people listening to us will have a similar experience, one day or another, and you will have occasioned it, so thank you for what you're doing.

Will: Well, thank you for taking the pledge and getting involved. I'm excited to see how these ideas develop over the coming years.