The Moral Value of the Far Future

By Holden Karnofsky @ 2014-07-03T12:43

This is a linkpost to https://www.openphilanthropy.org/blog/moral-value-far-future

Note: The Open Philanthropy Project was formerly known as GiveWell Labs. Before the launch of the Open Philanthropy Project Blog, this post appeared on the GiveWell Blog. Uses of “we” and “our” in the below post may refer to the Open Philanthropy Project or to GiveWell as an organization. Additional comments may be available at the original post.

A popular idea in the effective altruism community is that most of the people we can help (with our giving, our work, etc.) are people who haven’t been born yet. By working to lower global catastrophic risks, speed economic development and technological innovation, and generally improve people’s resources, capabilities, and values, we may have an impact that (even if small today) reverberates for generations to come, helping more people in the future than we can hope to help in the present.

This belief is sometimes coupled with a belief that the most important goal of an altruist should be to reduce “existential risk”: the risk of an extreme catastrophe that causes complete human extinction (as, for example, a sufficiently bad pandemic - or extreme unexpected developments related to climate change - could theoretically do), and thus curtails large numbers of future generations.

We are often asked about our views on these topics, and this post attempts to lay them out. There is not complete internal consensus on these matters, so I speak for myself, though most staff members would accept most of what I write here. In brief:

Those interested in related materials may wish to look at two transcripts of recorded conversations I had on these topics: a conversation on flow-through effects with Carl Shulman, Robert Wiblin, Paul Christiano, and Nick Beckstead, and a conversation on existential risk with Eliezer Yudkowsky and Luke Muehlhauser.

The importance of the far future

As discussed previously, I believe that the general state of the world has improved dramatically over the past several hundred years. It seems reasonable to state that the people who made contributions (large or small) to this improvement have made a major difference to the lives of people living today, and that when all future generations are taken into account, their impact on generations following them could easily dwarf their impact in their own time.

I believe it is reasonable to expect this basic dynamic to continue, and I believe that there remains huge room for further improvement (possibly dwarfing the improvements we’ve seen to date). I place some probability on global upside possibilities including breakthrough technology, space colonization, and widespread improvements in interconnectedness, empathy, and altruism. Even if these don’t pan out, there remains a great deal of room for further reduction in poverty and in other causes of suffering.

In Astronomical Waste, Nick Bostrom makes a more extreme and more specific claim: that the number of human lives possible under space colonization is so great that the mere possibility of a hugely populated future, when considered in an “expected value” framework, dwarfs all other moral considerations. I see no obvious analytical flaw in this claim, and give it some weight. However, because the argument relies heavily on specific predictions about a distant future, backed (as far as I can tell) by little other than speculation, I do not consider it “robust,” and so I do not consider it rational to let it play an overwhelming role in my belief system and actions. (More on my epistemology and method for handling non-robust arguments containing massive quantities here.)

In addition, if I did fully accept the reasoning of “Astronomical Waste” and evaluate all actions by their far-future consequences, it isn’t clear what implications this would have. As discussed below, given our uncertainty about the specifics of the far future, and our reasons to believe that doing good in the present day can have substantial impacts on the future as well, it seems possible that “seeing a large amount of value in future generations” and “seeing an overwhelming amount of value in future generations” lead to similar consequences for our actions.
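
To make the structure of Bostrom’s expected-value argument concrete, here is a minimal sketch with purely illustrative numbers of my own (they are not Bostrom’s figures). Suppose the potential future population under space colonization is $N = 10^{16}$ lives, and some action reduces the probability of premature human extinction by a mere $\Delta p = 10^{-9}$. Then the expected impact is

\[
\mathbb{E}[\text{future lives saved}] \;=\; \Delta p \times N \;=\; 10^{-9} \times 10^{16} \;=\; 10^{7},
\]

ten million lives in expectation, far beyond what a direct present-day intervention of comparable cost could plausibly claim. Note, though, that the conclusion is driven almost entirely by the assumed magnitude of $N$; this sensitivity to a single speculative input is precisely why I treat arguments of this form as non-robust.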

Catastrophic risk reduction vs. doing tangible good

Many people have cited “Astronomical Waste” to me as evidence that the greatest opportunities for doing good are in the form of reducing the risks of catastrophes such as extreme climate change, pandemics, problematic developments related to artificial intelligence, etc. Indeed, “Astronomical Waste” seems to argue something like this:

For standard utilitarians, priority number one, two, three and four should consequently be to reduce existential risk. The utilitarian imperative “Maximize expected aggregate utility!” can be simplified to the maxim “Minimize existential risk!”.

I have always found this inference flawed, and in my recent discussion with Eliezer Yudkowsky and Luke Muehlhauser, it was argued to me that the “Astronomical Waste” essay never meant to make this inference in the first place. The author’s definition of existential risk includes anything that stops humanity far short of realizing its full potential - including, presumably, stagnation in economic and technological progress leading to a long-lived but limited civilization. Under that definition, “Minimize existential risk!” would seem to potentially include any contribution to general human empowerment.

I have often been challenged to explain how one could possibly reconcile (a) caring a great deal about the far future with (b) donating to one of GiveWell’s top charities. My general response is that in the face of sufficient uncertainty about one’s options, and lack of conviction that there are good (in the sense of high expected value) opportunities to make an enormous difference, it is rational to try to make a smaller but robustly positive difference, whether or not one can trace a specific causal pathway from doing this small amount of good to making a large impact on the far future. A few brief arguments in support of this position:

For one who accepts these considerations, it seems to me that:

With that said:

Global catastrophic risk reduction as a promising area for philanthropy

I see global catastrophic risk reduction as a promising area for philanthropy, for many of the reasons laid out in a previous post:

I believe that declaring global catastrophic risk reduction to be the clearly most important cause to work on, on the basis of what we know today, would not be warranted. A broad variety of other causes could be superior under reasonable assumptions:

- Scientific research funding may be far more important to the far future, especially if global catastrophic risks turn out to be relatively minor, or if science turns out to be a key lever in mitigating them.
- Helping low-income people (including via our top charities) could be the better area to work in if our views regarding the far future are fundamentally flawed, or if opportunities to substantially mitigate global catastrophic risks turn out to be highly limited.
- Working toward better public policy could have major implications for both the present and the future, and knowledge of this area could be an important tool no matter what causes we end up working on.

More generally, by exploring multiple promising areas, we create better opportunities for “unknown unknown” positive developments and for the discovery of outstanding giving opportunities that are difficult to imagine given our current knowledge. (We will also become more broadly informed, something we believe will be very helpful in pitching funders on the best giving opportunities we can find - whatever those turn out to be.)