A relatively atheoretical perspective on astronomical waste
By Nick_Beckstead @ 2014-08-06T00:55 (+9)
Crossposted from the Global Priorities Project
Introduction
It is commonly objected that the “long-run” perspective on effective altruism rests on esoteric assumptions from moral philosophy that are highly debatable. Yes, the long-term future may dominate calculations of aggregate welfare, but does it follow that the long-term future is overwhelmingly important? Do I really want my plan for helping the world to rest on the assumption that the benefit from allowing extra people to exist scales linearly with population, even when very large numbers of extra people are at stake?

In my dissertation on this topic, I tried to defend the conclusion that the distant future is overwhelmingly important without committing to a highly specific view about population ethics (such as total utilitarianism). I did this by appealing to more general principles, but I did end up delving pretty deeply into some standard philosophical issues related to population ethics. And I don’t see how to avoid that if you want to independently evaluate whether it’s overwhelmingly important for humanity to survive in the long-term future (rather than, say, just deferring to common sense).
In this post, I outline a relatively atheoretical argument that affecting long-run outcomes for civilization is overwhelmingly important, and attempt to side-step some of the deeper philosophical disagreements. It won’t be an argument that preventing extinction would be overwhelmingly important, but it will be an argument that other changes to humanity’s long-term trajectory overwhelm short-term considerations. And I’m just going to stick to the moral philosophy here. I will not discuss important issues related to how to handle Knightian uncertainty, “robust” probability estimates, or the long-term consequences of accomplishing good in the short run. I think those issues are more important, but I’m just taking on one piece of the puzzle that has to do with moral philosophy, where I thought I could quickly explain something that may help people think through the issues.
In outline form, my argument is as follows:
1. In very ordinary resource conservation cases that are easy to think about, it is clearly important to ensure that the lives of future generations go well, and it’s natural to think that the importance scales linearly with the number of future people whose lives will be affected by the conservation work.
2. By analogy, it is important to ensure that, if humanity does survive into the distant future, its trajectory is as good as possible, and the importance of shaping the long-term future scales roughly linearly with the expected number of people in the future.
3. Premise (2), when combined with the standard set of (admittedly debatable) empirical and decision-theoretic assumptions of the astronomical waste argument, yields the standard conclusion of that argument: shaping the long-term future is overwhelmingly important.
A review of the astronomical waste argument and an adjustment to it
The standard version of the astronomical waste argument runs as follows:
1. The expected size of humanity's future influence is astronomically great.
2. If the expected size of humanity's future influence is astronomically great, then the expected value of the future is astronomically great.
3. If the expected value of the future is astronomically great, then what matters most is that we maximize humanity’s long-term potential.
4. Some of our actions are expected to reduce existential risk in not-ridiculously-small ways.
5. If what matters most is that we maximize humanity’s future potential and some of our actions are expected to reduce existential risk in not-ridiculously-small ways, what it is best to do is primarily determined by how our actions are expected to reduce existential risk.
6. Therefore, what it is best to do is primarily determined by how our actions are expected to reduce existential risk.
The adjustment I have in mind is to replace premises (4) through (6) with analogous claims about trajectory changes:
4’. Some of our actions are expected to change our development trajectory in not-ridiculously-small ways.
5’. If what matters most is that we maximize humanity’s future potential and some of our actions are expected to change our development trajectory in not-ridiculously-small ways, what it is best to do is primarily determined by how our actions are expected to change our development trajectory.
6’. Therefore, what it is best to do is primarily determined by how our actions are expected to change our development trajectory.
The basic thought here is that what the astronomical waste argument really shows is that considerations about the welfare of future people swamp short-term considerations, so that consequences for the distant future are overwhelmingly important in comparison with purely short-term considerations (apart from whatever long-term consequences those short-term outcomes may themselves produce).

Astronomical waste may involve changes in quality of life, rather than size of population
Often, the astronomical waste argument is combined with the idea that the best way to minimize astronomical waste is to minimize the probability of premature human extinction. How important it is to prevent premature human extinction is a subject of philosophical debate, and the debate largely rests on whether it is important to allow large numbers of people to exist in the future. So when someone complains that the astronomical waste argument rests on esoteric assumptions about moral philosophy, they are implicitly objecting to premise (2) or (3). They are saying that even if humanity's influence on the future is astronomically great, maybe changing how well humanity exercises its long-term potential isn’t very important, because maybe it isn’t important to ensure that there are a large number of people living in the future.

However, the concept of existential risk is wide enough to include any drastic curtailment of humanity’s long-term potential, and the concept of a “trajectory change” is wide enough to include any small but important change in humanity’s long-term development. And the importance of these existential risks or trajectory changes need not depend on changes in the size of the future population. For example,
- In “The Future of Human Evolution,” Nick Bostrom discusses a scenario in which evolutionary dynamics result in substantial decreases in quality of life for all future generations, and the main problem is not a population deficit.
- Paul Christiano outlined long-term resource inequality as a possible consequence of developing advanced machine intelligence.
- I discussed various specific trajectory changes in a comment on an essay mentioned above.
There is limited philosophical debate about the importance of changes in the quality of life of future generations
The main group of people who deny that it is important that future people exist have “person-affecting views.” These people claim that if I must choose between outcome A and outcome B, and person X exists in outcome A but not outcome B, it’s not possible to affect person X by choosing outcome A rather than B. Because of this, they claim that causing people to exist can’t benefit them and isn’t important. I think this view suffers from fatal objections which I have discussed in chapter 4 of my dissertation, and you can check that out if you want to learn more. But, for the sake of argument, let’s agree that creating “extra” people can’t help the people created and isn’t important.

A puzzle for people with person-affecting views goes as follows:
Suppose that agents as a community have chosen to deplete rather than conserve certain resources. The consequences of that choice for the persons who exist now or will come into existence over the next two centuries will be “slightly higher” than under a conservation alternative (Parfit 1987, 362; see also Parfit 2011 (vol. 2), 218). Thereafter, however, for many centuries the quality of life would be much lower. “The great lowering of the quality of life must provide some moral reason not to choose Depletion” (Parfit 1987, 363). Surely agents ought to have chosen conservation in some form or another instead. But note that, at the same time, depletion seems to harm no one. While distant future persons, by hypothesis, will suffer as a result of depletion, it is also true that for each such person a conservation choice (very probably) would have changed the timing and manner of the relevant conception. That change, in turn, would have changed the identities of the people conceived and the identities of the people who eventually exist. Any suffering, then, that they endure under the depletion choice would seem to be unavoidable if those persons are ever to exist at all. Assuming (here and throughout) that that existence is worth having, we seem forced to conclude that depletion does not harm, or make things worse for, and is not otherwise “bad for,” anyone at all (Parfit 1987, 363). At least: depletion does not harm, or make things worse for, and is not "bad for," anyone who does or will exist under the depletion choice.

The seemingly natural thing to say if you have a person-affecting view is that because conservation doesn’t benefit anyone, it isn’t important. But this is a very strange thing to say, and people having this conversation generally recognize that saying it involves biting a bullet. The general tenor of the conversation is that conservation is obviously important in this example, and people with person-affecting views need to provide an explanation consonant with that intuition.
Whatever the ultimate philosophical justification, I think we should say that choosing conservation in the above example is important, and this has something to do with the fact that choosing conservation has consequences that are relevant to the quality of life of many future people.
Intuitively, giving N times as many future people higher quality of life is N times as important
Suppose that conservation would have consequences relevant to 100 times as many people in case A as it would in case B. How much more important would conservation be in case A? Intuitively, it would be 100 times more important. This generally fits with Holden Karnofsky’s intuition that a 1/N probability of saving N lives is about as important as saving one life, for any N:

“I wish to be the sort of person who would happily pay $1 for a robust (reliable, true, correct) 1/N probability of saving N lives, for astronomically huge N - while simultaneously refusing to pay $1 to a random person on the street claiming s/he will save N lives with it.”

More generally, we could say:
Principle of Scale: Other things being equal, it is N times better (in itself) to ensure that N people in some position have higher quality of life than other people who would be in their position than it is to do this for one person.
I had to state the principle circuitously to avoid saying that things like conservation programs could “help” future generations, because according to people with person-affecting views, if our "helping" changes the identities of future people, then we aren't "helping" anyone and that's relevant. If I had said it in ordinary language, the principle would have said, “If you can help N people, that’s N times better than helping one person.” The principle could use some tinkering to deal with concerns about equality and so on, but it will serve well enough for our purposes.

The Principle of Scale may seem obvious, but even it is debatable. You wouldn’t find philosophical agreement about it. For example, some philosophers who claim that additional lives have diminishing marginal value would claim that in situations where many people already exist, it matters much less whether a person is helped. I attack these perspectives in chapter 5 of my dissertation, and you can check that out if you want to learn more. But, in any case, the Principle of Scale does seem pretty compelling, especially if you’re the kind of person who doesn’t have time for esoteric debates about population ethics, so let’s run with it.
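To make the arithmetic behind the Principle of Scale and the 1/N intuition explicit, here is a minimal sketch in expected-value terms. It is purely illustrative and not part of the argument above; the symbol v (the value of giving one person a higher quality of life) is an assumption introduced only for this sketch.

```latex
% Illustrative only: v = value of giving one person a higher quality of life.
% Principle of Scale (linear scaling in the number of people affected):
\[
  V(\text{$N$ people helped}) = N \cdot v
\]
% A robust 1/N chance of helping N people then has expected value
\[
  \mathbb{E}[V] = \frac{1}{N}\,(N \cdot v) + \left(1 - \frac{1}{N}\right)\cdot 0 = v,
\]
% which is the same as the value of helping one person for sure, for any N.
```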
Now for the most questionable steps: Let’s assume with the astronomical waste argument that the expected number of future people is overwhelming, and that it is possible to improve the quality of life for an overwhelming number of future people through forward-thinking interventions. If we combine this with the principle from the last paragraph and wave our hands a bit, we get the conclusion that shifting quality of life for an overwhelming number of future people is overwhelmingly more important than any short-term consideration. And that is very close to what the long-run perspective says about helping future generations, though importantly different because this version of the argument might not put weight on preventing extinction. (I say “might not” rather than “would not” because if you disagree with the people with person-affecting views but accept the Principle of Scale outlined above, you might just accept the usual conclusion of the astronomical waste argument.)
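As a rough illustration of the hand-waving step, with purely hypothetical numbers that are not estimates from this post: suppose a short-term intervention benefits on the order of 10^3 people, while a successful trajectory change improves quality of life for an expected 10^20 future people (already discounted for the probability that humanity survives and that the intervention succeeds). Under the Principle of Scale, the comparison looks like this:

```latex
% Hypothetical, illustrative numbers only (not estimates from the post):
%   v        = value of raising one person's quality of life
%   N_short  = 10^3   people helped by a typical short-term intervention
%   N_future = 10^20  expected future people affected by a successful
%              trajectory change, after discounting for the probability
%              of success and of humanity surviving
\[
  \frac{V_{\text{trajectory}}}{V_{\text{short-term}}}
  = \frac{N_{\text{future}} \cdot v}{N_{\text{short}} \cdot v}
  = \frac{10^{20}}{10^{3}} = 10^{17}.
\]
% On these made-up numbers, the trajectory change comes out about 10^17
% times as important as the short-term intervention; this is the sense in
% which long-run considerations "overwhelm" short-term ones.
```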
Does the Principle of Scale break down when large numbers are at stake?
I have no argument that it doesn’t, but I note that (i) this wasn’t Holden Karnofsky’s intuition about saving N lives, (ii) it isn’t mine, and (iii) I don’t really see a compelling justification for it. The main reason I can think of for wanting it to break down is not liking the conclusion that affecting long-run outcomes for humanity is overwhelmingly important in comparison with short-term considerations. If you really want to avoid the conclusion that shaping the long-term future is overwhelmingly important, I believe it would be better to accommodate this idea by appealing to other perspectives, and to a framework for integrating the insights of different perspectives (such as the one that Holden has talked about), rather than by altering this perspective. If that describes you, my hope is that reading this post will cause you to put more weight on the perspectives that place great importance on the future.

Summary
To wrap up, I’ve argued that:
- Reducing astronomical waste need not involve preventing human extinction; it can also involve other changes in humanity’s long-term trajectory.
- While not widely discussed, the Principle of Scale is fairly attractive from an atheoretical standpoint.
- The Principle of Scale—when combined with other standard assumptions in the literature on astronomical waste—suggests that some trajectory changes would be overwhelmingly important in comparison with short-term considerations. It could be accepted by people who have person-affecting views or people who don’t want to get too bogged down in esoteric debates about moral philosophy.
undefined @ 2014-08-07T07:54 (+2)
Nice post. It's also worth noting that this version of the far-future argument appeals even to negative utilitarians, strongly anti-suffering prioritarians, Buddhists, antinatalists, and others who don't think it's important to create new lives for reasons other than holding a person-affecting view.
I also think even if you want to create lots of happy lives, most of the relevant ways to tackle that problem involve changing the direction in which the future goes rather than whether there is a future. The most likely so-called "extinction" event in my mind is human replacement by AIs, but AIs would be their own life forms with their own complex galaxy-colonization efforts, so I think work on AI issues should be considered part of "changing the direction of the future" rather than "making sure there is a future".