QB: How Much do Future Generations Matter?
By Richard Y Chappell @ 2024-10-18T15:22 (+26)
This is a linkpost to https://www.goodthoughts.blog/p/qb-how-much-do-future-generations
[#4 in my series of excerpts from Questioning Beneficence: Four Philosophers on Effective Altruism and Doing Good.][1]
How much should we care about future people? Total utilitarians answer, "Equally to our concern for presently existing people." Narrow person-affecting theorists answer, "Not at all," at least in a disturbingly wide range of cases.[2] I think the most plausible answer is something in between.
Person-Directed and Impersonal Reasons
Total utilitarianism is the view that we should promote the sum total of well-being in the universe. In principle, this sum could be increased either by improving people's lives or by adding more positive lives into the mix (without making others worse off). I agree that both of these options are good, but it seems misguided to regard them as equally good. If you see a child drowning, resolving to have an extra child yourself is not (contra total utilitarianism) an adequate substitute for saving the existing child. In general, we're apt to think, we have stronger reasons to make people happy than to make happy people.
On the other hand, the narrow person-affecting view can seem disturbing and implausibly extreme in its own way. Since it regards happy future lives as a matter of moral indifference, it implies that, if it would make us happier, it'd be worth preventing a future utopia by sterilizing everyone alive today and burning through all the planet's resources before the last of us dies off. Utopia is no better than a barren rock, on this view, so if faced with a choice between the two, we've no moral reason to sacrifice our own interests to bring about the former.
Our own value, and that of our children, is seen as merely conditional: given that we exist, it's better to make us better off, just as, once you have made a promise, you had better keep it. But there's no reason to make promises just in order to keep them: kept promises are not in themselves or unconditionally good. And narrow person-affecting theorists think the same of individual persons. Bluntly put: we are no better than nothing at all, on this bleak view.
Fortunately, we do not have to choose between total utilitarianism and the narrow person-affecting view. We can instead combine the life-affirming aspects of total utilitarianism with extra weight for those who exist antecedently. On a commonsense hybrid approach, we have both (1) strong person-directed reasons to care especially about the well-being of antecedently existing individuals, and (2) weaker impersonal reasons to improve the world by bringing additional good lives into existence. When the amount of value at stake is sufficiently large, even reasons of the intrinsically weaker kind may add up to be very significant indeed. This can explain why avoiding human extinction should be a very high priority on a wide range of reasonable, life-affirming views, without depending on anything as extreme as total utilitarianism.
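Schematically (and only schematically; the weight k below is an illustrative placeholder, not a worked-out theory), the hybrid approach can be pictured as a weighted total sitting between the two extremes:

```latex
% Schematic only. E = antecedently existing people, F = contingent
% future people, w = lifetime welfare, and 0 < k < 1 marks how much
% weaker the impersonal reasons are than the person-directed ones.
\[
  V \;=\; \sum_{i \in E} w_i \;+\; k \sum_{j \in F} w_j
\]
% k = 1 collapses into total utilitarianism; k = 0 collapses into the
% narrow person-affecting view; intermediate values give the hybrid.
```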
In Defense of Good Lives
There are three other common reasons why people are tempted to deny value to future lives, and they're all terrible. First, some worry that we could otherwise be saddled with implausible procreative obligations. Second, some think that it allows them to avoid the paradoxes of population ethics. And, third, some are metaphysically confused about how non-existent beings could generate reasons. Let's address these concerns in turn.
Imagine thinking that the only way to reject forced organ donation was to deny value to the lives of individuals suffering from organ failure. That would be daft. Commonsense morality grants us strong rights to bodily integrity and autonomy. However useful my second kidney may be to others, it is my body, and it would be supererogatory (above and beyond the call of duty) for me to give up any part of it for the greater good of others.
Now, what holds of kidneys surely holds with even greater stringency of uteruses, as being coerced into an unwanted pregnancy would seem an even graver violation of one's bodily integrity than having a kidney forcibly removed. So recognizing the value of future people does not saddle us with procreative obligations, any more than recognizing the value of dialysis patients saddles us with obligations to donate our internal organs. Placing our organs in service to the greater good is above and beyond the call of duty. This basic commitment to bodily autonomy can survive whatever particular judgments we might make about which lives contribute to the overall good. It does not give us any reason to deny value to others' lives, including future lives.[3]
The second bad argument begins by noting the paradoxes of population ethics, such as Parfit's "Mere Addition Paradox," which threatens to force us into the "Repugnant Conclusion" that any finite utopian population A can be surpassed in value by a sufficiently larger population Z of lives that are barely worth living. Without getting into the details, the mere addition paradox can be blocked by denying that good lives are absolutely good at all, and instead regarding different-sized populations as incomparable in value.
But this move ultimately avails us little, for two reasons: (1) it cannot secure the intuitively desirable result that the utopian world A is better than the repugnant world Z; and (2) all the same puzzles about quantity-quality tradeoffs can re-emerge within a single life, where it is not remotely plausible to deny that "mere additions" of future time can be of value or increase the welfare value of one's life. Since we're all committed to addressing quantity-quality tradeoffs within a life, we might as well extend whatever solution we ultimately settle upon to the population level too. So there's really no philosophical gain to temporarily dodging the issue by denying the value of future lives.
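To make the shared arithmetic concrete (the numbers below are invented, purely for illustration): on a simple total view, enough barely-good lives outscore a utopia, and, within a single life, enough barely-good years outscore a shorter wonderful one:

```latex
% Invented numbers, for illustration only.
% Population version: Z overtakes A once Z is large enough.
\[
  \underbrace{10^{10} \times 100}_{\text{utopian world } A} = 10^{12}
  \;<\;
  \underbrace{10^{15} \times 1}_{\text{repugnant world } Z} = 10^{15}
\]
% Within-a-life analog: the same quantity-quality tradeoff recurs.
\[
  \underbrace{80 \times 100}_{\text{80 wonderful years}} = 8000
  \;<\;
  \underbrace{10^{6} \times 1}_{\text{a million barely-good years}} = 10^{6}
\]
```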
The third argument rests on a simple confusion between absolute and comparative disvalue. Consider Torres:
[T]here can't be anything bad about Being Extinct because there wouldn't be anyone around to experience this badness. And if there isn't anyone around to suffer the loss of future happiness and progress, then Being Extinct doesn't actually harm anyone.
I call this the "Epicurean fallacy," as it mirrors the notorious reasoning that death cannot harm you because once you're dead there's no longer anyone there to be harmed. Of course, death is not an absolutely bad state to be in (it's not a state that you are ever in at all, since to be in a state you must exist at that time). Death's intrinsic neutrality instead makes you worse off in comparison to the alternative of continued positive existence. And so it goes at a population level: humanity's extinction, while absolutely neutral, would be awful compared to the alternative of a flourishing future containing immensely positive lives (and thus value). If you appreciate that death can be bad, even tragic, then you should have no difficulty appreciating the metaphysical possibility that extinction could be even more so. (Though we can imagine worse things than extinction, just as we can imagine worse fates than death.)
An Agnostic Case for Longtermism in Practice
William MacAskill defines Longtermism as "the idea that positively influencing the longterm future is a key moral priority of our time." After all, the future is vast. If all goes well, it could contain an astronomical number of wonderful lives. If it goes poorly, it might soon contain no lives at all or, worse, overwhelmingly miserable, oppressed lives. Because the stakes are so high, we have extremely strong moral reasons to prefer better long-term outcomes.
That in-principle verdict strikes me as difficult to deny. The practical question of what to do about it is much less clear, because it may not be obvious what we can do to improve long-term outcomes. But longtermists suggest that there is at least one clear-cut option available, namely: research the matter further. Such investigation is relatively cheap, and the potential upside is immense, so it seems clearly worthwhile.
MacAskill himself suggests two broad avenues for securing positive longterm impact: (1) contributing to economic, scientific, and (especially) moral progress, such as by building a morally exploratory world that can continue to improve over time; and (2) working to mitigate existential risks (from nuclear war, super-pandemics, or misaligned artificial intelligence, for example) to ensure that we have a future at all.
This all seems very sensible to me. I personally doubt that misaligned AI will take over the world; that sure doesn't seem the most likely outcome. But a bad outcome doesn't have to be the "most likely" one in order for it to be prudent to guard against it. I don't think any given nuclear reactor is likely to suffer a catastrophic failure, either, but I still think society should invest (some) in nuclear safety engineering, just to be safe.[4] Currently, the amount that our society invests in reducing global catastrophic risks is negligible (as a proportion of global GDP). I could imagine overdoing it (e.g., in a hypothetical neurotic society that invested the majority of its resources into such precautionary measures), but, in reality, we're surely erring in the direction of under-investment.
So, while I don't know precisely what the optimal balance would be between "longtermist" and "neartermist" moral ends, it's worth noting that we don't need to answer that difficult question in order to have at least a directional sense of where we should go from here. We should not entirely disregard the long-term future: it truly is immensely important. But we (especially non-EAs) currently do almost entirely disregard it. So it would seem wise to remedy this.
In the subsequent discussion, Arnold and Brennan press me on whether tiny chances of averting extinction could really be worth more than saving many lives for certain. I argue that this result is basically undeniable, given the right kind of (objective) probabilities.
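To give a flavor of the arithmetic at issue there (again with invented numbers): a tiny probability of averting extinction, multiplied by enough lives at stake, can exceed a certain saving in expectation:

```latex
% Invented numbers, for illustration only.
\[
  \underbrace{10^{-6}}_{\text{chance of averting extinction}}
  \times
  \underbrace{10^{10}\ \text{lives}}_{\text{at stake}}
  \;=\;
  10^{4}\ \text{expected lives}
  \;>\;
  \underbrace{10^{3}\ \text{lives}}_{\text{saved for certain}}
\]
```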
- ^
Note that I haven't bothered to add in most of the footnotes, and I've added links that weren't in the printed text.
- ^
They allow that we shouldn't want future individuals to suffer. And they allow that we should prefer any given future individual to be better off rather than existing in a worse-off state. But they think we have no non-instrumental reason to want the happy future individual to exist at all. And also [at least on most such views] no non-instrumental reason to prefer for a happier individual to exist in place of a less well-off, alternative future person. For a general introduction to population ethics, see "Population Ethics" in Chappell, Meissner, and MacAskill 2023.
- ^
This basic argument is further developed in Chappell 2017.
- ^
Of course, that's not to endorse pathological regulation that results in effectively promoting coal power over nuclear, or other perverse incentives.
Jakob Lohmar @ 2024-10-20T01:31 (+5)
I agree this is a commonsensical view, but it also seems to me that intuitions here are rather fragile and depend a lot on the framing. I can actually get myself to find the opposite intuitive, i.e. that we have more reason to make happy people than to make people happy. Think about it this way: you can use a bunch of resources to make an already happy person still happier, or you can use these resources to allow the existence of a happy person who would otherwise not be able to exist at all. (Say, you could double the happiness of the existing person or create an additional person with the same level of happiness.) Imagine you create this person: she has a happy life and is grateful for it. Was your decision to create this person wrong? Would it have been any better not to create her but to make the original person happier yet? Intuitively, I'd say, the answer is 'no'. Creating her was the right decision.
SummaryBot @ 2024-10-18T18:56 (+1)
Executive summary: While total utilitarianism and narrow person-affecting views offer extreme positions on valuing future generations, a more plausible middle ground combines strong person-directed reasons to care about existing individuals with weaker impersonal reasons to bring good lives into existence.
Key points:
- Total utilitarianism and narrow person-affecting views have significant flaws in how they value future lives.
- A hybrid approach balancing person-directed and impersonal reasons avoids these pitfalls while still prioritizing existential risk reduction.
- Common arguments against valuing future lives (procreative obligations, population ethics paradoxes, metaphysical confusion) are refuted.
- Longtermism, which prioritizes positively influencing the long-term future, is difficult to deny in principle but faces practical challenges.
- Investing in research on improving long-term outcomes and mitigating existential risks is a prudent course of action.
- While the optimal balance between "longtermist" and "neartermist" priorities is unclear, increasing consideration of the long-term future is warranted.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.