Paper summary: Moral demands and the far future (Andreas Mogensen)
By Global Priorities Institute, Rhyss @ 2022-04-29T15:54 (+21)
This is a linkpost to https://globalprioritiesinstitute.org/summary-moral-demands-and-the-far-future/
Note: The Global Priorities Institute (GPI) has started to create summaries of some working papers by GPI researchers with the aim of making our research more accessible to people outside of academic philosophy (e.g. interested people in the effective altruism community). We welcome any feedback on the usefulness of these summaries.
Summary: Moral demands and the far future
This is a summary of the GPI Working Paper “Moral demands and the far future” by Andreas Mogensen. The summary was written by Rhys Southan.
Consequentialism is the view that good and right coincide: right actions are those which maximise good and minimise bad. The best-known form of consequentialism is utilitarianism, on which the good to be maximised is well-being. Because it requires us to do whatever produces the most good, utilitarianism invites morality to override all else in our lives, and this inspires what is known as the demandingness objection: that utilitarianism asks far too much of us and is therefore unacceptable as a moral theory.
In “Moral demands and the far future”, Mogensen argues that discussions of demandingness in moral philosophy have either misunderstood the problem or failed to recognise important dimensions of it. The potential value of the far future brings these oversights into focus: once it is properly taken into account, various aspects of the demandingness debate may need to be revised. Arguments that fall apart under this consideration include: latitude for self-consideration, utilitarianism's supposedly reduced demandingness in “morally normal worlds”, “fair share” arguments, and appeals to the passive burdens borne by those who suffer in the absence of aid.
The value of the future
Since at least Singer’s 1972 paper “Famine, Affluence, and Morality,” morality’s most excessive alleged demands have been thought to come from the power of the relatively privileged to improve the lives of poorer people across the world. Recently, however, moral philosophers have started to argue that the huge number of possible future people imposes an even greater moral burden on us (Beckstead 2013, Ord 2020, Greaves & MacAskill 2021). In short, there are so many possible beings who could exist throughout the future that their interests outweigh ours by sheer weight of numbers.
Moral demands and the far future in philosophy and economics
The idea that any given generation may need to prioritise future generations above all else is new to philosophy, but economists have debated it for around a century. Economists frame the question as how much each generation must save to optimise economic output. Their models suggest that if we do not discount the value of future generations at all, every generation should save anywhere from 50% to 97.5% of its net income for optimal intergenerational growth. This seems excessive.
Some economists therefore suggest “pure time discounting”: treating the good and bad things in the lives of future people as less significant solely because those people are born later. However, it is hard to think of a principled justification for this, and even if there were one, it would not really help: to avoid excessive savings demands, future goods and bads would have to be discounted to unbelievably low levels.
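To make the discounting point concrete, here is a minimal, purely illustrative sketch of how a pure time discount rate shrinks the weight given to far-future well-being. The rates and horizons are assumptions chosen for illustration, not figures from the paper.

```python
# Illustrative sketch (assumed numbers, not the paper's): how a pure time
# discount rate rho shrinks the weight given to well-being t years from now.

rates = [0.0, 0.001, 0.01, 0.03]      # annual pure time discount rates (assumed)
horizons = [100, 1_000, 10_000]       # years into the future (assumed)

for rho in rates:
    for t in horizons:
        weight = (1 + rho) ** (-t)    # discount factor applied to well-being at time t
        print(f"rho = {rho:<5}  t = {t:>6} yrs  weight = {weight:.2e}")

# With rho = 0, far-future well-being counts fully, so its sheer scale dominates.
# Even rho = 0.01 gives well-being 1,000 years hence a weight of about 5e-5,
# i.e. a future person's entire life counts for almost nothing -- the
# "unbelievably low levels" the summary refers to.
```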
A second approach is to weigh the avoidance of inequality much more strongly. This might allow us to favour ourselves over future generations, on the assumption that future generations will generally be richer than earlier ones. However, the aversion to inequality needed to cancel demands to improve the far future would have to be wildly disproportionate, and that only shifts the excessive demands back to our own time: everyone now would be required to give up most of their resources to help worse-off contemporaries by even the tiniest amounts, a demand greater than the one global-poverty-centric utilitarianism was originally accused of making.
Is this just economics being overly demanding by aiming for optimisation instead of sufficiently decent outcomes? We might hope philosophy could help us here. Unfortunately, much of what philosophers have done to address moral demandingness falls apart once we recognise the far future of sentient life as the source of our moral demands.
Allowing self-consideration
One well-known suggestion for reducing moral demandingness is to abandon utilitarianism’s impartiality and allow everyone to weigh their own personal interests more heavily. This implies, for instance, that if someone is mildly hungry, it is not morally wrong for them to eat even though there is someone else who is hungrier.
This does appear to reduce demands on the rich to help poorer contemporaries. But it parallels the problem with pure time discounting: it cannot put a dent in demands from the far future unless we tilt the balance obscenely in our own favour. We would need to think it acceptable to weigh our own interests a hundred million times as heavily as those of a future person in order to favour ourselves over future people. If we suppose future beings are very much like us, just with different cultures and more advanced technologies, this is very hard to justify.
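As a rough illustration of why the required tilt is so extreme, here is a minimal sketch. Every number in it is an assumption chosen only to land near the summary's "hundred million" figure; none comes from the paper.

```python
# Illustrative sketch (assumed numbers): how large a personal-prerogative
# weight w would have to be before our own interests outweigh the expected
# far-future benefit of a sacrifice we could make.

cost_to_self = 1.0              # well-being we give up, in arbitrary units (assumed)
benefit_per_person = 1e-10      # tiny expected benefit per future person (assumed)
num_future_people = 1e18        # expected far-future population affected (assumed)

expected_future_benefit = benefit_per_person * num_future_people
required_weight = expected_future_benefit / cost_to_self

print(f"expected future benefit: {expected_future_benefit:.0e}")
print(f"required self-weight w : {required_weight:.0e}")
# With these assumed numbers, w must exceed 1e8 -- on the order of the
# "hundred million" self-weighting mentioned above -- before a personal
# prerogative cancels the demand.
```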
Fair shares
Utilitarianism assumes that the demands of beneficence depend entirely on how much good can be done. In this way, it incentivises moral freeloading: if some people do very little good, others who have already done a lot are still morally obliged to pick up their slack. It would seemingly be less demanding, and fairer, if each person's obligations were set at the share they would need to contribute if everyone did their part, and then held fixed despite the reality of widespread moral laziness and the much greater amount of good left to be done. But this backfires when the moral demands in question come from the far future.
Full moral compliance includes the compliance of future generations. If we expect upcoming generations to let the world implode, there will not be much value in the future no matter what we do, so we might as well focus on ourselves. If we instead imagine that all generations from now on will work devoutly to extend and improve sentient existence, it is much more likely that this goal will be achieved, which ironically increases the overwhelming obligation to help achieve it. Rather than relieving us of obligations, imagining full moral compliance increases the expected value of the far future and thus our obligations regarding it.
Morally normal worlds
Some philosophers argue utilitarianism only seems demanding because we are in an unusually bad world. If we were in a “morally normal” world which already had more equitable wealth distribution, less oppressive institutions, and was not in a constant state of emergency, maximising the good and minimising the bad would not be so hard.
Again, having to think about the far future undermines this argument. The overwhelming value of improving the far future need not stem from moral dysfunction. The future might be good, just and equitable without our interventions, yet even more glorious and long-lasting if we devote ourselves to its betterment. The demand that we do so remains.
Passive burdens
Another way of questioning utilitarianism’s demandingness is to point out that while it may seem to place stifling burdens on relatively privileged people to help the worse-off, these “active” burdens are minor compared to “passive” burdens on the less fortunate who are left to suffer in poverty. In practice, then, utilitarianism should reduce burdens overall by compelling the rich to relieve the burdens of the poor.
This argument clearly has global wealth disparity in mind, and reckoning with the far future upends it in at least two ways. One is that utilitarians are now expected to ignore the suffering of their poorer contemporaries in order to focus on the not-yet-existent. A second is that improving the value of the far future by extending the lifespan of sentient existence could have the unintended consequence of increasing future burdens as well. Even if average well-being rises dramatically in the future, we should expect some future lives to be miserable out of sheer bad luck. Increasing the number of future individuals by reducing existential risk would therefore also increase the absolute number of harms and bad lives there will be.
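The arithmetic behind this second point is simple, and the sketch below makes it explicit. All of the population figures and fractions are assumptions for illustration only.

```python
# Illustrative sketch (assumed numbers): even if the *fraction* of miserable
# lives falls, a much larger future population can contain *more* miserable
# lives in absolute terms.

present_population = 8e9                   # roughly today's population
present_miserable_fraction = 0.10          # assumed for illustration

future_population = 1e12                   # assumed: far larger, thanks to reduced existential risk
future_miserable_fraction = 0.001          # assumed: a 100x lower rate of bad lives

print(f"miserable lives now   : {present_population * present_miserable_fraction:.1e}")
print(f"miserable lives later : {future_population * future_miserable_fraction:.1e}")
# 8.0e8 now versus 1.0e9 later: the absolute number of bad lives grows
# even though average well-being and the share of good lives improve.
```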
Conclusion
Re-examining the demandingness objection to utilitarianism in light of the future’s vast potential undermines some previous arguments in utilitarianism’s defence. Then again, rejecting utilitarianism does not seem to help us. At this point, philosophers have only just started to recognise the problems that arise when we take the interests of future people into account. There is clearly a lot more work to be done.
References
Nicholas Beckstead (2013). On the Overwhelming Importance of Shaping the Far Future. PhD thesis, Rutgers University.
Paul Christiano (2014). We can probably influence the far future. Rational Altruist.
Hilary Greaves & William MacAskill (2021). The case for strong longtermism. GPI Working Paper (No. 5-2021).
Andreas Mogensen (2020). Moral demands and the far future. Philosophy and Phenomenological Research, 1–19. doi:10.1111/phpr.12729
Toby Ord (2020). The Precipice: Existential Risk and the Future of Humanity. London: Bloomsbury.
Peter Singer (1972). Famine, affluence, and morality. Philosophy and Public Affairs, 1(3), 229–243.
Linch @ 2022-04-29T23:29 (+2)
Thanks so much for the summary. I liked the explication.
Some philosophers argue utilitarianism only seems demanding because we are in an unusually bad world. If we were in a “morally normal” world which already had more equitable wealth distribution, less oppressive institutions, and was not in a constant state of emergency, maximising the good and minimising the bad would not be so hard.
Again, having to think about the far future undermines this argument. The overwhelming value of improving the far future need not stem from moral dysfunction. The future might be good, just and equitable without our interventions, yet even more glorious and long-lasting if we devote ourselves to its betterment. The demand that we do so remains.
I've only read your summary and the linked section in the original paper and haven't read the references, but (if I understand the argument correctly and the references don't cover nuances that I've missed), I think this is wrong.
As I understand it, there are multiple ways in which utilitarianism can be "too demanding." Two seem salient to me (there might well be others):
- In the limit, utilitarianism does not permit any notion of practical free will, or supererogatory actions ("everything that is not obligatory is forbidden").
- If I understand the first paragraph correctly, this is not the notion of demandingness that is being contested here.
- Even if you relax utilitarianism to something much weaker (e.g., "we only have a moral obligation to perform actions that greatly benefit others at relatively minor cost to ourselves"), we still have strong moral duties that seem at odds with common-sense morality (e.g., maybe we're obligated to donate >50% of our income to global poverty causes).
Since nobody is debating that #1 is too demanding, the conversation is primarily about #2.
The new argument is that from a far-future perspective, even if we are in a "morally normal" world, we may still have what appear to be extraordinary obligations even under fairly weak versions of utilitarianism. I think this is wrong, because a) our world is clearly morally abnormal, and b) most observers not-too-dissimilar-from-us are in worlds that are much closer to intuitive conceptions of "morally normal" (i.e., worlds with substantially more relaxed moral duties as a result).
I think b) is true for two reasons:
1. We appear to be unusually early in the lifecycle of Earth-originating observers. Almost all of our (in-expectation) descendants will have weaker moral obligations than us, because they cannot (in expectation) affect the future nearly as much as we can. Put another way, the Ramsey rules are much less relevant in equilibrium, because exponential economic growth will stop within the next few thousand years, a tiny slice of future history. See Holden's This Can't Go On and Buck's critiques of MacAskill on HoH for more explication of this.
- Note that if you disagree that we are, in expectation, unusually early observers, whether because of theoretical arguments like the Doomsday argument or because of practical beliefs about (e.g.) extinction risk, this instead weakens the argument for longtermism and thus also weakens the notion that we have strong long-term moral obligations.
2. It seems probable that most "observers like us" aren't living in basement reality. For most observers knowingly in simulations, or Boltzmann brains, etc., it seems unlikely that utilitarianism has nearly the same moral oomph as it does for us, assuming most of us believe that there's a decently high likelihood that our anthropically weighted status is not in simulations.