Saving Lives vs Creating Lives
By Richard Y Chappell @ 2022-12-15T00:32 (+45)
This is a linkpost to https://rychappell.substack.com/p/killing-vs-failing-to-create
tl;dr: Total utilitarianism treats saving lives and creating new lives as equivalent (all else equal). This seems wrong: funding fertility is not an adequate substitute for bednets. We can avoid this result by giving separate weight to both person-directed and undirected (or "impersonal") reasons. We have weak impersonal reasons to bring an extra life into existence, while we have both impersonal and person-directed reasons to aid an existing individual. This commonsense alternative to totalism still entails longtermism, as zillions of weak impersonal reasons to bring new lives into existence can add up to overwhelmingly strong reasons to prevent human extinction.
Killing vs Failing to Create
I think the strongest objection to total utilitarianism is that it risks collapsing the theoretical distinction between killing and failing to create. (Of course, there would still be good practical reasons to maintain such a distinction in practice; but I think there's a principled distinction here that our theories ought to accommodate.) While I think it's straightforwardly good to bring more awesome lives into existence, and so failing to create an awesome life constitutes a missed opportunity for doing good, premature death is not just a "missed opportunity" for a good future; it's harmful in a way that should especially concern us.
For example, we clearly have much stronger moral reasons to save the life of a young child (e.g. by funding anti-malarial bednets) than to simply cause an extra child to exist (e.g. by funding fertility treatments or incentivizing procreation). If totalism can't accommodate this moral datum, that would seem a serious problem for the view.
How can we best accommodate this datum? I think there may be two distinct intuitions in the vicinity that I'd want to accommodate:
(1) Something about the intrinsic badness of (undesired) death.
(2) Counting both person-directed and undirected ("impersonal") moral reasons.
The Intrinsic Harm of Death
Most of the harm of death is comparative: not bad in itself, but worse than the alternative of living on. Importantly, we only have reason to avoid comparative harms in ways that secure the better alternative. To see this, suppose that if you save a child's life, they'll live two more decades and then die from an illness that robs them of five more decades of life. That latter death is then really bad for them. Does it follow that you shouldn't save the child's life after all (since it exposes them to a more harmful death later)? Of course not. The later death is worse compared to living the five extra decades, but letting them die now would be even worse for them, no matter that the early death, in depriving them of just two decades of life, is not "as bad" (comparatively speaking) as the later death would be (in a different context, with a different point of comparison).
So we should not aim to minimize comparative harms of this sort: that would lead us badly astray. But it's a tricky question whether the harm of death is purely comparative. In "Value Receptacles" (2015, p. 323), I argued that it plausibly is not:
Besides preventing the creation of future goods, death is also positively disvaluable insofar as it involves the interruption and thwarting of important life plans, projects, and goals. If such thwarting has sufficient disvalue, it could well outweigh the slight increase in hedonic value obtained in the replacement scenario [where one person is "struck down in the prime of life and replaced with a marginally happier substitute"].
Thwarted goals and projects may make death positively bad to some extent. But the extent must be limited. However tragic it is to die in one's teens (say), I don't think one could plausibly say that it's so bad as to render the person's life overall not worth living. The goods of life can fairly easily outweigh the harm of death, I believe.
It's a tricky question where exactly to draw the line here. Suppose a couple undergoing fertility treatment learns that all of their potential embryos have a genetic defect that would inevitably result in painless death while the child is still very young. That obviously gives the parents strong prudential reasons to refrain from procreating and suffering the immense grief that would soon follow. But if we bracket others' interests, and focus purely on the interests of the potential child themselves: is it ever the case that an overall-happy life, however short, is not worth living, purely due to the fact of death? I could, of course, imagine a painful death outweighing the happiness of a very short life. But suppose the death is painless, or at any rate is nowhere near to outweighing the prior joy the life contains. Yet it does thwart the child's plans and projects. Is that so bad that it would have been better for them to never exist at all? I find that hard to believe.
For another test: imagine a future society that uses artificial wombs to procreate (and parents aren't notified until the entire process is successfully completed). Suppose some fetuses have a congenital condition that causes them to painlessly die almost immediately after first acquiring sentience (or whatever is required for morally relevant interests of a sort that makes death harmful for them). How much should the society be willing to invest in diagnostic testing to instead allow the defective embryos to be aborted prior to acquiring moral interests? (Or, at greater cost, to test the gametes prior to fertilization?) Could preventing short but painless existence ever take priority over other societal goals like saving lives and reducing suffering?
We probably can't give that much weight to the intrinsic harm of (painless) death, if it's never enough to make non-existence look especially desirable in comparison. So I think we may need to look elsewhere to find stronger reasons.
Person-Directed and Undirected Reasons
Much population ethics discourse sets up a false dichotomy between the two extremes of impersonal total utilitarianism and narrow person-affecting views on which we've no reason to bring happy lives into existence. I find this very strange, since a far more intuitive middle-ground view would acknowledge that we have both person-directed and undirected (or "impersonal") reasons.
Failing to create a person does not harm or wrong that individual in the way that negatively affecting their interests (e.g. by killing them as a young adult) does. Contraception isn't murder, and neither is abstinence. Person-directed reasons explain this common-sense distinction: we have especially strong reasons not to harm or wrong particular individuals.
But avoiding wrongs isn't all that matters. There's always some (albeit weaker) reason to positively benefit possible future people by bringing them into a positive existence, even though it doesn't wrong anyone to remain childless by choice.
And when you multiply those individually weak reasons by zillions, you can end up with overwhelmingly strong reasons to prevent human extinction, just as longtermists claim. (This reason is so strong that it would plausibly be wrong to neglect or violate it, even though it does not wrong any particular individual, just as the non-identity problem shows that one outcome can be worse than another without necessarily being worse for any particular individual.)
On this hybrid view, which I defend in more detail in "Rethinking the Asymmetry" (2017), we are warranted in some degree of partiality towards the antecedently actual. We have weak impersonal reasons to bring an extra life into existence, while we have both impersonal and person-directed reasons to aid an existing individual (or a future individual who is certain to exist independently of our present decision).
Conclusion
I think this hybrid view is very commonsensical. We all agree that you can harm someone by bringing them into a miserable existence, so there's no basis for denying that you can benefit someone by bringing them into a happy existence. It would be crazy to claim that there is literally no reason to do the latter. And there is no theoretical advantage to making this crazy claim. (As I explain in "Puzzles for Everyone", it doesn't solve the repugnant conclusion, because we need a solution that works for the intra-personal case, and whatever does the trick there will automatically carry over to the interpersonal version too.) So the narrow person-affecting view really does strike me as entirely unmotivated.
But, as indicated above, this very natural hybrid view still entails the basic longtermist claim that we've very strong moral reasons to care about the distant future (and strongly prefer flourishing civilization over extinction). So the notion that longtermism depends on stark totalism is simply a mistake.
Totalism is on the right track when it comes to many big-picture questions, but it is an oversimplification. Failing to create is importantly morally different from killing. We have especially stringent reasons to avoid the latter. (It's an interesting further question precisely how much extra weight we should give to saving lives over creating lives.) But we still have some moral reason to want good lives to come into existence, and that adds up to very strong moral reasons to care about the future of humanity in its entirety.
Ariel Simnegar @ 2022-12-15T14:39 (+5)
You argue that the value added by saving a life is separable into two categories:
- Person-directed: The value added by positively affecting an existing person's interests.
- Undirected: The value added simply by increasing the total amount of happy life lived.
Let's define the "coefficient of undirected value" C between 0 and 1 to be the proportion of value added for undirected reasons, as opposed to person-directed reasons. The totalist view would set C=1, arguing that there is no intrinsic value to helping a particular person. The person-affecting view would set C=0, arguing that it is only coherent to add value when it positively affects an existing person. You argue that this is a false dichotomy, and that C should be "low," i.e. giving low moral weight to interventions which only produce undirected value (e.g. increasing fertility) relative to interventions which produce both categories of value (e.g. saving a life).
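To make this concrete, here is one minimal formalization (a sketch; it assumes the two components of value simply add):

$$V_{\text{save}} = V_{\text{directed}} + V_{\text{undirected}}, \qquad C = \frac{V_{\text{undirected}}}{V_{\text{save}}}, \qquad V_{\text{create}} = C \cdot V_{\text{save}}.$$

On this reading, the totalist's C=1 makes creating a life exactly as valuable as saving one, while the narrow person-affecting view's C=0 makes creating a life add no value (all else equal).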
I think the totalist view deserves more credence than you give it in your post, and that moral uncertainty should accordingly adjust C upwards to be "high." I would endorse the implication that causing a baby to be born who otherwise would not have been is "close to as good" as saving a life.
Consider choosing between the following situations (in the vein of your post's discussion of the intrinsic harm of death):
- A woman wants a child. You use your instant artificial womb to create an infant for her.
- A woman just gave birth to an infant. The infant is about to painlessly die due to a genetic defect. You prevent that death.
For the sake of argument, let's assume that the woman's interests are identical in both cases. (i.e. the sadness Woman 1 would have had if you didn't make her a child is the same as the sadness Woman 2 would have had if her child painlessly died, and the happiness of both women from being able to raise their child is the same.)
To me, it seems most intuitive that one should have little to no preference between Case 1 and Case 2. The outcomes for both the woman and the child are (by construction) identical. Of course, the value added in Case 1 is undirected, since the child doesn't yet exist for its interests to be positively affected by your decision, while the value added in Case 2 includes both directed and undirected components. If we follow this intuition, we must conclude that C=1, or that C is very close to 1. Even if you still have a significant intuitive preference for Case 2, suppose you're choosing between two occurrences of Case 1 and one occurrence of Case 2. Many would now switch to preferring the two occurrences of Case 1, since we would then have two happy mothers and children versus one. However, this still implies C>0.5, as shown below. If we accept the idea that Case 1 is close to as good as Case 2, then it seems hard to escape the conclusion that C is "high," and we should adjust the way we think about increasing fertility accordingly.
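Spelling out that last inference (a sketch using the formalization above, and assuming value adds across independent occurrences): one occurrence of Case 2 is worth $V_{\text{save}}$, while each occurrence of Case 1 is worth $C \cdot V_{\text{save}}$, so preferring two of Case 1 to one of Case 2 requires

$$2 \cdot C \cdot V_{\text{save}} > V_{\text{save}} \quad\Longrightarrow\quad C > 0.5.$$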
Let me know what you think!
Richard Y Chappell @ 2022-12-15T15:47 (+3)
Interesting! Thanks for this.
I should clarify that I'm not committed to C being "low"; but I do think it should be somewhere between 0 and 1 (rather than at either extreme). I don't have any objection to C>0.5, for example, though I expect many people would find it more intuitive to place it somewhat lower. I'd probably be most comfortable somewhere around 0.5 myself, but I have very wide uncertainty on this, and could probably easily be swayed anywhere in the range from 0.2 to 0.8 or so. It seems a really hard question!
To me, it seems most intuitive that one should have little to no preference between Case 1 and Case 2. The outcomes for both the woman and the child are (by construction) identical.
My thought is that what we (should) care about may vary between the cases, and change over time (as new people come into existence). Roughly, the intuition is that we should care especially about individuals who do or will exist (independently of our present actions). So once a child exists (or will exist), we may have just as much reason to be thankful for their creation as we do for their life being saved; so I agree the two options don't differ in retrospect. But in prospect, we have (somewhat) less reason to bring a new person into existence than to save an already-existing person. And I take the "in prospect" perspective to be the one that's more decision-relevant.
Ariel Simnegar @ 2022-12-15T15:53 (+1)
Thanks for the clarification, and for your explanation of your thought process!
Vasco Grilo @ 2022-12-15T21:57 (+4)
Hi Richard,
Thanks for sharing your thoughts.
Total utilitarianism treats saving lives and creating new lives as equivalent (all else equal). This seems wrong: funding fertility is not an adequate substitute for bednets.
It is quite unclear to me whether total utilitarianism treats saving lives and creating new lives as equivalent, because all else seems far from equal. For example:
- Saving a life prevents the suffering of death, and also the suffering of many people besides the person whose life was saved.
- Saving a life prevents resources from being wasted.
- The net effect of saving a life versus creating one on population size may well differ, since lives are saved at some positive age but created at age 0.
If the goal is testing total utilitarianism, I believe we should search for situations in which total utility is constant, but we still think one of them is better than the other. I do not think this can be achieved with real world examples, as reality is too complex. So I think it is better to consider thought experiments. For instance:
- 100 people live for 100 years with an annual utility per capita of 10.
- 100 people live for 50 years with an annual utility per capita of 10, and thanks to a life-saving intervention are able to live for 50 more years with an annual utility per capita of 10.
- 100 people live for 50 years with an annual utility per capita of 10, and then 100 lives are created and live for 50 years with an annual utility per capita of 10.
All three situations have a total utility of 100 k (see the quick check below). The 1st involves neither saving nor creating lives, the 2nd involves saving lives, and the 3rd involves creating lives. However, I would say they are morally identical.
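As a quick sanity check of that arithmetic, here is a minimal sketch (the parameters are just the ones stated in the scenarios above):

```python
# Total utility = number of people x years lived x annual utility per capita.
PEOPLE, ANNUAL_UTILITY = 100, 10

scenarios = {
    "1: everyone lives 100 years": PEOPLE * 100 * ANNUAL_UTILITY,
    "2: everyone lives 50 years, then is saved for 50 more": PEOPLE * (50 + 50) * ANNUAL_UTILITY,
    "3: everyone lives 50 years, then 100 new people live 50 years": (PEOPLE * 50 + 100 * 50) * ANNUAL_UTILITY,
}

for name, total in scenarios.items():
    print(name, "->", total)  # each case totals 100000, i.e. 100 k
```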
I may be missing something. Thoughts are welcome!
Richard Y Chappell @ 2022-12-16T01:42 (+4)
You may need to imagine yourself inside the world in order for person-directed reasons to get a grip. Suppose that you're at year 49, and can choose whether to realize the 2nd or 3rd outcome next year. That is, you can save everyone (for 50 more years), or you can let everyone die while setting in motion a "replacement" process (to create new people with 50 years of life). It seems to me that there's some moral reason to prefer the former option!
Vasco Grilo @ 2022-12-16T11:32 (+2)
Thanks for replying!
I can see why my intuitions would point to the 2nd option being better, but this can be explained by them not internalising the conditions of the thought experiment.
If I am at the end of year 49, and can choose whether to realise the 2nd or 3rd outcome after the end of year 50, I would intuitively think that:
- Years 51 to 100 would have the same utility (50 k) for both the 2nd and 3rd options.
- Year 50 would have less utility for the 3rd option than for the 2nd. This is because everyone would die at the end of year 50 in the 3rd option, and dying sounds intuitively bad.
However, the 2nd point violates the condition of equal utility. To maintain the conditions of the thought experiment, we would have to assume that year 50 contains the same utility in both options. In other words, the annual utility per capita of 10 would have to be realised every year, instead of simply being the mean over the first 50 years. To better internalise these conditions, we can say everyone would instantaneously stop being alive (instead of dying) at the end of year 50 in the 2nd option. In this case, both options seem morally identical to me.
Thinking about it, one life could be described as a sequence of moments where one instantaneously stops being alive and then is created.
Richard Y Chappell @ 2022-12-16T14:11 (+4)
one life could be described as a sequence of moments where one instantaneously stops being alive and then is created
I do think a key issue here is whether or not that's the right way to think about it. As I wrote in the 'Personal Identity' chapter of Parfit's Ethics (2021):
If our future selves are better regarded as entirely new people, there would seem no basis for distinguishing killing from failing to bring into existence. You would have to reconceive of guns as contraceptive agents. Nobody survives the present moment anyway, on this view, so the only effect of lethally shooting someone would be to prevent a new, qualitatively similar person from getting to exist in the next moment. Not so bad!
That's an extremely revisionary claim, and not one I think we should accept unless it's unavoidable. But it is entirely avoidable (even on a Parfitian account of personal identity). We can instead think that psychological continuants of existing persons have an importantly different moral status, in prospect, from entirely new (psychologically disconnected) individuals. We may think we have special reasons, grounded in concern for existing individuals, to prefer that their lives continue rather than being replaced -- even if this makes no difference to the impersonal value of the world.
That said, if your intuitions differ from mine after considering all these cases, then that's fine. We may have simply reached a bedrock clash of intuitions.
Vasco Grilo @ 2022-12-16T14:57 (+2)
Thanks for sharing.
If our future selves are better regarded as entirely new people, there would seem no basis for distinguishing killing from failing to bring into existence. You would have to reconceive of guns as contraceptive agents. Nobody survives the present moment anyway, on this view, so the only effect of lethally shooting someone would be to prevent a new, qualitatively similar person from getting to exist in the next moment. Not so bad!
I get the point, but the analogy is not ideal. To ensure total utility is similar in both situations, I think we should compare:
- Doing nothing.
- Killing and reviving someone who is in dreamless sleep.
- Killing one person while reviving another may lead to changes in total utility, so it does not work so well as a counterexample in my view.
- Being killed and revived while awake would maybe feel strange, which can also change total utility, so an example with dreamless sleep helps.
- Ideally, the moments of killing and reviving should be as close in time as possible. The further apart they are, the more total utility can differ. Dreamless sleep also helps here, because the background stream of thought is more constant. If I were instantly killed while sitting on the sofa at home with some people around me, and then instantly revived a few minutes later, I might find myself surrounded by worried people. This means the total utility may well have changed.
Saying the 2 situations above are similar does not sound revisionary to me (assuming we could ensure with 100 % certainty that the 2nd one would work).
That said, if your intuitions differ from mine after considering all these cases, then that's fine. We may have simply reached a bedrock clash of intuitions.
Likewise, and thanks for engaging!
Richard Y Chappell @ 2022-12-16T15:06 (+4)
One quick clarification: If someone is later alive, then they have not previously been "killed", as I use the term (i.e. to mean the permanent cessation of life; not just temporary loss of life or whatever). I agree that stopping someone's heartbeat and then starting it again, if no harm is done, is harmless to that individual. What I'm interested in here is whether permanently ending someone's life, and replacing them with an entirely new (psychologically disconnected) life, is something we should regard negatively or with indifference, all else equal.
Vasco Grilo @ 2022-12-16T16:15 (+4)
Ah, sorry, that makes sense. I can also try to give one example where someone dies permanently. For all else to be equal, we can consider 2 situations where only one person is alive at any given time (such that there are no effects on other persons):
- World A contains 1 person who lives for 100 years with mean annual utility of 10.
- World B contains:
- 1 person X who lives for 50 years with mean annual utility of 10, and then instantly dies.
- 1 person Y who is instantly created when person X instantly dies, and then lives for 50 years with mean annual utility of 10.
Both worlds have utility of 1 k, and feel equally valuable to me.
Brian E Adams @ 2022-12-19T05:26 (+3)
I don't understand the positive duty to procreate, which seems to be an accepted premise here.
Morality is an adverb, not an adjective.
Is a room of 100 people 100x more "moral" than a room with 1 person? What's wrong with calling that a morally neutral state? (I'm not totalling up happiness or net pleasure or any of that weird stuff.)
Only when we are forced into a trolley problem, where we face actual decisions (e.g. kill 1 person or kill 100 people), does the number of people have significance.
Richard Y Chappell @ 2022-12-19T14:11 (+5)
I don't think there's a "duty to procreate". I wrote that "There's always some reason to positively benefit possible future people by bringing them into a positive existence, even though it doesn't wrong anyone to remain childless by choice." In other words: it's a good thing to do, not a duty. Some things are important, and worth doing, even though morally optional.
Is a world containing happy lives better than a barren rock? As a staunch anti-nihilist, I think that good lives have value, and so the answer is 'yes'.
Note that I wouldn't necessarily say that this world is "more moral", since "moral" is more naturally read as a narrow assessment of actions, rather than outcomes. But we should care about more than just actions. The point of acting, as I see it, is to bring about desirable outcomes. And I think we should prefer worlds filled with vibrant, happy lives over barren worlds. That's not something I'm arguing for here; just a basic premise that I think is partly constitutive of having good values.
Brian E Adams @ 2022-12-20T00:24 (+3)
I think I understand and that makes sense to me.
rootpi @ 2022-12-16T12:49 (+3)
Hi Richard - this all makes a lot of sense. Gustav Alexandrie and I have a model of 'perspective-weighted utilitarianism' which also puts intermediate weight on potential people and has some of the same motivations / implications. I presented it at the June GPI workshop and would be happy to discuss.
-Julian
Richard Y Chappell @ 2022-12-16T13:57 (+3)
Sounds great! If/when you have a public draft available, please do share the link!
Geoffrey Miller @ 2022-12-15T20:44 (+2)
Richard - interesting post. I think this hybrid approach seems more or less reasonable.
I do think the dichotomy between 'person-directed' and 'undirected' concerns is a bit artificial, and it glosses over some intermediate cases in ways that over-simplify the population ethics.
Specifically, any given couple considering whether to have children, or whether to allow a particular fetus to reach term (versus aborting it), is not exactly facing a dilemma about a currently existing person with particular traits -- but they aren't exactly facing an 'undirected' dilemma about whether to add another vague abstract genetic person to the total population either.
Rather, they're facing a decision about whether to create a person who's likely to have certain heritable traits that represent some Mendelian combination of their genes. They're facing a somewhat stochastic distribution of possible traits in the potential offspring, and that complicates the ethics. But when assessing the value of any existing life (e.g. the kid at risk of malaria who might live another 50 years if they survive), we're also facing a somewhat stochastic distribution of possible future traits that might emerge decades in the future.
In other words, pace Derek Parfit, there might be almost as much genetic and psychological continuity between parent and child as between person X at time Y and person X at time Y + many decades. In neither case does the 'person-directed' thinking quite capture the malleable nature of human identity within lives and across generations.
dstudioscode @ 2023-11-22T09:50 (+1)
Does a positive obligation exist to procreate?
While controversies surround total utilitarianism and the Repugnant Conclusion, what about the ethical implications of sperm donation? Given that it typically entails negligible costs and results in creating content lives in developed nations, could sperm donation be considered a moral duty? Despite concerns about overpopulation and its impact on climate change, could individual actions be akin to a Prisoner's Dilemma, where meaningful change requires large-scale government intervention and individual actions do not matter at all on a large scale?
Regarding meat consumption, when does the act of creating a life outweigh its potential negative consequences, such as the dietary choices that life might make? If refraining from creating life is justified on the basis of potential meat consumption (as seen in vegan antinatalist perspectives), does it logically follow that it is morally acceptable to kill non-vegans due to their meat consumption?
Finally, you said that saving a life is more important than creating one, though creating one has some relevance. So how many lives created is equal to one life saved? What is the break-even point?
Thanks.
MattBall @ 2022-12-16T14:32 (+1)
Thanks for this, Richard. Very thoughtful.
However, after being a ~total utilitarian for decades, I've come to realize it is beyond salvage. As I point out in the chapter "Biting the Philosophical Bullet" here.
Take care!