In Defense of Stakes-Sensitivity
By Richard Y Chappell 🔸 @ 2025-10-19T21:35 (+16)
This is a linkpost to https://www.goodthoughts.blog/p/in-defense-of-stakes-sensitivity
TL;DR: Stakes-sensitive beneficence is compatible with individually-directed concern, even for future individuals (see fn 2). Also, we can have reasons to optimize that are decisive for good practical reasoning even if we do not call the resulting act a matter of "duty". Good people are concerned to act well, not just to discharge their duties.
In "The Case for Strong Longtermism", Greaves & MacAskill appeal to what we may call the Stakes-Sensitivity Principle (SSP): "When the axiological stakes are very high, there are no serious side-constraints, and the personal prerogatives are comparatively minor, one ought to choose a near-best option."
This is an extremely modest principle. Consider that moderate deontology allows that even the most serious side-constraints can be overridden when the stakes are sufficiently high. SSP does not even claim so much. It is compatible with outright absolutism.
Even so, Charlotte Unruh's "Against a Moral Duty to Make the Future Go Best" disputes this modest principle, which she characterizes as "constrained utilitarianism" (I prefer the term "beneficentrism", as the idea's appeal is by no means limited to utilitarians). Unruh writes:
On this view, demands of beneficence always generate moral duties to aid, unless these duties are outweighed by a prerogative.
Deontologists can reject this characterization of beneficence. They can hold, against [SSP], that we have no general duty to make things go best from an impartial perspective. The deontological duty of beneficence is limited in principle, or, as Wiggins puts it, "by its true nature".[1]
Wiggins, in "The Right and the Good and W. D. Ross's Criticism of Consequentialism", argues that Ross's conception of beneficence as a general duty to promote intrinsic value (all else equal) is in tension with Ross's own insight that "the essential defect of the 'ideal utilitarian' theory is that it ignores, or at least does not do full justice to, the highly personal character of duty." Wiggins suggests, plausibly enough, that it is not just the other Rossian prima facie duties but beneficence too that should be understood as "personal" in character:
The beneficent person is one who helps X, or rescues Y, or promotes this or that cause and does so because each of those things is in its own way an important and benevolent end. His acts are not directed at simply increasing the net quantity of intrinsic good in the world. (p. 274)
I couldn't agree more! This is precisely the sort of view I defend in "Value Receptacles" and "The Right Wrong-Makers": the utilitarian's fundamental reasons (and hence fitting motivations) are not about "promoting value" (that's just a kind of summary criterion) but rather concern promoting the interests of concrete individuals. Crucially, as I argue in those papers, that's all perfectly compatible with adjudicating tradeoffs between competing interests by giving each its due weight and optimizing accordingly. Caring equally about each separate individual, and making rational tradeoffs, will naturally result in your preferring and choosing the outcome that can be characterized as "maximizing value". But that's quite different (and psychologically distinguishable) from merely caring about "the net quantity of intrinsic good." To insist that maximizing well-being entails such merely abstract concern, rather than caring about concrete individuals, is a (now-refuted) ideological prejudice.[2]
Wiggins continues (p. 275):
We can see it [beneficence] as a schema that generates countless more specific, as Ross would say prima facie, duties most of which arise (often in a supererogatory or non-mandatory manner) from the agent's historic situation, arise from who he is, arise from who his putative beneficiary is, or arise from what goals he has already committed himself to promoting (say, education or music or whatever).
As indicated above, I'm on board with the "schematic" interpretation of beneficence. But no supporting argument is offered for the limiting "most of which arise…" clause. Should we really think that some "putative beneficiaries" just don't matter, such that a moral agent has no pro tanto reason whatsoever (even when all else truly is equal) to benefit them or wish them well? Consider what that would mean. Suppose the individual in question were drowning in Singer's pond, and our agent's finger were resting on a button that would activate a robot lifeguard to rescue the drowning person. Would it really be decent for our moral agent to lift their finger without pressing the button, and justify themselves by saying, "Well, it's not as though this would advance the cause of music, which is what I'm personally committed to promoting"?
Surely it is vastly more credible (and decent) to grant that each person's interests give rise to at least some pro tanto reasons of beneficence: to wish them well, to assist them when doing so is costless (or sufficiently low-cost, including consideration of opportunity costs), and so on. If you want to add a prerogative to favor causes associated with personal interests to some extent, feel free. But such prerogatives, like moderate constraints, are surely stakes-sensitive. If we modify the previous case to add an alternative button that somehow supports the arts, and stipulate that only the first-pressed button has any effect, some may judge that it is now permissible to let the person drown and support the arts instead. But if we next suppose that the life-saving button does not just save one life, but rather averts an entire genocide (or comparable harms from natural disasters), then it would once again be clearly indecent to refrain from picking the vastly better option.
Let us return to Unruh. She wants to defend an incredibly lax view of beneficence:
As I understand the view that the duty of beneficence is limited, prerogatives do not serve to protect agents from the demands of a general and unlimited duty to do good. Rather, prerogatives allow moral agents to exercise latitude in aspects of their lives that have to do with aiding others.
The difference between these two approaches is clearest when we consider costless opportunities to do (more) good, like my first Button case. I can understand seeing a role for prerogatives to limit our exposure to burdensome demands. I cannot begin to fathom a moral perspective on which we should have "latitude" to gratuitously let others die, or otherwise make the world worse than is effortlessly within our grasp.[3]
Unruh may[4] allow that we do have a kind of pro tanto reason to maximize the good (in the schematic sense!), but it is a non-requiring (or merely "justifying") reason. To understand the significance of the requiring / non-requiring distinction, more needs to be said about what moral "requirement" signifies. Moral philosophers standardly proceed as though it's clear that everyone in the field uses the term to pick out the same concept, but I think this is far from clear. Further, on what I take to be the best understanding of the concept, it is a kind of normative error to discount the force of a reason just because it is "non-requiring". That is, whereas it is an implicit presupposition of Unruh's paper (like many others) that normative inquiry should be focused on questions of "duty", I think a good person engaging in moral deliberation will set their sights higher than just the absolute minimum that they can get away with. And I take it that SSP is just completely indisputable when concerned with the sense of "ought" that guides this more ambitious form of moral deliberation that good people are inclined to engage in.
That is, it is clearly morally better to do more good rather than less, all else equal, whether or not you want to call this a "duty". Unruh anticipates such a response:
A final defence of the Stakes Sensitivity Argument is that its conclusion should be understood in a weak sense. Even if we are not required to do what is best, it is still better to do more long-term good than less near-term good. However, if self-sacrifice for the present generation and self-sacrifice for future generations both lie beyond the call of duty, it is unclear exactly what difference future stakes make. Saving resources for the future seems to become one of many ways in which we can discharge our imperfect duty of beneficence.
I find this passage very puzzling. The point of Greaves & MacAskill's argument is to support deontic strong longtermism, i.e. the claim that "in the most important decision situations facing agents today, one ought to choose an option that is near-best for the far future." This is, to many, a surprising and highly practically significant result, even if the "ought" in question is the ought of ideal practical reason rather than the ought of duty. Many assume that we have more reason to favor near-term interests, and I take the stakes-sensitivity argument (combined with axiological strong longtermism) to show that this common assumption is mistaken.
Longtermism is not just "one of many ways in which we can discharge our imperfect duty of beneficence," because discharging our imperfect duty is not all that matters, and not what would guide a virtuous person in their practical deliberations in high-stakes contexts. A virtuous person has greater moral ambition and scrupulosity (as do a great many ordinary people who are, in this respect, at least more virtuous than those who lack this aspiration). They are concerned to make a morally near-best decision in high-stakes contexts, e.g. regarding how to direct their philanthropic resources, and this is a very good and fitting concern to have! Crucially, this is not just the narrow question of what is axiologically best, since that might neglect some important non-consequentialist considerations (if you believe in such things). Rather, it addresses the theory-neutral question of moral ambition: what is truly most choice-worthy, all things considered, with each consideration given its due weight.
- ^
I think Unruh's quotation of Wiggins here is misleading. Wiggins revises Ross's conception of beneficence in two ways, which he explicitly separates on p. 276. The first is personalizing it, as I discuss in the main post, to make it "not the same as the requirement to 'make the world a better place'." This is the revision that is relevant to Unruh's purposes. The second revision is to claim that "Beneficence is by its true nature restricted," by which he means that we do not have outweighed reasons, but rather no reasons of beneficence at all, to engage in (long-term optimal) atrocities: Wiggins's reasons of beneficence are, in effect, conditional on not violating prior duties. But since SSP is limited to cases in which there are no serious side-constraints, it is transparently compatible with this "restricted" conception of beneficence.
- ^
I think this is true even regarding future persons. Quick proof: there is nothing objectionably "abstract" about our reasons of non-maleficence to avoid bringing miserable lives into existence. When we successfully act on such a reason, we may do so "for the sake of the possible future person in question" (in the relevant, perhaps somewhat loose, sense), even though our success means that the "person in question" never actually exists. So, in exactly the same way, there is no principled barrier to our likewise having reasons of beneficence that speak in favor of bringing happy lives into existence, for the sake (in the relevant, perhaps somewhat loose, sense) of those very individuals.
- ^
Unruh expands: "On the view that duties of beneficence are imperfect, agents should aid others often enough. However, there is no general duty to choose an option that is best for others, even when choosing this option is not particularly costly for agents." Again, if you replace "not particularly costly" with "literally costless in every way", it better brings out the in-principle irrationality of the view.
- ^
Though at one point she asserts that "the strength of justifying reasons to aid plausibly depends on factors such as our commitments, intentions, and relationships to present and future people, in addition to or instead of the number of interests at stake." If we opt for the more moderate "in addition to" (rather than the implausibly extreme "instead of") reading, then I think what Unruh is pointing to here are just further reasons, stemming from special relationships and such, which can be understood as distinct from (and in no way undermining of) our stakes-sensitive impartial reasons of beneficence.
Jakob Lohmar @ 2025-10-20T10:51 (+3)
I couldn't agree more. Moral philosophers tend to distinguish the 'axiological' from the 'deontic' and then interpret 'deontic' in a very narrow way, which leaves out many other (in my opinion: more interesting) normative questions. This is epistemically detrimental, especially when combined with the misconception that 'axiology is only for consequentialists'. It invites flawed reasoning of the kind: "consideration X may be important for axiology but since we're not consequentialists, that doesn't really matter to us, and surely X doesn't *oblige* us to act in a certain way (that would be far too demanding!), so we don't need to bother with X".
That said, I think there is still a good objection to the stakes-sensitivity principle, which is from Andreas Mogensen: full aggregation is true when it comes to axiology ('the stakes'), but it arguably isn't true with regard to choiceworthiness/reasons. Hence, it could be that an action has superb consequences, but that only gives us relatively weak reason to perform the action. That reason may not be strong enough to outweigh other non-consequentialist considerations such as constraints.
Richard Y Chappell 🔸 @ 2025-10-20T13:06 (+3)
Thanks! You might like my post, 'Axiology, Deontics, and the Telic Question' which suggests a reframing of ethical theory that avoids the common error. (In short: distinguish ideal preferability vs instrumental reasoning / decision theory rather than axiology vs deontics.)
I wonder if it might also help address Mogensen's challenge. Full aggregation seems plausibly true of preferability not just axiology. But then given principles of instrumental rationality linking reasons for preference/desire to reasons for action, it's hard to see how full aggregation couldn't also be true with regard to choiceworthiness. (But maybe he'd deny my initial claim about preferability?)
Jakob Lohmar @ 2025-10-20T14:24 (+3)
Thanks - also for the link! I like your notion of preferability and the analysis of competing moral theories in terms of this notion. What makes me somewhat hesitant is that the objects of preferability, in your sense, seem to be outcomes or possible worlds rather than the to-be-evaluated actions themselves? If so, I wonder if one could push back against your account by insisting that the choiceworthiness of available acts is not necessarily a function of the preferability of their outcomes since... not all morally relevant features of an action are necessarily fully reflected in the preferability of its outcome?
But assuming that they are, I guess that non-consequentialists who reject full aggregation would say that the in-aggregate larger good is not necessarily preferable. But I'm not sure. I agree that this seems not very intuitive.
Richard Y Chappell 🔸 @ 2025-10-20T15:43 (+3)
Right, so one crucial clarification is that we're talking about act-inclusive states of affairs, not mere "outcomes" considered in abstraction from how they were brought about. Deontologists certainly don't think that we can get far merely thinking about the latter, but if they assess an action positively then it seems natural enough to take them to be committed to the action's actually being performed (all things considered, including what follows from it). I've written about this more in Deontology and Preferability. A key passage:
If you think that other things besides impartial value (e.g. deontic constraints) truly matter, then you presumably think that moral agents ought to care about more than just impartial value, and thus sometimes should prefer a less-valuable outcome over a more-valuable one, on the basis of these further considerations. Deontologists are free to have, and to recommend, deontologically-flavored preferences. The basic concept of preferability is theory-neutral on its face, begging no questions.
Jakob Lohmar @ 2025-10-20T15:58 (+3)
Yeah that makes sense to me. I still think that one doesn't need to be conceptually confused (even though this is probably a common source of disagreement) to believe both that (i) one action's outcome is preferable to the other action's outcome even though (ii) one ought to perform the latter action. For example, one might think the former outcome is overall preferable because it has much better consequences. But conceptual possibility aside, I agree that this is a weird view to have. At the very least, it seems that all else equal one should prefer the outcome of the action that one takes to be the most choiceworthy. Not sure if it has some plausibility to say that this doesn't necessarily hold if other things are not equal - such as in the case where the other action has the better consequences.
Richard Y Chappell 🔸 @ 2025-10-20T16:06 (+3)
My main puzzlement there is how you could think that you ought to perform an act that you simultaneously ought to hope that you fail to perform, subsequently (and predictably) regret performing, etc. (I assume here that all-things-considered preferences are not cognitively isolated, but have implications for other attitudes like hope and regret.) It seems like there's a kind of incoherence in that combination of attitudes, that undermines the normative authority of the original "ought" claim. We should expect genuinely authoritative oughts to be more wholeheartedly endorsable.
Jakob Lohmar @ 2025-10-20T16:21 (+1)
That seems like a strange combination indeed! I will need to think more about this...