Consequentialism and Cluelessness
By Richard Y Chappell @ 2022-10-17T18:57 (+32)
This is a linkpost to https://rychappell.substack.com/p/consequentialism-and-cluelessness
TL;DR: Invisible high stakes don't undermine ordinary expected value verdicts. And even if they did, that wouldn't undermine consequentialism, because the question of what fundamentally matters is epistemically prior to the question of whether we can reliably track it. Moreover, one cannot plausibly deny that invisible consequences still matter, in principle.
James Lenman's "Consequentialism and Cluelessness" presents an influential epistemic argument against consequentialism. Roughly:
- We've no idea what the long-term ramifications of any of our actions will be.
- So we've no idea what consequentialist reasons for action we have.
- But an adequate ethical theory must guide us.
So: Consequentialism is not an adequate ethical theory.
I think each of those premises is probably false (especially the last two).
Indecipherable clues
1. Longtermist Clues
Though I won't dwell on the point here, longtermists obviously believe that there are at least some high-impact actions where we can be reasonably confident that they will improve the long-term future. Examples might include (i) working to avert existential risk, (ii) moral circle expansion and other efforts to secure "moral progress" by improving society-wide ethics, and (iii) generally improving civilizational capacities (through education, economic growth, technological breakthroughs, etc.), in ways that don't directly increase existential risks.
But in what follows, I'll put such cases aside and focus on ordinary acts (e.g. saving a child's life) with only short-term foreseeable effects, and unknowable long-term causal ramifications (for familiar reasons to do with the extreme fragility of who ends up being conceived, such that even tiny changes may presumably ripple out and completely transform the future population).
2. Defending the Expected Value Response
The obvious response to cluelessness worries is to move to expectational consequentialism: if we've no idea what the long-term consequences will be, then these "invisible" considerations are (given our evidence) simply silent, speaking neither for nor against any particular option. So the visible reasons will trivially win out. For example, saving a childs life has an expected value of one life saved, and pointing to our long-term ignorance doesn't change this.
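To put the structure of this response in miniature, here's a sketch with placeholder numbers (the acts and the "one life saved" unit are just stand-ins):

```python
# A minimal sketch of the expected-value response, with placeholder numbers.
# Each act's EV splits into a visible (foreseeable) term and an invisible
# (long-term) term. Under cluelessness the evidence is silent on the
# invisible term, so it is the same for every act and cannot break ties.

VISIBLE_EV = {"save_child": 1.0, "do_nothing": 0.0}  # foreseeable lives saved
INVISIBLE_EV = 0.0  # evidence is silent: no reason to expect either sign

def expected_value(act: str) -> float:
    return VISIBLE_EV[act] + INVISIBLE_EV

best = max(VISIBLE_EV, key=expected_value)
print(best)  # -> save_child: the visible reason trivially wins out
```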
Lenman is unimpressed with this response, but the four reasons he offers (on pp. 353-359) strike me as thoroughly confused.
First, he suggests that expectational consequentialists must rely upon some controversial probabilistic indifference principles (coming up with a principled way of partitioning the possibilities, and then assigning equal probability to each one), whereas it seems to me that no work at all is required because no competing reasons have been offered.
Perhaps the thought is that speculative long-term ramifications could be produced to count against the expected value of saving the child. (Like, "What if the child turns out to be an ancestor of future-Hitler?") In response, the agent may say, "That's no reason at all unless you can show that the future risk is greater if I perform this act than if I don't." Why is the burden on the consequentialist agent to refute such utterly baseless speculation? I don't think I need to commit to any particular principle of indifference in order to say that I haven't yet been presented with any compelling reason to revise my expected value estimate of +1 life saved.
[Update: Hilary Greaves offers the stronger response that some restricted principle of indifference seems clearly warranted in these cases, notwithstanding whatever problems might apply to a fully general such principle. Whereas I've argued that it's surely defensible to take EV to be unaffected by simple cluelessness, Greaves argues that it's plausibly rationally mandatory. It would seem completely crazy to have asymmetric expectations in such cases, after all.]
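To see why symmetric expectations leave the verdict untouched, here's a minimal sketch (the 1e9 ramification is an arbitrary stand-in for the astronomical invisible stakes):

```python
# Under a symmetric credence, any huge long-term ramification R is exactly
# as likely as -R, whichever act we perform. So the invisible term is zero
# in expectation, and the EV gap between acts equals the visible stakes.

scenarios = [(-1e9, 0.5), (+1e9, 0.5)]  # sign-unknown ramification, by symmetry

def invisible_ev() -> float:
    return sum(ramification * p for ramification, p in scenarios)  # = 0.0

ev_save = 1.0 + invisible_ev()  # visible +1 life saved
ev_skip = 0.0 + invisible_ev()
print(ev_save - ev_skip)  # -> 1.0: the gap is just the visible stakes
```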
Second, Lenman assumes that, against the background of astronomical invisible stakes, the visible reason to save a life must be, for consequentialists, "extremely weak", merely "a drop in the ocean". But why the focus on relative stakes? In absolute terms, saving a life is incredibly important. The presence of even greater invisible stakes doesn't change the absolute weight of this reason in the slightest.
Perhaps Lenman is thinking that the strength of a consequentialist reason must be proportionate to the action's likelihood of serving the ultimate goal of maximizing overall value. Since the value of one life is vanishingly unlikely to sway the scales when comparing the long-term value of each option, to save one life can only be an "extremely weak" reason to pick one option over another. But the assumption here is simply false. The strength of a consequentialist reason is given by its associated (expected) value in absolute terms: the size of the drop, not the size of the ocean.
Third, Lenman objects:
It is surely a sophistry to treat a zero expected value that reflects our knowledge that an act will lack significant consequences as parallel in significance to one that reflects our total ignorance of what such consequences (although we know they will be massive) will be.
Like the separateness of persons objection, this mistakenly assumes that anything significant must result in changes to our verdicts about acts, when often fitting attitudes are better suited to reflect such significance. Consider: we obviously should feel vastly more angst/ambivalence, and strongly wish that more info was available, in the "total uncertainty" case than in the "known zero" case. Why isn't that a sufficient difference in "significance"? I don't see any reason here to think that it calls for a different decision to be made (assuming that no feasible investigative options are available; in practice, of course, the astronomical stakes instead motivate at least attempting longtermist investigation).
Fourth and finally, Lenman raises the possibility that perhaps some (less significant) acts may avoid having radical causal ramifications, resulting in non-uniform "scaling down" of our moral reasons, which would be awkward (absurdly yielding stronger consequentialist reasons to do more trivial acts). But again, as stressed in #2 above, there should be no "scaling down" at all: that suggestion rested on a total misunderstanding of the reasons posited by any sensible consequentialism.
Wrapping up: Why trust expected value?
Perhaps the heart of Lenman's objection can be restated as a challenge: given astronomical invisible stakes, why trust visible expected value in the slightest? There's vanishingly small reason to think that the EV-maximizing act is also the value-maximizing act, and surely what consequentialists ultimately care about is actual value rather than expected value.
But I think this misses the point of being guided by expected value. As Frank Jackson stressed in his paper on "Decision-Theoretic Consequentialism", in certain risky cases we may know that a "safe" option will not maximize value, yet it may nonetheless maximize expected value (if the alternatives risk disaster), and is for that very reason the prudent and rational choice. In other cases, we may be required to give up a "sure thing" for a slight chance of securing a vastly better outcome, even if the outcome will then be almost certainly worse. So the point of being guided by expected value is not to increase our chance of doing the objectively best thing, nor to make a good result highly likely.
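A schematic version of the kind of case Jackson had in mind (the drugs and payoff numbers here are my own stand-ins, not his):

```python
# Drug A is a certain partial cure. Exactly one of B and C is a complete
# cure and the other is lethal, but our evidence cannot tell us which.
# Each act maps to a list of (probability, value) outcome pairs.

outcomes = {
    "A": [(1.0, 0.5)],                # certain partial cure
    "B": [(0.5, 1.0), (0.5, -10.0)],  # maybe complete cure, maybe death
    "C": [(0.5, -10.0), (0.5, 1.0)],  # mirror image of B
}

def ev(act: str) -> float:
    return sum(p * v for p, v in outcomes[act])

for act in outcomes:
    print(act, ev(act))
# A: 0.5, B: -4.5, C: -4.5. The "safe" option A maximizes expected value
# even though we *know* it doesn't maximize value: one of B or C does.
```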
It's difficult to express precisely what the point is. But roughly speaking, it's a way to promote value as best we can given the information available to us (balancing stakes and probabilities). And one important feature of maximizing expected value is that we cannot expect any subjectively-identifiable alternative to do better in the limit (that is, imagining like decisions being repeated a sufficient number of times, across different possible worlds if need be), at least for object-given reasons.1 After all, if there were an identifiably better alternative, it would maximize expected value to follow it. And if Lenman's critique were accurate, it would imply not that expected value is untrustworthy, but rather that (contrary to initial appearances) saving a life lacks positive expected value after all.
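The "in the limit" point can be illustrated with a toy simulation (payoffs invented; the long shot is almost always worse on any given occasion, yet no rival rule can expect to beat it on average):

```python
import random

random.seed(0)  # any seed; payoffs below are invented for illustration

SURE_THING = 5.0  # certain modest payoff: EV = 5

def long_shot() -> float:
    # 1% chance of 1000, else nothing: EV = 10, but ~99% of the time it loses
    return 1000.0 if random.random() < 0.01 else 0.0

trials = 1_000_000  # "like decisions repeated a sufficient number of times"
avg = sum(long_shot() for _ in range(trials)) / trials
print(avg, "vs", SURE_THING)  # ~10.0 vs 5.0: the identifiable rival rule
# (always take the sure thing) does worse in the long run
```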
Put this way, I don't think it makes sense to question our trust in expected value. (It's the practical analogue of asking, "Why believe in accordance with the evidence, when evidence can be misleading?" In either case, the answer to "Why be rational?" is just: it's the best we can non-accidentally do!) If the question is instead asked, "Why think that saving a life has positive expected value?" then I just point to the prior section of this blog post. (In short: why not? It's visibly positive, and invisible considerations can hardly be shown to count against it!)
I get that cluelessness in the face of massive invisible long-term stakes can be angst-inducing. It should make us strongly wish for more information, and motivate us to pursue longtermist investigation if at all possible. But if no such investigations prove feasible, we should not mistake this residual feeling of angst for a reason to doubt that we can still be rationally guided by the smaller-scale considerations that we do see. To undermine the latter, it is not enough for the skeptic to gesture at the deep unknown. Unknowns, as such, are not epistemically undermining (greedily gobbling up all else that is known). To undermine an expected value verdict, you need to show that some alternative verdict is epistemically superior. Proponents of the epistemic objection (like skeptics in many other contexts)2 cannot do this.
3. The Possibility of Moral Cluelessness
Suppose I'm wrong about all of the above, and in fact we have no reason at all to think that saving a child's life does more good than harm (or is positive in expectation). That would be a sad situation. But it hardly seems kosher to infer from this that doing good isn't what matters. There's no metaphysical guarantee that we're in a position to fruitfully follow moral guidance.
It's surely conceivable that some agents (in some possible worlds) may be irreparably lost on practical matters. Any agents in the benighted epistemic circumstances (of not having the slightest reason to think that any given action of theirs will be positive or negative on net) are surely amongst the strongest possible candidates for being in this deplorable position. So if we conclude (or stipulate) that we are in those benighted epistemic circumstances, we should similarly conclude that we are the possible agents who are irreparably practically lost.
To suggest that we instead revise our account of what morally matters, merely to protect our presumed (but unearned) status as not totally at sea, strikes me as a transparently illegitimate use of "reflective equilibrium" methodology, akin to wishfully inferring that causal determinism must be false on the basis of incompatibilism plus a belief in free will.
Sometimes inferences are directionally constrained by considerations of epistemic priority: against a backdrop of incompatibilism, you can infer "no free will" from causal determinism, but not "no causal determinism" from free will. The question whether causal determinism is true is epistemically prior to the question whether (given incompatibilism) we have free will. In a similar way, I suggest, the question of what morally matters is clearly epistemically prior to the question of whether we have epistemic access to what morally matters. To instead let the latter question settle the former strikes me as plainly perverse.
Ethics and What Matters
So what does matter? To prevent cluelessness from becoming a puzzle for everyone, Lenman suggests that non-consequentialist agents "should ordinarily simply not regard [invisible consequences] as of moral concern." This seems crazy wrong to me.
Suppose you're given a magic box from the Gods, and told only that if you open it, one of two things will happen: either (a) it will cause a future holocaust, or (b) it will prevent a future holocaust. Lenman's view seems to be that you should regard this whole turn of events as a matter of indifference. I think it's much more plausible that you should care greatly about which outcome eventuates, and so naturally feel immense angst over the whole thing. Given your ineradicable cluelessness about the outcomes, the box doesn't affect what actions you should perform. But it surely is a matter of concern!
You should, for example, strongly wish that you had more info about which outcome would result from opening the box. Why would this be so, if invisible consequences were "simply not… of moral concern"? I think we should prefer that invisible consequences be rendered visible, precisely because (i) this would help us to bring about better ones, and (ii) we should care about that.
In a confusing passage, Lenman acknowledges that invisible consequences matter, just not morally:
Of course, the invisible consequences of action very plausibly matter too, but there is no clear reason to suppose this mattering to be a matter of moral significance any more than the consequences, visible or otherwise, of earthquakes or meteor impacts (although they may certainly matter enormously) need be matters of, in particular, moral concern. There is nothing particularly implausible here. It is simply to say, for example, that the crimes of Hitler, although they were a terrible thing, are not something we can sensibly raise in discussion of the moral failings or excellences of [someone who saved the life of Hitler's distant ancestor].
This is a strange use of "moral significance". Moral agents clearly ought to care about earthquakes, meteor strikes, and future genocidal dictators. (At a minimum, we ought to prefer that there be fewer of such things, as part of our beneficent concern for others generally.) An agent who was truly indifferent to these things would not be a virtuous agent: their indifference reveals a callous disregard for future people. So it could certainly constitute a "moral failing" to fail to care about such harmful events.
On the other hand, if Lenman really just means to say that whether unforeseeable consequences eventuate as a matter of fact shouldn't affect our assessment of a person's "moral failings or excellences", then this seems a truism that in no way threatens consequentialism. It's a familiar point that many forms of agential assessment (e.g. rationality, virtue, etc.) are "internalist", supervening on the intrinsic properties of the agent, and not on what happens in the external world, beyond their control. While I've long been frustrated that other consequentialists tend to downplay or neglect this, and think that saying plausible things here requires going beyond "pure consequentialism" in some respects (we need to make additional claims about fitting attitudes, for example), these additional claims are by no means in conflict with the core claims of pure consequentialism. So there really isn't any problem here; at least, none that can't easily be fixed just by saying a bit more.
Conclusion
I've argued that the cluelessness objection is deeply misguided. Invisible high stakes don't undermine ordinary expected value verdicts. And even if they did, that wouldn't undermine consequentialism, because the question of what fundamentally matters is epistemically prior to the question of whether we can reliably track it. Lenman's non-consequentialist alternative proposal seems vicious, unless interpreted so narrowly that the relevant claim becomes trivial, and compatible with expectational consequentialism all along.3
Footnotes
1 Cf. an evil demon threatening to blow up the world if you use expected value as a decision procedure. We can bracket such "state-given" reasons for present purposes, as they aren't relevant to the question of whether EV is a rational decision-procedure. The evil demon case is simply one of Parfitian "rational irrationality".
2 I raise a similar objection to Sharon Street's "moral lottery" objection to moral realism.
3 Thanks to participants in the "Cluelessness" reading group at GPI last week, for helpful discussion.
MichaelStJules @ 2022-10-20T03:07 (+6)
I think I basically agree with all your responses, but I also think this misses a more important case of cluelessness, specifically complex cluelessness. Saving a child has impacts on farmed animals, wild animals, economic growth and climate change, some of which could be negative and some of which could be positive. How do you weigh them all together non-arbitrarily to come to the verdict that it's definitely good in expectation, or to the verdict that it's definitely bad in expectation? This isn't a case of having no reasons either way (or all reasons pointing in one direction), but of having important reasons each way that are too hard to weigh against one another in a way that's justified, non-arbitrary and defensible.
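To make the worry concrete (all numbers invented): two agents with rival but individually defensible credences can assign the very same act expected values of opposite sign.

```python
# A toy illustration of complex cluelessness: the same act's side effects,
# weighed under two rival-but-defensible credence assignments, yield
# expected values of opposite sign. All values and credences are invented.

side_effects = {  # effect -> value if it obtains
    "child's life saved": +1.0,
    "more farmed-animal suffering": -3.0,
    "extra economic growth": +2.5,
}

rival_credences = {  # worldview -> credence that each effect obtains
    "worldview_1": {"child's life saved": 1.0,
                    "more farmed-animal suffering": 0.2,
                    "extra economic growth": 0.3},
    "worldview_2": {"child's life saved": 1.0,
                    "more farmed-animal suffering": 0.6,
                    "extra economic growth": 0.1},
}

for name, creds in rival_credences.items():
    total_ev = sum(creds[e] * v for e, v in side_effects.items())
    print(name, round(total_ev, 2))
# worldview_1: +1.15, worldview_2: -0.55 -- the sign depends on which
# (arguably equally defensible) set of precise credences you adopt.
```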
MichaelStJules @ 2022-10-20T19:59 (+4)
It would also be surprising for the direct effects on the child to be a tie-breaker if you have precise probabilities, given how much more is at stake.
Richard Y Chappell @ 2022-10-21T18:17 (+3)
Seems natural to just go meta, treating the hard-to-assess determinants of expected value as akin to hard-to-discover empirical facts, and maximizing meta-expected value as one's "best attempt" to manage this additional uncertainty.
I'm less sure about this, but it seems like the defense of EV against simple cluelessness could carry over to defend meta-EV against complex cluelessness? E.g. in the long run (and across relevant possible worlds), we'd expect these agents to do better on average than agents following any other subjectively-accessible decision procedure.
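Concretely, a toy version of what I have in mind (my own formalization, with invented numbers):

```python
# A rough sketch of "meta-expected value": put a credence over rival models
# of the hard-to-assess determinants, and average the first-order EVs that
# those models produce. Model names, credences, and EVs are all invented.

models = {  # model -> (credence in model, EV of saving the child under it)
    "animal_effects_dominate": (0.3, -0.5),
    "growth_effects_dominate": (0.5, +2.0),
    "direct_effect_only":      (0.2, +1.0),
}

meta_ev = sum(cred * ev for cred, ev in models.values())
print(round(meta_ev, 2))  # 0.3*-0.5 + 0.5*2.0 + 0.2*1.0 = +1.05
```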
MichaelStJules @ 2022-10-22T18:25 (+10)
I'm not sure what you mean by maximizing meta-expected value. How is this different from just maximizing expected value?
I'd claim that the additional uncertainty is unquantifiable, or at least that no single set of precise probabilities (a single precise probability distribution over outcomes for each act) can be justified over all other alternatives. There's sometimes no unique best attempt, and no uniquely best way to choose between them or weigh them. Sometimes there's no uniform prior, and sometimes there are infinitely many competing candidates that might be called uniform, because of different ways to parametrize your distribution. At the extreme, for an idealized rational agent, you need to have a universal prior, but there are multiple, and they depend on arbitrary parametrizations. How do you pick one over all others?
I do think it's possible we aren't always clueless, depending on what kinds of credences you entertain.
FWIW, my preferred approach is something like this, although maybe we can go further: https://forum.effectivealtruism.org/posts/Mig4y9Duu6pzuw3H4/hedging-against-deep-and-moral-uncertainty
It builds on https://academic.oup.com/pq/article-abstract/71/1/141/5828678
Also this might be useful in some cases: https://forum.effectivealtruism.org/posts/f4sep8ggXEs37PBuX/even-allocation-strategy-under-high-model-ambiguity
Holly Morgan @ 2022-10-18T13:18 (+5)
Strong upvoting because I think these are good, important points with an excellent TL;DR and title.
As Frank Jackson stressed in his paper on "Decision-Theoretic Consequentialism", in certain risky cases we may know that a "safe" option will not maximize value, yet it may nonetheless maximize expected value (if the alternatives risk disaster), and is for that very reason the prudent and rational choice.
I suspect a lot of the 'systemic change' critique of donating to the Against Malaria Foundation is motivated by this kind of thinking. You'll often hear people say something like, "Bed-nets alone will never eliminate poverty and injustice!" as if accepting that claim would entail that buying bed-nets is worse than taking action that has a (more plausible) shot at transforming the entire system. Maximising expected value does not always mean maximising the chance of a perfect world.
(I also think that sometimes the reasoning in these cases is something closer to rule consequentialism, which I have more sympathy for. And I'm sure sometimes they're also using expected value, just plugging in different numbers.)
I get that cluelessness in the face of massive invisible long-term stakes can be angst-inducing.
The feeling I struggle with the most here is paralysis in the face of a seemingly relentless string of crucial considerations flipping the sign of the value of the path I'm on. (There's a great line in the Zhuangzi that captures this nicely for me: "Confucius went along for sixty years and transformed sixty times. What he first considered right he later considered wrong. He could never know if what he presently considered right were not fifty-nine times wrong.") Your arguments still work in such cases - there's still no need for paralysis, but emotionally speaking it's very tempting!