Refusing to Quantify is Refusing to Think (about trade-offs)
By Richard Y Chappell @ 2024-11-18T18:03 (+46)
This is a linkpost to https://www.goodthoughts.blog/p/refusing-to-quantify-is-refusing
TL;DR: rough estimates are better than no estimates. Refusals to quantify often hide that one is implicitly (and unjustifiably) counting some interests for zero.
Introduction
Inspired by Bentham's Bulldog, I recently donated $1000 to the Shrimp Welfare Project. I don't know that it's literally "the best charity" (longtermist interventions presumably have greater expected value), but I find it psychologically comforting to "diversify" my giving,[1] and the prospect of averting ~500 hours[2] of severe suffering per dollar seems hard to pass up. If you have some funds available that aren't otherwise going to an even more promising cause, consider getting in on the #shrimpact!
The fact that most people would unreflectively dismiss shrimp welfare as a charitable cause shows why effective altruism is no "truism". Relatively few people are genuinely open to promoting the good (and reducing suffering) in a truly cause-neutral, impartial way. For those who are, we should expect the lowest-hanging fruit to be causes that sound unappealing. As a result, if someone gives exclusively to conventionally appealing causes, that's strong evidence that they aren't seriously trying to do the most impartial good. If you're serious about doing more good rather than less, then you should be open to at least some weird-sounding stuff.[3]
And you should, of course, seriously try to do more good rather than less, at least some of the time, with some of your resources. (There are tricky questions about just how much of your time and resources should go towards optimizing impartial beneficence. But the correct answer sure ain't zero.)[4]
A bad objection
In the remainder of this post, I want to discuss a terrible objection that people commonly appeal to when trying to rationalize their knee-jerk opposition to "weird" EA causes (like shrimp welfare or longtermism).
"Different things can't be precisely quantified or compared"
This has got to be one of the most common objections to EA-style cost-effectiveness analyses, and it is so deeply confused. Oddly, I can't recall seeing anyone else explain why it's so confused. (Quick answer: rough estimates are better than no estimates.)
The problem, in a nutshell, is that quantification enables large-scale comparison, and such comparison is needed in order to make high-stakes tradeoffs in an informed way. Tradeoffs, in turn, are essential to practical rationality. We can't avoid them: different values are in conflict, and can't all be jointly satisfied. We have to choose, or "trade off", between them. The only question is how. We can do so openly and honestly, by seriously trying to assess their comparative value or importance. Or we can do so dishonestly, with our heads in the sand, pretending that one of the values doesn't have to be counted at all.
Now, when people complain that EA quantifies things (like cross-species suffering) that allegedly "can't be precisely quantified," what they're effectively doing is refusing to consider that thing at all. Because the realistic alternative to EA-style quantitative analysis is vibes-based analysis: just blindly going with what's emotionally appealing at a gut level. And many things that are difficult to precisely quantify (like the suffering of non-cute animals) lack emotional appeal. They'll be completely neglected in a vibes-based analysis. That is, in effect, to give them precisely zero weight.
To address the objection, consider the datum:
(Less Wrong): It's better to be slightly wrong than to be very wrong about moral weights and priorities.
Something I find frustrating is that many people seem to instead endorse:
(Ostrich Thinking): It's better to ignore a question than to answer it imperfectly.
Ostrich Thinking is very unwise, because your unreflective assumptions could easily be even further from the truth than the imperfect answers you would reach by giving serious thought to a problem. Compared to ignoring numbers, even the roughest quantitative model or "back of the envelope" calculation can help us to be vastly less wrong.
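To see what even the crudest quantification buys you, here is a minimal back-of-the-envelope sketch (in Python, purely for illustration). The ~500 hours figure echoes the rough estimate cited above; the moral-weight discount and the figures for a conventional "safe" charity are hypothetical placeholders made up for the example, not anyone's published estimates.

```python
# Minimal back-of-the-envelope sketch; all inputs are rough or hypothetical.

shrimp_hours_averted_per_dollar = 500   # rough figure cited in the post
shrimp_moral_weight = 0.03              # hypothetical discount relative to human suffering

conventional_hours_averted_per_dollar = 2   # hypothetical figure for a "safe" charity
conventional_moral_weight = 1.0             # human suffering counted at full weight

shrimp_value = shrimp_hours_averted_per_dollar * shrimp_moral_weight
conventional_value = conventional_hours_averted_per_dollar * conventional_moral_weight

print(f"Shrimp (rough):  {shrimp_value:.1f} weighted hours of suffering averted per $")
print(f"Conventional:    {conventional_value:.1f} weighted hours of suffering averted per $")

# Ostrich Thinking amounts to silently setting shrimp_moral_weight = 0.
# Even if the crude weight above is off by an order of magnitude, the
# explicit estimate is likely to be far less wrong than an implicit zero.
```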
"Your analysis requires a lot of assumptions..."
An especially popular form of Ostrich Thinking combines:
- Rational satisficing: the crazy view that there's no reason to do more good once you've identified a "good enough" option; and
- Certainty bias: preferring the near-certainty of some positive impact over an uncertain prospect with much greater expected value.
Combining these two bad views yields the result that you should definitely donate to a "safe" option like GiveWell-recommended charities, rather than longtermist or animal welfare causes that involve a lot more uncertainty.[5] This view might be expressed by saying something like, "Prioritizing X is awfully speculative / depends on a lot of questionable assumptions..." But it's important to understand that this actually gets things backwards.
Firstly, note that we should not simply be aiming to do a little good with certainty. We should always prefer to do more good than less, all else equal; and we should tolerate some uncertainty for the sake of greater expected benefits. (Both rational satisficing and certainty bias are deeply unreasonable.) So, the question that properly guides our philanthropic deliberations is not "How can I be sure to do some good?" but rather, "How can I (permissibly) do the most (expected) good?"
You cannot offer an informed answer to this question without forming judgments on "speculative" matters (from AI safety to insect sentience). This renders these topics puzzles for everyone. In order to be confident that global health charities are a better bet than AI safety or shrimp welfare, you need to assign negligible credence to the assumptions and models on which these other causes turn out to be orders of magnitude more cost-effective. That's a big assumption! It's actually much more epistemically modest to say, "I split my credence across a wide range of possibilities, some of which involve so much potential upside that even moderate credence in them suffices to make speculative cause X win out."
Conventional Dogmatism
It's worth reiterating this point, because even smart people often seem to miss it. It's very conventional to think, "Prioritizing global health is epistemically safe; you really have to go out on a limb, and adopt some extreme views, in order to prioritize the other EA stuff." This conventional thought is false. The truth is the opposite. You need to have some really extreme (near-zero) credence levels in order to prevent ultra-high-impact prospects from swamping more ordinary forms of do-gooding. As I previously explained:
It's essentially fallacious to think that "plausibly incorrect modeling assumptions" undermine expected value reasoning. High expected value can still result from regions of probability space that are epistemically unlikely (or reflect "plausibly incorrect" conditions or assumptions). If there's even a 1% chance that the relevant assumptions hold, just discount the output value accordingly. Astronomical stakes are not going to be undermined by lopping off the last two zeros.
Tarsney's Epistemic Challenge to Longtermism is so much better at this [than Thorstad]. As he aptly notes, as long as you're on board with orthodox decision theory (and so don't disproportionately discount or neglect low-probability possibilities), and not completely dogmatic in refusing to give any credence at all to the longtermist-friendly assumptions (robust existential security after time of perils, etc.), reasonable epistemic worries ultimately aren't capable of undermining the expected value argument for longtermism.
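To illustrate the arithmetic in the quoted passage, here is a minimal sketch of "discount the output by your credence". The payoff, the credence, and the modest "sure thing" it is compared against are all hypothetical placeholders, not figures from the post or from Tarsney's paper.

```python
# Minimal sketch of discounting a conditional payoff by one's credence.
# All numbers are hypothetical placeholders.

value_if_assumptions_hold = 1e9   # huge payoff, conditional on contested assumptions
credence_in_assumptions = 0.01    # even just a 1% chance that they hold

expected_value = credence_in_assumptions * value_if_assumptions_hold
print(f"Discounted expected value: {expected_value:,.0f}")   # 10,000,000

# "Lopping off the last two zeros" still leaves an expected value that can
# dwarf a modest-but-certain alternative:
sure_thing = 1_000
print(expected_value > sure_thing)   # True
```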
The case for shrimp welfare isn't quite so astronomical, but the numbers are nonetheless large enough to accommodate plenty of uncertainty before the expected value dips below that of more typical charities. So it would seem similarly epistemically reckless to dismiss it as a cause area (compared to typical charities), without careful analysis.[6]
Conclusion
Strive for good judgment with numbers. Be wary of misleading appeals to complexity. Like the intellectual charlatans who use big words to hide their lack of ideas, moral charlatans send false signals of moral depth with their dismissive talk of "oversimplified quantitative models", as though they had a more sophisticated alternative in their back pocket. But they don't. Their alternative is unreflective vibes and Ostrich Thinking. They imagine that ignoring key factors (implicitly counting them for zero) is somehow more "sophisticated" or epistemically virtuous than a fallible estimate. Don't fall for it. Better yet, share this corrective the next time you see such Ostrich Thinking in the wild: refusing to quantify is refusing to think.
While you're at it, take care to avoid the conventional dogmatism that regards ultra-high-impact as impossible. Certainty bias can feel like you're "playing it safe" (you're minimizing the risk of failing to make any difference), but is that really the most important kind of risk? Be aware of other respects in which it can be quite wildly reckless to pass up better opportunities. For example, it can be morally reckless to ignore risks of extremely bad outcomes (e.g. extinction or long-term dystopias). And, as I've explained in this post, it can be epistemically "reckless" (really going out on a limb!) to assign extreme (near-zero) credence to plausible possibilities involving ultra-high impact. As long as you're broadly open to expected value reasoning (as you plainly should be), even a fairly small chance of ultra-high impact can be well worth pursuing.
- ^
I think it's easier to give to high-EV "longshots" if you don't feel like all your eggs are in one basket, even if the "one basket" approach technically has greater expected value. But YMMV.
- ^
Or maybe it's more like ~5000 hours, if the stunners are used for 10 years?
- ^
Again, balance it out with a well-rounded charity portfolio if you need to. Whatever helps you to get higher expected impact than you otherwise would.
- ^
If anyone's aware of an argument to the contrary (that zero is better than even just, say, 1% optimizing impartial beneficence), I'd love to hear it. Many criticisms of EA rely upon the "all or nothing" fallacy, and simply argue that utilitarianism (the most totalizing, extreme form that EA could conceivably take) is unappealing, as if that would somehow entail the wholesale rejection of optimizing impartial beneficence.
- ^
To be clear, I'm a big fan of GiveWell and the charities it recommends! What I'm objecting to here is rather a particular pattern of reasoning that could lead one to mistakenly believe that GiveWell charities are clearly superior to animal welfare and longtermist alternatives. It's fine to personally prefer GiveWell charities, but any minimally intelligent and reflective person should appreciate that there are difficult open questions surrounding cause prioritization, and good grounds for judging some alternatives to be even more promising. So I think it's very unreasonable to be dismissive of any of the major EA cause areas.
- ^
It's not necessarily a problem to have extreme credences: some claims are very implausible, and should be assigned near-zero probability! But you should probably reflect carefully before forming such extreme views, especially when they're wildly at odds with the views of many experts who have looked more closely into the matter.
titotal @ 2024-11-19T00:21 (+17)
On multiple occasions, I've found a "quantified" analysis to be indistinguishable from a "vibes-based" analysis: you've just assigned those vibes a number, often one basically pulled out of your behind. (I haven't looked enough into shrimp to know if this is one of those cases).
I think it is entirely sensible to strongly prefer cause estimates that are backed by extremely strong evidence such as meta-reviews of randomised trials, rather than cause estimates based on vibes that are essentially made up. Part of the problem I have with naive expected value reasoning is that it seemingly does not take this entirely reasonable preference into account.
MichaelDickens @ 2024-11-19T03:26 (+20)
A vibes-based quantitative analysis has the virtue that it's easier to critique than a vibes-based non-quantitative analysis.
Richard Y Chappell @ 2024-11-19T02:42 (+8)
Yeah, I agree that one also shouldn't blindly trust numbers (and discounting for lack of robustness of supporting evidence is one reasonable way to implement that). I take that to be importantly different from - and much more reasonable than - the sort of "in principle" objection to quantification that this post addresses.
CB @ 2024-11-18T21:45 (+11)
Another comment: regarding the value of longtermist interventions, while I understand the numbers can be very high, my main uncertainty is that I'm not even sure a lot of common interventions have a positive impact.
For instance, is working against X-risks good when avoiding an S-risk would allow factory farming to continue? The answer will depend on many questions (will factory farming continue in the future, what is the impact of humanity on wild animals, what will happen regarding artificial sentience, etc.), none of which have a clear answer.
Reducing S-risks seems good, though.
JWS @ 2024-11-19T11:24 (+7)
I sort-of bounced off this one, Richard. I'm not a professor of moral philosophy, so some of what I say below may seem obviously wrong/stupid/incorrect - but I think that were I a philosophy professor I would be able to shape it into a stronger objection than it might appear at first glance.
Now, when people complain that EA quantifies things (like cross-species suffering) that allegedly "can't be precisely quantified," what they're effectively doing is refusing to consider that thing at all.
I don't think this would pass an ideological Turing Test. I think what people who make this claim are saying is often that previous attempts to quantify the good precisely have ended up having morally bad consequences. Given this history, perhaps our takeaway shouldn't be "they weren't precise enough in their quantification" and should be more "perhaps precise quantification isn't the right way to go about ethics".
Because the realistic alternative to EA-style quantitative analysis is vibes-based analysis: just blindly going with what's emotionally appealing at a gut level.
Again, I don't think this is true. Would you say that, before the publication of Famine, Affluence, and Morality, all moral philosophy was just "vibes-based analysis"? I think, instead, all of moral reasoning is in some sense 'vibes-based', and the quantification of EA is often trying to present arguments for the EA position.
To state it more clearly, what we care about is moral decision-making, not the quantification of moral decisions. And most decisions that have been made or have ever been made have been done so without quantification. What matters is the moral decisions we make, and the reasons we have for those decisions/values, not what quantitative value we place on said decisions/values.
the question that properly guides our philanthropic deliberations is not "How can I be sure to do some good?" but rather, "How can I (permissibly) do the most (expected) good?"
I guess I'm starting to bounce off this because I now view this as a big moral commitment which I think goes beyond simple beneficentrism. Another view, for example, would be contractualism, where what 'doing good' means is substantially different from what you describe here, but perhaps that's a base metaethical debate.
It's very conventional to think, "Prioritizing global health is epistemically safe; you really have to go out on a limb, and adopt some extreme views, in order to prioritize the other EA stuff." This conventional thought is false. The truth is the opposite. You need to have some really extreme (near-zero) credence levels in order to prevent ultra-high-impact prospects from swamping more ordinary forms of do-gooding.
I think this is confusing two forms of 'extreme'. Like in one sense the default 'animals have little-to-no moral worth' view is extreme for setting the moral value of animals so low as to be near zero (and confidently so at that). But I think the 'extreme' in your first sentence refers to 'extreme from the point of view of society'.
Furthermore, if we argue that quantifying expected value in quantitative models is the right way to do moral reasoning (as opposed to sometimes being a tool), then you don't have to accept the "even a 1% chance is enough", I could just decline to find a tool that produces such dogmatism at 1% acceptable. You could counter with "your default/status-quo morality is dogmatic", which sure. But it doesn't convince me to accept strong longtermism any more, and I've already read a fair bit about it (though I accept probably not as much as you).
While you're at it, take care to avoid the conventional dogmatism that regards ultra-high-impact as impossible.
One man's "conventional dogmatism" could be reframed as "the accurate observation that people with totalising philosophies promising ultra-high-impact have a very bad track record that have often caused harm and those with similar philosophies ought to be viewed with suspicion"
Sorry if the above was a bit jumbled. It just seemed this post was very unlike your recent Good Judgement with Numbers post, which I clicked with a lot more. This one seems to be you, instead of rejecting the "All or Nothing" Assumption, actually going "all in" on quantitative reasoning. Perhaps it was the tone with which it was written, but it really didn't seem to actually engage with why people have an aversion to over-quantification of moral reasoning.
Richard Y Chappell @ 2024-11-19T15:12 (+4)
Thanks for the feedback! It's probably helpful to read this in conjunction with 'Good Judgment with Numbers', because the latter post gives a fuller picture of my view whereas this one is specifically focused on why a certain kind of blind dismissal of numbers is messed up.
(A general issue I often find here is that when I'm explaining why a very specific bad objection is bad, many EAs instead want to (mis)read me as suggesting that nothing remotely in the vicinity of the targeted position could possibly be justified, and then complain that my argument doesn't refute this - very different - 'steelman' position that they have in mind. But I'm not arguing against the position that we should sometimes be concerned about over-quantification for practical reasons. How could I? I agree with it! I'm arguing against the specific position specified in the post, i.e. holding that different kinds of values can't -- literally, can't, like, in principle -- be quantified.)
I think this is confusing two forms of 'extreme'.
I'm actually trying to suggest that my interlocutor has confused these two things. There's what's conventional vs socially extreme, and there's what's epistemically extreme, and they aren't the same thing. That's my whole point in that paragraph. It isn't necessarily epistemically safe to do what's socially safe or conventional.
MichaelDickens @ 2024-11-19T00:35 (+6)
This has got to be one of the most common objections to EA-style cost-effectiveness analyses, and it is so deeply confused. Oddly, I can't recall seeing anyone else explain why it's so confused.
I suspect you could mathematically prove that, given certain assumptions, a cost-effectiveness analysis is the correct thing to do in theory. My intuition is that if you make some set of decisions, then this forces you to assign numeric cost-effectiveness values to the expected outcomes of those decisions, except you're doing it implicitly instead of explicitly. I think the proof for this would look something like the proof of the VNM utility theorem.
CB @ 2024-11-18T21:41 (+6)
Very interesting and well formulated! It highlights several hidden assumptions that can significantly reduce your ability to have an impact.
Indeed, from what I've seen, the (natural) tendency of giving very low moral value to other animals (e.g. less than 1/1000 that of a human) often stems from gut feeling, with added justifications afterwards.
Anthony DiGiovanni @ 2024-11-20T04:07 (+4)
If I understand correctly, you're arguing that we either need to:
- Put precise estimates on the consequences of what we do for net welfare across the cosmos, and maximize EV w.r.t. these estimates, or
- Go with our gut ... which is just implicitly putting precise estimates on the consequences of what we do for net welfare across the cosmos, and maximizing EV w.r.t. these estimates.
I think this is a false dichotomy,[1] even for those who are very confident in impartial consequentialism and risk-neutrality (as I am!). If (as suggested by titotal's comment) you worry that precise estimates of net welfare conditional on different actions are themselves vibes-based, you have option 3: Suspend judgment on the consequences of what we do for net welfare across the cosmos, and instead make decisions for reasons other than "my [explicit or implicit] estimate of the effects of my action on net welfare says to do X." (Coherence theorems don't rule this out.)
What might those other reasons be? A big one is moral uncertainty: If you truly think impartial consequentialism doesn't give you compelling reasons either way, because our estimates of net welfare are hopelessly arbitrary, it seems better to follow the verdicts of other moral views you put some weight on. Another alternative is to reflect more on what your reasons for action are exactly, if not "maximize EV w.r.t. vibes-based estimates." You can ask yourself, what does it mean to make the world a better place impartially, under deep uncertainty? If you've only looked at altruistic prioritization from the perspective of options 1 or 2, and didn't realize 3 was on the table, I find it pretty plausible that (as a kind of bedrock meta-normative principle) you ought to clarify the implications of option 3. Maybe you can find non-vibes-based decision procedures for impartial consequentialists. ETA: Ch. 5 of Bradley (2012) is an example of this kind of research, not to say I necessarily endorse his conclusions.
(Just to be clear, I totally agree with your claim that we shouldn't dismiss shrimp welfare - I don't think we're clueless about that, though the tradeoffs with other animal causes might well be difficult.)