Utility Cascades
By Aaron Gertler 🔸 @ 2020-07-29T07:16 (+25)
This is a linkpost to https://academic.oup.com/analysis/article-abstract/doi/10.1093/analys/anaa011/5834865
Original paper written by Max Khan Hayward (thanks for engaging with EA, and for doing so much work to further public philosophy!).
Thanks to Sci-Hub for the article access.
Epistemic status: This is a basic summary and commentary that I didn't spend much time on, and my analysis may be too simple // full of holes. I'd love to hear additional thoughts from anyone who finds this interesting!
Abstract
Utility cascades occur when a utilitarian’s reduction of support for an intervention reduces the effectiveness of that intervention, leading the utilitarian to further reduce support, thereby further undermining effectiveness, and so on, in a negative spiral.
This paper illustrates the mechanisms by which utility cascades occur, and then draws out the theoretical and practical implications.
Theoretically, utility cascades provide an argument that the utilitarian agent should sometimes either ignore evidence about effectiveness or fail to apportion support to effectiveness. Practically, utility cascades call upon utilitarians to rethink their relationship with the social movement known as Effective Altruism, which insists on the importance of seeking and being guided by evidence concerning effectiveness.
This has particular implications for the ‘Institutional Critique’ of Effective Altruism, which holds that Effective Altruists undervalue political and systemic reforms. The problem of utility cascades undermines the Effective Altruist response to the Institutional Critique.
My notes
- There are cases wherein an act-utilitarian should "ostrich" (that is, refuse to update their judgments in the light of new evidence) if they want the best outcome. This poses a challenge for how act-utilitarians ought to prioritize moral and epistemic normativity.
- Sometimes, rationally updating can lead to a utility cascade. If an altruist discovers that a charity they funded now seems to be less effective than they had thought, and pulls away funding as a result, the charity may become even less effective due to the loss of resources — prompting the altruist to pull away even more funding, causing a further drop in effectiveness...
- The altruist may be apportioning their support based on effectiveness, but if a charity's effectiveness is not independent of their support, there is a risk of cascade (see the toy simulation after these notes). This risk becomes higher if many people apportion their support based on the same information, since more resources will be pulled away and effectiveness drops more sharply.
- While the altruist in question can find other charities that seem better than the original (downgraded) charity, a utility cascade can still lead to permanent losses. For example:
- The original charity may end up shutting down due to a temporary lapse in effectiveness or a failed (but worthy) experiment.
- Or, to riff on an example from the paper, the most risk-intolerant backer of a risky venture might withdraw support, making the project riskier, leading the next-most risk-intolerant backer to withdraw support... until the venture no longer exists at all, even though nearly all funders saw it as valuable at the beginning of the cascade.
- "By the utilitarian's own lights, this is a problem. And it is not anomalous. The preconditions that permit of utility cascades are not rare."
- There must be a charity/initiative/policy that can receive different degrees of support
- Its effectiveness must depend in part on its level of support
- "Most collective attempts to make the world better [...] instantiate these features."
- Why this problem can be hard to coordinate around: "While [act-utilitarians] can share information and make plans together, they cannot undertake to perform actions that conflict with their principles."
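To make the feedback mechanism concrete, here's a minimal toy simulation (my own sketch, not from the paper; the overhead, scale, and viability numbers are all illustrative assumptions). The idea: fixed overheads make a charity's effectiveness rise more than one-for-one with its funding, and a funder apportions each period's support to match the effectiveness they just observed.

```python
# Toy model of a utility cascade (illustrative sketch, not from the paper).
# Assumptions: fixed overheads make effectiveness rise more than one-for-one
# with funding, and the funder sets next period's support equal to the
# effectiveness they just observed.

OVERHEAD = 0.2    # effectiveness lost to fixed costs, regardless of funding
SCALE = 1.2       # effectiveness gained per unit of funding
MIN_VIABLE = 0.3  # below this level of support, the charity folds for good

def effectiveness(funding: float) -> float:
    return SCALE * funding - OVERHEAD

def simulate(shock: float, periods: int = 15) -> None:
    funding = 1.0     # pre-shock equilibrium: effectiveness(1.0) == 1.0
    funding -= shock  # one piece of discouraging evidence reduces support
    for t in range(periods):
        eff = effectiveness(funding)
        funding = eff  # support is apportioned to observed effectiveness
        print(f"t={t:2d}  effectiveness={eff:.3f}  funding={funding:.3f}")
        if funding < MIN_VIABLE:
            print("Charity folds: a permanent loss from a one-off shock.")
            return

simulate(shock=0.15)
```

In this toy model, the cascade depends on the feedback gain exceeding one (SCALE > 1 here): each cut in support destroys more effectiveness than the last, so a single shock, rationally acted on each period, eventually drives the charity below its viability threshold. With SCALE < 1, the same update rule would converge back toward equilibrium instead.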
From the paper's conclusion:
Probably the only way to address the root causes of world misery is through structural reforms – the interventions with the highest utility were they to work are systemic and political. Whether or not they do work is in part dependent on how many people pursue them. But, in a world increasingly influenced by Effective Altruists, the likelihood of people pursuing these reforms is reduced by arguments that this is an inefficient strategy. Perhaps the world would be better, in utilitarian terms, if Effective Altruists would keep quiet about the difficulty of political reform.
The good
- Short and easy to read!
- This kind of thing can definitely be an issue for EA, and it's nice to see a published summary that assigns a catchy term to replace the more general "coordination problem" for these cases.
- Memorable examples that help me hold utility cascades in mind as a single "unit" of thought; it's especially nice that the two examples illustrate distinct instances of the problem.
The bad
- The author makes assumptions about EA's approach to political reform which seem years out of date (if they were ever accurate at all)
- Seems to associate EA a bit too closely with pure act-utilitarianism, whereas in my experience, EA is more pragmatic: If we notice that a predictable/rational behavior pattern seems likely to lead somewhere bad, we take steps to break that pattern. We research political campaigns and highlight those worth pursuing; we use forms of reasoning beyond pure effectiveness calculation.
- If we lived in a world where some highly unlikely level of coordination were required to get anything done, we might run into utility cascades more often. Fortunately, there are plenty of good opportunities for systemic change that don't require that much coordinated risk-taking (e.g. the Center for Election Science's Fargo approval-voting campaign, and the rest of their gradual, city-by-city strategy).
- It's very hard to tell when you're about to hit a utility cascade versus when you're simply making a wise choice not to invest in something unpromising. The latter seems far more common than the former: most uses of funding won't be nearly as good as the best uses, and a low effectiveness score provides at least some evidence that you're looking at a non-"best" use of funding.
- No matter what critiques you launch at EA, in the end you have to find some way of choosing a cause to fund. The author doesn't try to present a formal method, which is of course fine, but they seem to lean toward the heuristic of "fund the sorts of things which worked in the past," which isn't very specific and doesn't seem reliable. (As often happens when I see a critique of EA, I want to ask the author what they'd fund, and why that thing, and why not various other things.)
Overall, the paper identifies a real risk that does come up in EA funding, but I think the author is too quick to dismiss EA's chances of reducing that risk in ways other than "selectively ignoring new evidence about effectiveness."
MichaelPlant @ 2020-07-29T14:24 (+11)
I enjoyed reading the paper but was unconvinced any serious problem was being raised (rather than merely a perception of a problem resulting from a misunderstanding).
Put very simply, the structure of the original case is that a person chooses option B instead of option A because new information makes option B look better in expectation. It then turns out that option A, despite having lower expected value, produced the outcome with higher value. But there's nothing mysterious about this: it happens all the time and provides no challenge to expected value theory or act utilitarianism. The fact that I would have won if I'd put all my money on number 16 at the roulette table does not mean I was mistaken not to do so.
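To put toy numbers on the roulette point (my own illustration): a $1 single-number bet on a double-zero wheel pays 35:1 with probability 1/38, so its expected value is 35/38 − 37/38 ≈ −$0.053, worse than abstaining (EV $0) — even though, ex post, the bet wins about 1 spin in 38.

```python
# Quick numerical version of the roulette point (my own illustration).
import random

P_WIN, PAYOUT = 1 / 38, 35  # single-number bet, double-zero wheel
ev = P_WIN * PAYOUT - (1 - P_WIN)  # expected value of a $1 bet
print(f"EV of the bet: ${ev:.4f}")  # about -$0.0526; abstaining has EV $0

random.seed(0)
spins = [PAYOUT if random.random() < P_WIN else -1 for _ in range(100_000)]
wins = sum(s > 0 for s in spins)
print(f"Bets that beat abstaining ex post: {wins / len(spins):.3%}")  # ~2.6%
```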
trammell @ 2020-07-29T17:39 (+5)
One Richard Chappell has a response here: https://www.philosophyetc.net/2020/03/no-utility-cascades.html
Max_Daniel @ 2020-07-29T12:33 (+4)
[Only skimmed Aaron's notes, didn't read the paper, so might be quite off.]
At first glance, this seems like a special case of (e.g.) Parfit's observation in the first part of Reasons and Persons that consequentialist views can imply it'd be better if you didn't follow them, didn't believe in them etc. (similar to how prudential theories can imply that in some situations it'd be better for you if you were 'rationally irrational'). Probably the basic idea was already mentioned by Sidgwick or earlier utilitarians.
I.e. the key insight is that, as people often put it, utilitarianism as a 'criterion of rightness' does not imply we ought to always use utilitarianism (or something that looks like a 'direct' application of it) as a 'decision procedure'. Instead, consequentialist criteria of rightness transform the question of which decision procedures to use into a purely empirical one. It's trivial to construct contrived thought experiments where the 'correct' decision procedure is arbitrarily bizarre.
I think this kind of cuts both ways:
- On one hand, to say something interesting, papers like the above need to engage in empirical investigations: They need to say something about when and how often situations in which it'd be best for the world to use some 'non-consequentialist' decision procedure actually occur. E.g., does this paper give convincing examples for 'utility cascades', or arguments for why we should expect them to be common?
- On the other hand, it means that (by consequentialist lights) the appropriateness of EA's principles, methods etc. is a purely empirical question as well. They depend on one's normative views as much as they depend on a host of contingent facts, such as the track record of science, how others react to EA, etc.