How to Save the World

By Richard Y Chappell🔸 @ 2025-07-23T15:10 (+7)

This is a linkpost to https://www.goodthoughts.blog/p/how-to-save-the-world

In theory


A decade ago, Effective Altruism got an early taste of bad PR when someone at an EA Global conference was widely reported as enthusing that EA was “the last social movement the world would ever need,” or words to that effect. The enthusiastic claim is false, for the lamentable reason that most people are insufficiently virtuous—the idea of doing (cause-neutral) good effectively just doesn’t sufficiently interest them. So we still need narrower “interest group” politics. (Ick.)

Still, I think the maligned speaker can be charitably understood as highlighting an interesting philosophical point about the practical adaptability implied by EA’s cause-neutrality. While we can always imagine scenarios in which an ossified “special interest” movement becomes counterproductive (consider, e.g., feminism in a world where women have come to be the more privileged gender), a cause-neutral movement that’s always dynamically reassessing what ought to be the top priority seems immune to such concerns. As long as it’s successfully delivering on its goals (always a big “if”!), the very nature of its goals logically guarantees that the “do more good” movement is the best movement you could ask for, whatever the world’s problems might be. (That’s just a long-winded way of noting that no other moral goals can compete, in principle, with doing more good. A competing principle would, by definition, sometimes prioritize doing less good. Who could want that?)

Sometimes critics get confused and imagine that EA prohibits cooperation, which would gratuitously reduce the amount of good its proponents are collectively able to do. Such critics seem unaware that the uber-principle governing EA (like consequentialism more generally) is do whatever will actually yield better results, all else equal. If that means coordinating with others to overcome a collective action problem, and that’s something that it’s truly within our power to do, then EA (like consequentialism) will obviously recommend it!

Abstract reasoning can be hard to follow. People hear “do more good” and picture all sorts of things that don’t actually do more good, at which point they complain that “doing more good” doesn’t sound so great to them. So let me share with you a schema that, if successfully followed, ~guarantees optimal results.[1]

Note: what follows is ideal theory, not to be naively implemented by fallible humans. (But I take it that appreciating the ideal case is an important first step towards getting good practical guidance.)

A Two-Step Schema for Moral Perfection

  1. Identify who else is willing and able to cooperate with you (to some degree, or with some resources) in the production of the best possible consequences.
  2. Do your part in the best plan of action for this group of competent cooperators (bearing in mind each individual’s specified limits), in view of the behaviour of non-cooperating others, and thereby achieve the optimal outcome collectively attainable by the cooperators (given their specified limits).

(N.B. I’m shamelessly ripping off Donald Regan’s cooperative utilitarianism here. Little known fact: moral philosophy was solved in 1980!)

Distributing roles in a co-operative plan

If you fail at either of these steps—including, e.g., mistakenly identifying as a “co-operator” someone who intended to co-operate but wasn’t actually able to do their part in the best plan of action—then all bets are off. “Results not guaranteed.” Also, note that neither step necessarily directs you to invest time or resources into raising your probability of success, e.g. by inquiring into potential cooperators. It’s conceivable in some circumstances that such investment wouldn’t be worth it. In such cases, Step 1 directs you to just form a true belief (by sheer luck, perhaps), to avoid search costs. (You may be starting to notice the limitations of ideal theory.) Still, if you manage to do that, and follow both steps successfully and costlessly, then you have for sure done the optimal thing (within the constraints of the group’s offered resources)! Neat.
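
For concreteness, here is a minimal toy sketch of the ideal two-step procedure, assuming everything that ideal theory grants for free: known action sets, a known value function, and known behaviour of the non-cooperators. This is my illustration rather than anything in Regan's formalism, and every name and number in it is made up.

```python
from itertools import product

def best_cooperative_plan(cooperators, actions, fixed_behaviour, value):
    """Step 2, brute force: try every combination of the cooperators' actions,
    holding the non-cooperators' behaviour fixed, and return the joint plan
    with the best overall outcome."""
    best_plan, best_value = None, float("-inf")
    for combo in product(*(actions[agent] for agent in cooperators)):
        plan = dict(zip(cooperators, combo))
        outcome = value({**fixed_behaviour, **plan})
        if outcome > best_value:
            best_plan, best_value = plan, outcome
    return best_plan

# Step 1 (assumed already done): "you" and "ally" are willing and able to
# cooperate; "bystander" is not, and will simply abstain.
actions = {"you": ["donate", "volunteer"], "ally": ["donate", "volunteer"]}
fixed_behaviour = {"bystander": "abstain"}

def value(behaviour):
    # Made-up value function: complementary roles beat duplicated ones.
    roles = {act for agent, act in behaviour.items() if agent != "bystander"}
    return 10 if roles == {"donate", "volunteer"} else 5

plan = best_cooperative_plan(["you", "ally"], actions, fixed_behaviour, value)
print("Your part in the best plan:", plan["you"])
```

The entire difficulty of non-ideal life, of course, is that none of these inputs are simply given.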

Translating this into non-ideal / real-life guidance is trickier, and offers fewer guarantees. Still, I expect you’ll do decently well if guided by a thought like, “Try to approximate that ideal process, accepting reasonable and proportionate search costs, and favoring robustly reliable plans over super-fragile ones, all the while taking care to avoid anything too crazy (i.e. with massive potential downside risk).” Which is basically my understanding of the EA movement’s regulative ideal.

Prioritization is Temporary

An important part of “step 2”—formulating and following the best collective plan of action—may be making a priority list. For simplicity, let’s focus on donations and bracket moral uncertainty. If you have a set budget of $X, you ideally want to know:

(i) What opportunity offers the #1 best marginal cost-effectiveness for up to $X;

(ii) What opportunity offers the next-best (#2) marginal cost-effectiveness; and

(iii) After how much spending ($x) on cause #1 does its marginal cost-effectiveness fall below that of cause #2?

The ideal plan will then:

(I) Allocate $x (where x ≤ X) to the #1 cause on your list.

(II) Recalculate steps (i)-(iii) with your new budget of $(X - x).

(III) Repeat until you either run out of money or solve all the world’s problems.

Roughly speaking, you run through your priority list, in order, and solve all of the most important problems that you can. This doesn’t mean that “everything goes to malaria” (or shrimp, or AI safety), because diminishing marginal returns limit the “room for more funding” of the #1 spot; once that one is crossed off your list, the old #2 becomes the new #1. And so on. (Additionally, I think the best response to moral and deep empirical uncertainty is to allocate separate “buckets” of resources to subagents representing different moral worldviews.)
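
To make the allocation procedure concrete, here is a toy sketch of steps (I)-(III), assuming made-up causes whose marginal cost-effectiveness declines as spending on them increases. Every cause name, number, and function here is purely illustrative.

```python
# Hypothetical causes: each has an initial value-per-dollar that decays
# linearly as more is spent on it (diminishing marginal returns).
causes = {
    "malaria nets": {"initial": 10.0, "decay": 0.002},
    "deworming": {"initial": 7.0, "decay": 0.001},
    "cash transfers": {"initial": 4.0, "decay": 0.0001},
}

def marginal_value(cause, spent):
    """Made-up model: value per dollar falls as spending on the cause rises."""
    c = causes[cause]
    return max(c["initial"] - c["decay"] * spent, 0.0)

def allocate(budget, step=100.0):
    """Greedy version of steps (I)-(III): give the next $step to whichever
    cause currently offers the best marginal cost-effectiveness, recalculate,
    and repeat until the money runs out or nothing is worth funding."""
    spent = {name: 0.0 for name in causes}
    while budget >= step:
        best = max(causes, key=lambda name: marginal_value(name, spent[name]))
        if marginal_value(best, spent[best]) <= 0:
            break  # every remaining opportunity has zero marginal value
        spent[best] += step
        budget -= step
    return spent

print(allocate(10_000))
```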

You may be wondering why I’m stepping through this obvious reasoning so explicitly. And the answer is just that I’ve found that many people don’t find it obvious.[2] When I teach undergrads about effective altruism, perhaps the most common objection I hear is that optimizing “abandons” those whom it would be less cost-effective to help.[3] (The Chief Executive of Oxfam GB once also objected to EA on these grounds.) But as I explain in my paper ‘Why Not Effective Altruism?’, EA in fact minimizes abandonment.

Given the reality of limited philanthropic resources, we can’t save everyone. Either we prioritize and make the best use of our limited resources, or we don’t. In the latter case, we help fewer people (or else help a larger group much less). Anyone whose problem is left unresolved is then “abandoned”, in the relevant sense. Non-EA approaches mean that more kids are abandoned to die of malaria (or, more generally, more of the highest-priority needs are neglected). If you hope to minimize such neglect, you should want (i) to increase the amount of resources available for philanthropic efforts, and (ii) to optimize their allocation for cost-effectiveness, so that the limited resources go further. Those are the two central goals of EA. So if you hope to minimize abandonment, you should support the goals/principles of EA. Cause prioritization is how we do the most good, and thus how we “abandon” the fewest people to have their (importance-weighted) problems go unsolved.[4]

Conclusion

Hopefully readers will now better understand the “EA can solve anything” enthusiasm of 2015. It’s not to claim that the limited population of (then-)current EAs suffices to solve everything. (That would be an absurd claim.) Rather, the point is that the core mission of EA is so all-encompassing that no genuinely good and reasonable sub-goal is excluded from its priority list. As a result, given enough talent and resources, successfully following the schematic principles laid out above would solve all the world’s (solvable) problems.[5] So… what are you waiting for?

  1. ^

    Assuming no evil demon shenanigans (e.g. promising to torture everyone if and only if one follows this schema).

  2. ^

    One philosopher even managed to publish a paper claiming that utilitarians recommend funding charities in proportion to their effectiveness (and drawing out how this practice could lead to bad results)! Alas, the journal refused to publish my simple correction.

  3. ^


    Perhaps the second most common objection is that if everyone did <current EA top-recommended action> that would be bad/suboptimal, so the recommendation must be bad. (Super common objection to Singer’s ‘Famine, Affluence, and Morality’: “If everyone gave so much to charity, the economy would implode. So it must be bad!”) At least it presents a nice opportunity to explore the limitations of “what if everyone did that?” reasoning.

  4. ^

    Also worth noting that if you don’t like the utilitarian-style weighting, you can always optimize differently. There are reasonable debates to be had about how one ranks outcomes, but there’s really no excuse for not optimizing at all.

  5. ^

There’s something striking about that, even if the principles are too schematic to solve anything by themselves (and are sadly insufficiently popular to satisfy the “given enough talent and resources…” bit). The principles remain far from trivial, as revealed by the fact that few people are even trying to follow them, and those who do try inspire tremendous hostility and pushback from others. But I really do think they’re wonderfully admirable principles, and I wish more people found them as inspiring as I do.