How to Save the World
By Richard Y Chappell @ 2025-07-23T15:10
This is a linkpost to https://www.goodthoughts.blog/p/how-to-save-the-world
In theory
A decade ago, Effective Altruism got an early taste of bad PR when someone at an EA Global conference was widely reported as enthusing that EA was "the last social movement the world would ever need," or words to that effect. The enthusiastic claim is false, for the lamentable reason that most people are insufficiently virtuous: the idea of doing (cause-neutral) good effectively just doesn't sufficiently interest them. So we still need narrower "interest group" politics. (Ick.)
Still, I think the maligned speaker can be charitably understood as highlighting an interesting philosophical point about the practical adaptability implied by EA's cause-neutrality. While we can always imagine scenarios in which an ossified "special interest" movement becomes counterproductive (consider, e.g., feminism in a world where women have come to be the more privileged gender), a cause-neutral movement that's always dynamically reassessing what ought to be the top priority seems immune to such concerns. As long as it's successfully delivering on its goals (always a big "if"!), the very nature of its goals logically guarantees that the "do more good" movement is the best movement you could ask for, whatever the world's problems might be. (That's just a long-winded way of noting that no other moral goals can compete, in principle, with doing more good. A competing principle would, by definition, sometimes prioritize doing less good. Who could want that?)
Sometimes critics get confused and imagine that EA prohibits cooperation, which would gratuitously reduce the amount of good its proponents are collectively able to do. Such critics seem unaware that the uber-principle governing EA (like consequentialism more generally) is: do whatever will actually yield better results, all else equal. If that means coordinating with others to overcome a collective action problem, and doing so is truly within our power, then EA (like consequentialism) will obviously recommend it!
Abstract reasoning can be hard to follow. People hear "do more good" and picture all sorts of things that don't actually do more good, at which point they complain that "doing more good" doesn't sound so great to them. So let me share with you a schema that, if successfully followed, ~guarantees optimal results.[1]
Note: what follows is ideal theory, not to be naively implemented by fallible humans. (But I take it that appreciating the ideal case is an important first step towards getting good practical guidance.)
A Two-Step Schema for Moral Perfection
- Identify who else is willing and able to cooperate with you (to some degree, or with some resources) in the production of the best possible consequences.
- Do your part in the best plan of action for this group of competent cooperators (bearing in mind each individualâs specified limits), in view of the behaviour of non-cooperating others, and thereby achieve the optimal outcome collectively attainable by the cooperators (given their specified limits).
(N.B. I'm shamelessly ripping off Donald Regan's cooperative utilitarianism here. Little known fact: moral philosophy was solved in 1980!)
If you fail at either of these steps (including, e.g., mistakenly identifying as a "cooperator" someone who intended to cooperate but wasn't actually able to do their part in the best plan of action), then all bets are off. "Results not guaranteed." Also, note that neither step necessarily directs you to invest time or resources into raising your probability of success, e.g. by inquiring into potential cooperators. It's conceivable in some circumstances that such investment wouldn't be worth it. In such cases, Step 1 directs you to just form a true belief (by sheer luck, perhaps), to avoid search costs. (You may be starting to notice the limitations of ideal theory.) Still, if you manage to do that, and follow both steps successfully and costlessly, then you have for sure done the optimal thing (within the constraints of the group's offered resources)! Neat.
Translating this into non-ideal / real-life guidance is trickier, and offers fewer guarantees. Still, I expect you'll do decently well if guided by a thought like: "Try to approximate that ideal process, accepting reasonable and proportionate search costs, and favoring robustly reliable plans over super-fragile ones, all the while taking care to avoid anything too crazy (i.e. with massive potential downside risk)." Which is basically my understanding of the EA movement's regulative ideal.
Prioritization is Temporary
An important part of "step 2" (formulating and following the best collective plan of action) may be making a priority list. For simplicity, let's focus on donations and bracket moral uncertainty. If you have a set budget of $X, you ideally want to know:
(i) What opportunity offers the #1 best marginal cost-effectiveness for up to $X;
(ii) What opportunity offers the next-best (#2) marginal cost-effectiveness; and
(iii) After how much spending ($x) on opportunity #1 does its marginal cost-effectiveness fall below that of opportunity #2?
The ideal plan will then:
(I) Allocate $x (≤ $X) to the #1 opportunity on your list.
(II) Recalculate steps (i)–(iii) with your new budget of $(X - x).
(III) Repeat until you either run out of money or solve all the world's problems.
Roughly speaking, you run through your priority list, in order, and solve all of the most important problems that you can. This doesn't mean that "everything goes to malaria" (or shrimp, or AI safety), because diminishing marginal returns limit the "room for more funding" of the #1 spot; once that one is crossed off your list, the old #2 becomes the new #1. And so on. (Additionally, I think the best response to moral and deep empirical uncertainty is to allocate separate "buckets" of resources to subagents representing different moral worldviews.)
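For concreteness, here's a minimal sketch (in Python) of the greedy loop just described. It assumes each opportunity's marginal cost-effectiveness is a known, decreasing function of the money already spent on it; the opportunity names, step size, and figures are hypothetical, purely for illustration.

```python
# A minimal sketch of the greedy loop in steps (I)-(III), assuming each
# opportunity's marginal cost-effectiveness is a known, decreasing function
# of the amount already spent on it. Opportunity names, step size, and
# numbers below are hypothetical illustrations, not real charity data.

def allocate(budget: float, marginal_value: dict, step: float = 100.0) -> dict:
    """Greedily allocate `budget` in $`step` increments.

    `marginal_value` maps an opportunity name to a function
    (dollars already spent) -> (value of the next dollar),
    assumed to be decreasing, i.e. diminishing marginal returns.
    """
    spent = {name: 0.0 for name in marginal_value}
    while budget >= step:
        # Steps (i)-(iii): find the current #1 opportunity at the margin.
        best = max(spent, key=lambda n: marginal_value[n](spent[n]))
        if marginal_value[best](spent[best]) <= 0:
            break  # nothing left worth funding
        # Step (I): fund the current #1; the loop then recalculates.
        spent[best] += step
        budget -= step
    return spent

# Two hypothetical causes: #1 starts off better but hits diminishing
# returns sooner, so past some point the old #2 becomes the new #1.
opportunities = {
    "malaria nets": lambda s: 10.0 - s / 1_000,
    "deworming":    lambda s: 6.0 - s / 10_000,
}
print(allocate(25_000.0, opportunities))
```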
You may be wondering why I'm stepping through this obvious reasoning so explicitly. And the answer is just that I've found that many people don't find it obvious.[2] When I teach undergrads about effective altruism, perhaps the most common objection I hear is that optimizing "abandons" those whom it would be less cost-effective to help.[3] (The Chief Executive of Oxfam GB once also objected to EA on these grounds.) But as I explain in my paper "Why Not Effective Altruism?", EA in fact minimizes abandonment.
Given the reality of limited philanthropic resources, we can't save everyone. Either we prioritize and make the best use of our limited resources, or we don't. In the latter case, we help fewer people (or else help a larger group much less). Anyone whose problem is left unresolved is then "abandoned", in the relevant sense. Non-EA approaches mean that more kids are abandoned to die of malaria (or, more generally, more of the highest-priority needs are neglected). If you hope to minimize such neglect, you should want (i) to increase the amount of resources available for philanthropic efforts, and (ii) to optimize their allocation for cost-effectiveness, so that the limited resources go further. Those are the two central goals of EA. So if you hope to minimize abandonment, you should support the goals/principles of EA. Cause prioritization is how we do the most good, and thus "abandon" the fewest people to be left with their (importance-weighted) problems unsolved.[4]
Conclusion
Hopefully readers will now better understand the "EA can solve anything" enthusiasm of 2015. It's not the claim that the limited population of (then-)current EAs suffices to solve everything. (That would be an absurd claim.) Rather, the point is that the core mission of EA is so all-encompassing that no genuinely good and reasonable sub-goal is excluded from its priority list. As a result, given enough talent and resources, successfully following the schematic principles laid out above would solve all the world's (solvable) problems.[5] So… what are you waiting for?
- ^ Assuming no evil demon shenanigans (e.g. promising to torture everyone if and only if one follows this schema).
- ^ One philosopher even managed to publish a paper claiming that utilitarians recommend funding charities in proportion to their effectiveness (and drawing out how this practice could lead to bad results)! Alas, the journal refused to publish my simple correction.
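A toy calculation (my made-up figures, not numbers from the paper in question) shows why funding in proportion to effectiveness underperforms straightforward optimizing, assuming for simplicity no diminishing returns over this small budget:

```python
# Toy illustration with made-up numbers (not from the paper in question):
# value per dollar for two hypothetical charities, assuming no
# diminishing returns over this small budget.
effectiveness = {"A": 10, "B": 5}
budget = 900

# Funding in proportion to effectiveness: A gets 2/3, B gets 1/3.
total = sum(effectiveness.values())
proportional_value = sum(e * (budget * e / total) for e in effectiveness.values())

# Optimizing: the whole budget goes to the more effective charity.
optimal_value = max(effectiveness.values()) * budget

print(proportional_value, optimal_value)  # 7500.0 vs 9000
```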
- ^ Perhaps the second most common objection is that if everyone did <current EA top-recommended action> that would be bad/suboptimal, so the recommendation must be bad. (Super common objection to Singer's "Famine, Affluence, and Morality": "If everyone gave so much to charity, the economy would implode. So it must be bad!") At least it presents a nice opportunity to explore the limitations of "what if everyone did that?" reasoning.
- ^ Also worth noting that if you don't like the utilitarian-style weighting, you can always optimize differently. There are reasonable debates to be had about how one ranks outcomes, but there's really no excuse for not optimizing at all.
- ^ There's something striking about that, even if the principles are too schematic to solve anything by themselves (and are sadly insufficiently popular to satisfy the "given enough talent and resources…" bit). The principles remain far from trivial, as revealed by the fact that few people are even trying to follow them, and those who do inspire tremendous hostility and pushback from others. But I really do think they're wonderfully admirable principles, and I wish more people found them as inspiring as I do.