We don't have evidence that the best charities are over 1000x more cost effective than the average
By Will_Davison @ 2025-05-05T07:31 (+62)
I very frequently hear the statement "the best charities are over 1000x more cost effective than the average". It is often presented alongside an accompanying graph.
Where does this figure come from? Most sources link it to Toby Ord's 2013 paper "The Moral Imperative toward Cost-Effectiveness in Global Health".
The data in this paper comes from the 2006 paper "Disease Control Priorities in Developing Countries".
How should this information affect our claims as EAs?
1. We should not extrapolate this claim to charities in general unless we have direct evidence (please comment with the best evidence you have seen). We should also not extrapolate it to fields outside of global health and development, in particular fields that involve creating change in complex systems, where change-making is less measurable and less linear.
2. We should be transparent about what we mean by 'effective'. Just because some charities use less measurable methods, such as forming grassroots social movements for political change, doesn't mean that we know that they are less effective.
This is extremely important, because these statements often redirect funds from non-EA organisations towards EA organisations, and doing so should not be taken lightly.
Cody_Fenwick @ 2025-05-05T08:51 (+74)
At 80,000 Hours, we published an article on this topic in 2023 by Benjamin Todd. It's a follow-up to Toby Ord's original work, and looks at other datasets and cause areas.
Benjamin concluded:
Overall, I think it’s defensible to say that the best of all interventions in an area are about 10 times more effective than the mean, and perhaps as much as 100 times.
And also:
People in effective altruism sometimes say things like “the best charities achieve 10,000 times more than the worst” — suggesting it might be possible to have 10,000 times as much impact if we only focus on the best interventions — often citing the DCP2 data as evidence for that.
This is true in the sense that the differences across all cause areas can be that large. But it would be misleading if someone was talking about a specific cause area in two important ways.
There's a ton more detail in the article.
Lorenzo Buonanno🔸 @ 2025-05-05T13:45 (+26)
There's also this article from Giving What We Can with some examples[1], which claims
Our research team believes that many of us can easily 100x our impact by giving to charities that achieve more per dollar spent.
Personally, having looked at some average charities, I think both articles downplay the difference in practice.
Some quick reasons why I think so:
- If I remember correctly, the 80k article compares the mean of the top 2.5% of interventions with the mean of all interventions. This artificially caps the maximum possible difference at 50x (see the sketch at the end of this comment). I think that GiveWell-recommended charities are in the top ~0.1% of charities, and so can have a much higher relative difference.
- Training a guide dog costs at least €25,000[2] in Italy, and it's not clear how much of that funges with public funds for the same program. A guide dog typically works for 6-8 years, so with €3.5k you can maybe cause a blind person to have a guide dog for one extra year. If you compare that to what a GiveWell top charity achieves with the same amount, the difference is (at least to me) clearly much more than 10x. You could argue that these are different "cause areas", but to me they are both "human health and wellbeing".
- Personal anecdote: my parents are going to a concert for charity next week, and invited a friend of theirs. We looked up what the funds are going to, and it turns out they are raising €40,000 to purchase "two devices useful for preventing alopecia induced by chemotherapy". Their friend is bald and was disappointed after learning that, but none of them would have checked, and I imagine they would have assumed they were funding something within an order of magnitude of the cost-effectiveness of any other charitable donation.
I think this is representative of what the "average charity" in a wealthy country looks like, and I'm confident that an extra €40,000 to Remote Health Centers In Uganda would buy something that's significantly more than 10x as useful as those alopecia-prevention devices.
- According to Forbes, the biggest charity by revenue in the US is Feeding America at $5B/year. 6 of the top 10 also serve domestic needs. There are many reasons to believe that your dollar goes much further overseas. Taimaka claims that its average cost per malnutrition treatment is $100 per child. There is a lot of evidence to think that a marginal donation to Taimaka is much more than 10x more cost-effective than a marginal donation to Feeding America for preventing hunger.
[1] I write software at GWWC, but I didn't contribute to this article in any way.
[2] About $28,000. For the rest of the comment you can approximate €1 ~ $1.
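A minimal sketch of the arithmetic behind the cap mentioned in the first bullet, assuming an illustrative lognormal spread of cost-effectiveness (the distribution and parameters are made up, not estimated from the DCP2 or 80k data): for any distribution, the mean of the top fraction p of interventions can be at most 1/p times the overall mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: draw "cost-effectiveness" values from a heavy-tailed
# lognormal. The parameters are made up, not estimated from DCP2 or 80k data.
ce = rng.lognormal(mean=0.0, sigma=2.0, size=1_000_000)

overall_mean = ce.mean()

def top_fraction_mean(values, frac):
    """Mean of the top `frac` share of values."""
    cutoff = np.quantile(values, 1 - frac)
    return values[values >= cutoff].mean()

for frac in (0.025, 0.001):
    ratio = top_fraction_mean(ce, frac) / overall_mean
    # The ratio can never exceed 1/frac: the top slice holds only `frac` of the
    # draws, so even if it held all of the total value its mean would be at
    # most total / (frac * N) = overall_mean / frac.
    print(f"top {frac:.1%} mean / overall mean = {ratio:6.1f} (hard cap = {1 / frac:.0f}x)")
```

The 1/p bound holds regardless of the distribution, which is why the comparison slice matters so much: a top-0.1% slice (closer to where GiveWell-recommended charities plausibly sit) leaves room for far larger ratios than a top-2.5% slice does.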
Will_Davison @ 2025-05-06T18:19 (+7)
Your comment relates to interventions that directly target improving patient health, and I think that Toby's paper applies well to these examples. My difficulty is rather with using it to analyse charities outside of global health, or charities that create less measurable forms of change, as highlighted at the end of the article Cody shared.
Will_Davison @ 2025-05-06T18:16 (+3)
This is an awesome article and I hadn't seen it before posting. Thanks for sharing and for all the work in writing it:)
JoelMcGuire @ 2025-05-05T16:23 (+45)
In our chapter for the World Happiness Report, we think we provided the most direct evidence answering this question. The sample of charities isn't necessarily representative, though; it's more of a convenience sample. More work needs to be done estimating the effect of the "typical charity", which is tough since a large amount of total expenditure is done by large charities that are practically impossible to evaluate. The differences are still massive though, especially when we compare them to expectations.
NickLaing @ 2025-05-05T08:15 (+36)
I agree with point 2 to some extent but not point 1. We have direct evidence from multiple randomized controlled trials that shows us, without much doubt, that the best interventions are 10-100x more cost effective at saving lives than many others. Like @Cody_Fenwick says, maybe not 1,000x.
Just because an intervention is complex doesn't necessarily mean the outcome is complex as well. Many complex interventions are more measurable than we think.
I discuss this a little more here: https://forum.effectivealtruism.org/posts/w44oxwXpRkzyEEEHr/the-best-health-systems-strengthening-interventions-barely
Here are some good reflections along similar lines from Kevin Starr of the Mulago Foundation:
https://www.mulagofoundation.org/articles/in-numbers-we-trust
Yes there are interventions which are hard to measure, but not as often as we might think.
Will_Davison @ 2025-05-06T18:53 (+5)
I really enjoyed reading your HSS post and think you have some great points in there. I like how you take out some of the vague language of 'intervening in complex systems' that is often used to justify unsuccessful top down managerial changes in large organisations. Most complexity theory that I have come across would absolutely
I think the Mulago Foundation article has some great points, such as trust and data not being mutually exclusive. Toby Lowe also has a great talk about this. But the article is also too dismissive of non-quantitative approaches to funding, e.g. for illiterate groups, or groups working on changing cultural values through art. I think the article is written to be clickbaity and controversial, which is a style I don't find especially constructive.
I think the reason you disagree with point 1 might be that you are still interpreting 'complex systems' within the field of healthcare service provision, whereas when it comes to health I would extend it to systems such as air pollution and income inequality, which are highly bound to complex political systems, where interventions are hard to measure using RCTs due to small sample sizes and a lack of counterfactuals. As has become my catchphrase, most disagreement is a result of miscommunication.
Interested to hear your thoughts
Jamie_Harris @ 2025-05-05T16:38 (+19)
I agree with other comments that the 80k article is the place to go.
But I also want to specifically praise and thank the original poster for (1) noticing an important-seeming empirical claim being bandied around, (2) noticing that the evidence being used seemed insufficient, and (3) sharing that potentially important discovery.
(For what it's worth, before the 80k article, I also worried that people in the EA community were excessively confident in similar claims.)
Also, even if charities differ significantly on a specific, narrow metric, they may differ less substantially in terms of various indirect and knock-on effects (which also matter). See https://reducing-suffering.org/why-charities-dont-differ-astronomically-in-cost-effectiveness/
Jamie_Harris @ 2025-05-05T16:41 (+7)
(Ironically, I suppose the title -- "We don't have evidence that the best charities are over 1000x more cost effective than the average" -- is also an overly confident claim, where a question might have been better, unless the original poster had carried out an exhaustive search for relevant evidence)
Will_Davison @ 2025-05-06T18:07 (+3)
Thanks for the first comment and for this note! I hadn't seen the 80k article, which would have been a useful document to feed in. But regardless, I think the strength of the title matched the confidence of my belief (perhaps 98%).
Kevin Ulug @ 2025-05-05T15:04 (+11)
Since we're on this topic, I recently saw that the Happier Lives Institute estimated that the best charities (based on WELLBYs per dollar) are about 100x more cost-effective than the average charity: https://www.happierlivesinstitute.org/world-happiness-report/
I have given a shallow recap of their medium-depth report, so take it with a grain of salt.
Yarrow🔸 @ 2025-05-05T09:43 (+10)
The data in this paper comes from the 2006 paper "Disease Control Priorities in Developing Countries".
I don't understand. Does this paper not support the claim?
I've actually never heard this claim before, personally. Instead, people like Toby Ord talked about how curing someone's blindness through the Fred Hollows Foundation was 1,000x cheaper than training a seeing eye dog.
Will_Davison @ 2025-05-06T18:13 (+2)
I agree that the paper supports the claim. I highlighted the title to clarify the niche subject matter of the graph, which is also adequately described in Toby's paper. My reason for doing so was to show that you can't extrapolate from this context to charities in general.
Yarrow🔸 @ 2025-05-06T18:39 (+3)
Okay. Thanks. I guessed maybe that’s what you were trying to say. I didn’t even look at the paper. It’s just not clear from the post why you’re citing this paper and what point you’re trying to make about it.
I agree that we can’t extrapolate from the claim "the most effective charities at fighting diseases in developing countries are 1,000x more effective than the average charity in that area" to "the most effective charities, in general, are 1,000x more effective than the average charity".
If people are making the second claim, they definitely should be corrected. I already believed you that you’ve heard this claim before, but I’m also seeing corroboration from other comments that this is a commonly repeated claim. It seems like a case of people starting with a narrow claim that was true and then getting a little sloppy and generalizing it beyond what the evidence actually supports.
Trying to say how much more effective the best charities are from the average charity seems like a dauntingly broad question, and I reckon the juice ain’t worth the squeeze. The Fred Hollows Foundation vs. seeing eye dog example gets the point across.
Seth Ariel Green 🔸 @ 2025-05-05T14:25 (+9)
What is the average charity? I don't have a good intuition for what it looks like, how big it is, what it works on, etc.[1] I think pinning this down will help make the comparison clearer. Will, how do you think about this?
[1] Sidenote: At least in the US, I would be open to the argument that the average charity -- defined as the midpoint of some multidimensional array of size, cause area, staffing, location, etc. -- produces literally zero charitable benefit on net, and might even be doing harm. You might not share this intuition, but we have a long list of mostly null effects for pro-social interventions once they're evaluated rigorously (enterprise zones in California, Medicaid enrollment in Oregon, Head Start, etc. -- any of which you might take issue with, but I think the broader point is defensible that, on average, interventions don't work). If the average social utility gain of a given nonprofit in America is zero, then I don't know how we're going to say some other cause is X or Y times "better" than that. The seeing eye dog vs curing blindness comparison is a lot more coherent, I think.
Pat Myron 🔸 @ 2025-05-05T23:58 (+3)
Most nonprofit revenue isn't from charitable giving (think healthcare, education, etc):
https://taxfoundation.org/blog/501c3-nonprofit-revenue/
https://projects.propublica.org/nonprofits/search
Most American charitable giving is spread across hundreds of thousands of religious organizations:
https://www.philanthropyroundtable.org/magazine/less-god-less-giving/
But these organizations receive the most donations:
https://forum.effectivealtruism.org/posts/BpEt8DqrcAhKJbtfJ/america-s-100-charities-receiving-most-donations
Seth Ariel Green 🔸 @ 2025-05-06T14:22 (+2)
I think for the purposes of this comparison, non-profit and charity are probably not interchangeable, in the sense that a marginal donor with $5k to spend is almost certainly not going to donate it to Kaiser Permanente (although $1M does get you naming rights at a smaller chain!). So I guess, however we define the average charity, the distribution should probably exclude these big institutions that are nonprofit for a bunch of tax code reasons but in reality are just providing goods and services to clients in exchange for money.
(colleges are an edge case here)
Will_Davison @ 2025-05-06T18:11 (+1)
Perhaps taking a list of registered charities and weighting their cost effectiveness by their donation revenue would be the most apt way to measure the average cost effectiveness (a rough sketch of what that could look like is below)? But I also think that we can only aptly measure the effectiveness of charities that are designed to have measurable effectiveness using RCTs. For charities with no good counterfactual or small sample sizes, quantifying effectiveness becomes impossible. Try measuring the effectiveness of Oxfam as a whole, for example.
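A rough sketch of that donation-weighted average, with placeholder charity names and made-up figures (none of the numbers below are real estimates):

```python
# Hypothetical data: annual donation revenue in $, and cost effectiveness in
# arbitrary "units of good per $". All names and numbers are made up.
charities = {
    "Charity A": (5_000_000_000, 1.0),
    "Charity B": (40_000, 0.1),
    "Charity C": (3_500_000, 50.0),
}

total_revenue = sum(rev for rev, _ in charities.values())

# Donation-weighted average: each charity counts in proportion to the money
# actually flowing to it, rather than one charity, one vote.
weighted_avg = sum(rev * ce for rev, ce in charities.values()) / total_revenue

# Unweighted average for comparison.
unweighted_avg = sum(ce for _, ce in charities.values()) / len(charities)

print(f"donation-weighted average cost effectiveness: {weighted_avg:.2f}")
print(f"unweighted average cost effectiveness:        {unweighted_avg:.2f}")
```

Of course, this only works for the subset of charities whose effectiveness can be estimated at all, which is exactly the Oxfam-style caveat above.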
Dylan Richardson @ 2025-05-06T18:16 (+3)
I don't have anything to add about the intra-cause effectiveness multiplier debate. But much of the multiplier over the average charity is simply due to very poor cause selection. So while I applaud OP for wanting rigorous empirical evidence, some comparisons simply don't require peer-reviewed studies. We can still reason well in the absence of easy quantification.
Dogs and cats vs farmed animal causes is a great example. But animal shelters vs GHD is just as tenable.
This isn't an esoteric point; a substantial amount of donations are simply to bad causes. Poverty alleviation in rich countries (not political or policy directed), most mutual aid campaigns, feeding or clothing the poor in the rich world, most rich-world DEI related activism lacking political aims (movement building or policy is at least more plausible), most ecological efforts, undirected scholarship funds, the arts.
I'm comfortable suggesting that any of these are at least 1000x less cost effective.
Mo Putera @ 2025-05-05T11:06 (+2)
You may find this 80K article useful, both for their analysis and for all the data they collected: How much do solutions to social problems differ in their effectiveness? A collection of all the studies we could find. Bottom line is 3–10x, not >1,000x, for measurable interventions, and stack on a 2–10x spread for harder-to-measure interventions:
Overall, I roughly estimate that the most effective measurable interventions in an area are usually around 3–10 times more cost effective than the mean of measurable interventions (where the mean is the expected effectiveness you’d get from picking randomly). If you also include interventions whose effectiveness can’t be measured in advance, then I’d expect the spread to be larger by another factor of 2–10, though it’s hard to say how the results would generalise to areas without data.
Also this section:
3. How much can we gain from being data-driven?
People in effective altruism sometimes say things like “the best charities achieve 10,000 times more than the worst” — suggesting it might be possible to have 10,000 times as much impact if we only focus on the best interventions — often citing the DCP2 data as evidence for that.
This is true in the sense that the differences across all cause areas can be that large. But it would be misleading if someone was talking about a specific cause area in two important ways.
First, as we’ve just seen, the data most likely overstates the true, forward-looking differences between the best and worst interventions.
Second, it often seems fairer to compare the best with the mean intervention, rather than the worst intervention. ...
Overall, my guess is that, in an at least somewhat data-rich area, using data to identify the best interventions can perhaps boost your impact in the area by 3–10 times compared to picking randomly, depending on the quality of your data.
This is still a big boost, and hugely underappreciated by the world at large. However, it’s far less than I’ve heard some people in the effective altruism community claim.
In addition, there are downsides to being data-driven in this way — by insisting on a data-driven approach, you might be ruling out many of the interventions in the tail (which are often hard to measure, and so will be missing).
This is why we advocate for first aiming to take a ‘hits-based’ approach, rather than a data-driven one.
("Hits-based rather than data-driven" is quite counterintuitive, especially to someone like me who's worked most of my career in data-for-decision-guidance roles, but a useful corrective to the streetlight effect.)
Edit: whoops just saw Cody's comment above pointing to the same article.
Will_Davison @ 2025-05-06T18:16 (+1)
If you also include interventions whose effectiveness can’t be measured in advance, then I’d expect the spread to be larger by another factor of 2–10, though it’s hard to say how the results would generalise to areas without data.
I found this claim very interesting. @Cody_Fenwick would you be open to giving a little more detail on this range and how you came to it?:)
Mo Putera @ 2025-05-07T03:37 (+4)
The article is by Ben Todd, not Cody :) The fuller quote from Ben in the article is:
If we were to expand this to also include non-measurable interventions, I would estimate the spread is somewhat larger, perhaps another 2–10 fold. This is mostly based on my impression of cost-effectiveness estimates that have been made of these interventions — it can’t (by definition) be based on actual data. So, it’s certainly possible that non-measurable interventions could vary by much more or much less.
Will_Davison @ 2025-05-07T17:12 (+3)
Ah, thanks for pointing out my mistake! And yes, I read this paragraph in the article, but still couldn't work out how they could provide such a precise range.