Despite billions of extra funding, small donors can still have a significant impact

By Benjamin_Todd @ 2021-11-23T11:20 (+166)

I’ve written about how there’s now a lot more funding committed to effective altruism – about $50bn.

It’s natural to think this means small donors can no longer have much impact, and I’ve seen several cases of people saying they’re not sure whether their donations will do any good, because all the opportunities are being taken by large donors.

However, I think this isn’t right: additional donations from small donors still have a significant impact. This means raising additional funding is still of value to the community, and I think earning to give and donating to e.g. the Long Term Future Fund is a highly impactful thing to do – probably more impactful than the vast majority of careers.

I also think the increase in funding means there’s an opportunity to do even more good than earning to give, and that people earning to give currently should seriously consider switching to the kinds of opportunities flagged in my talk at EAG. But that doesn’t mean that small donations have no impact.

Instead:

  1. What matters is not the total amount of available funding, but the current level of cost-effectiveness at the margin. This has likely declined, but is still high.
  2. Small donors should be able to roughly match large donors in terms of cost-effectiveness by ‘funging’ with them.
  3. Small donors can sometimes beat large donors in terms of cost-effectiveness, and I provide a list of some common ways to do this.

At the end, I’ll make some comments on where I think people should donate.

1. What matters is not total funding available but marginal cost-effectiveness

It’s true that as more funding becomes available, all else equal, we should expect more of the best opportunities to be taken, and for cost-effectiveness to decrease.

However, there is a force which limits the size of this effect: how quickly we’re able to discover new opportunities. Because effective altruism is still small and building capacity, it’s not obvious that cost-effectiveness will decline quickly.

While I think the very best opportunities involve taking a more hits-based, longterm-focused approach than GiveWell, their recommendations serve as a good starting point for examining these dynamics. GiveWell’s top recommendations probably constitute the ‘bar’ for neartermist work. In a recent post, Open Philanthropy’s Global Health and Wellbeing team say they expect to find many opportunities above this bar, but expect marginal dollars to go to GiveWell.

Overall, GiveWell now seems to be targeting a cost-effectiveness of 8x GiveDirectly or higher for most donations, though about 20% of funds will go towards opportunities that are 5-8x as cost-effective as GiveDirectly, so additional donations should be roughly this cost-effective.

GiveWell is unsure whether the margin will end up closer to 5x or to 8x. In the same post, Open Philanthropy says “we currently expect GiveWell’s marginal cost-effectiveness to end up around 7-8x GiveDirectly”.

They also say they believe that GiveWell’s margin has been around 10x GiveDirectly in recent years, so if it declines to 7x, that will be a 30% fall – only a modest decline, and the resulting cost-effectiveness is still very high.

To illustrate, they estimate that donating $4,250 to a charity that’s 8x GiveDirectly is as good as saving the life of a child under five.

With a lognormal distribution of cost-effectiveness, there should be many more opportunities at the 5x level than the 10x level, so it should be possible to deploy a lot more funds as the bar lowers. (Even setting aside the possibility of discovering new highly cost-effective interventions.)
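To make the shape of this claim concrete, here is a minimal sketch in Python. The lognormal parameters are made-up illustrative assumptions, not GiveWell or Open Philanthropy figures; the point is only that, under any lognormal, far more opportunities clear a 5x bar than a 10x bar.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of interventions, with cost-effectiveness
# (in multiples of GiveDirectly) drawn from a lognormal distribution.
# The mean and sigma below are illustrative assumptions, not real estimates.
cost_effectiveness = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)

share_above_10x = (cost_effectiveness >= 10).mean()
share_above_5x = (cost_effectiveness >= 5).mean()

print(f"Share of opportunities at >=10x GiveDirectly: {share_above_10x:.4f}")
print(f"Share of opportunities at >=5x GiveDirectly:  {share_above_5x:.4f}")
print(f"Opportunities above the 5x bar vs the 10x bar: "
      f"{share_above_5x / share_above_10x:.1f}x as many")
```

Under these toy parameters roughly five times as many opportunities clear the 5x bar as the 10x bar. The exact ratio depends entirely on the assumed parameters, but the qualitative point (lowering the bar opens up much more room for funding) holds for any lognormal.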

In a worst case scenario, billions could be spent on cash transfers at a level of cost-effectiveness similar to or only a little below GiveDirectly. This would most likely still produce 100 times more wellbeing for the world than spending the money on your own consumption.
 

Grace, 48, is an example of a recipient of grants by GiveDirectly.

Over 500 million people live in extreme poverty and over 1 million die per year of easily preventable causes – and we’re a long way from solving these terrible problems.

Within longtermism, it’s harder to say where the bar is, but you can look at recent grants by the Long Term Future Fund (or Open Philanthropy within the relevant causes) to get a sense. The worst of these grants look pretty good to me.

Personally, I would donate to the Long Term Future Fund over the global health fund, and would expect it to be perhaps 10-100x more cost-effective (and donating to global health is already very good). This is mainly because I think issues like AI safety and global catastrophic biorisks are bigger in scale and more neglected than global health. Coming up with an actual number is difficult – I certainly don’t think they’re overwhelmingly better. My estimate of 10-100x is in line with the median response at the 2018 Leaders Forum (and this group seems to represent the most engaged community members pretty well, though might be tilted towards longtermism).

My sense is that the bar within longtermism has come down a little bit compared to a few years ago – back then we weren’t providing much funding for things like PhD programmes, which strike me as somewhat less effective than funding core organisations (though still well worth it).

On the other hand, since longtermism is so new, there is also a lot more potential to generate and discover highly effective opportunities as the capacity of the community grows. It wouldn’t surprise me if the bar stays similar in the coming years.

Again, in a worst case scenario, there are ways that longtermists could deploy billions of dollars and still do a significant amount of good. For instance, CEPI is a $3.5bn programme to develop vaccines to fight the next pandemic – that could easily be topped up by $1bn (ideally restricted to work to develop vaccines for novel pathogens). (See more ideas.) These kinds of scalable opportunities are likely 10-100x less effective than the top longtermist opportunities we’re able to find today, but still very good (and if you put reasonable credence in longtermism, plausibly still more effective than GiveWell recommended charities).

I also expect research will uncover better scalable longtermist donation opportunities in the coming years, which means that investing now and giving later, when those opportunities arise, is a more attractive option for longtermists than for donors focused on global health.

If longtermism attracts supporters ahead of our expectations, the bar may fall further. But again, society spends less on reducing existential risk than it does on ice cream, so we could spend orders of magnitude more on longtermist-aligned issues, and it would still be a minor global priority.

(Extra info on diminishing returns in longtermism: returns probably diminish faster in longtermism than in neartermism. But longtermists also care more about the all-time total of resources invested in an issue than about how much is invested each year. So what matters for diminishing returns is how much you expect to be spent in longtermism-aligned ways in the future: additional funding only drives down expected returns if it's ahead of what you already expected to be spent. In other words, we care more about 'positive surprises' than about changes in the total of committed funds.)

2. Small donors should be able to roughly match large donors in terms of effectiveness

Open Philanthropy recently donated $11 million to the Centre for Human Compatible AI (CHAI) at UC Berkeley, and has donated to it in the past. It’s natural to wonder what a small donor can possibly add, and feel like ‘all the best opportunities are taken’.

But if you were to donate $1,000 to CHAI, then either:

1. You expand CHAI’s available funding by $1,000. The cost-effectiveness of this grant should be basically the same as the final $1,000 that Open Philanthropy donated.

2. Or Open Philanthropy donates $1,000 less to CHAI in their next funding round. In this case you’ve been ‘funged’ by Open Philanthropy. But then that means that Open Philanthropy has an additional $1,000 which they can grant somewhere else within their longtermist worldview bucket.

In reality, some combination of the two probably happens. But either way, the effectiveness of your donation is about the same as marginal donations made by Open Philanthropy.
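To see why the blended outcome is still roughly as good as Open Philanthropy's margin, here is a small illustrative calculation. The probabilities and cost-effectiveness numbers are hypothetical placeholders, not estimates from this post:

```python
# Expected value of a $1,000 donation to an org a large funder also supports.
# All numbers below are illustrative assumptions.
donation = 1_000

p_expands_org = 0.5            # chance the org simply has $1,000 more to spend
p_funged = 1 - p_expands_org   # chance the large funder gives $1,000 less here

# Cost-effectiveness in arbitrary "impact units per dollar".
ce_org_margin = 10.0     # value of the org spending one extra dollar
ce_funder_margin = 9.5   # value of the large funder's next-best grant elsewhere

expected_impact = donation * (
    p_expands_org * ce_org_margin + p_funged * ce_funder_margin
)

print(f"Expected impact: {expected_impact:,.0f} units")
print(f"Implied cost-effectiveness: {expected_impact / donation:.2f} units per $")
```

Whatever split you assume between the two cases, the implied cost-effectiveness lands between the organisation's own margin and the large funder's margin, which is the point of the argument.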

The same dynamic happens in global health with donations to the Against Malaria Foundation or similar.

Small donors also funge with Open Philanthropy. Taking the big picture, around $500m is donated by people in the effective altruism community each year. Additional donations – whether from small or large donors – expand that pot. Unless you’re donating to something that absolutely no-one else would have donated to otherwise (which is rare), additional donations have similar cost-effectiveness, whether from small or large donors.

So, if you think that Open Philanthropy (or other large donors) are able to do good with marginal donations, then you can do a similar amount of good per dollar yourself.

I find this pretty encouraging – as a small donor, I can achieve a similar cost-effectiveness to additional grants by Open Philanthropy, a foundation that employs many of the smartest people within effective altruism to do full-time research into where to give, and I don’t have to do any work at all, except look through their grants database.

(Carl Shulman provides a similar argument for the same conclusion, by pointing out that if large donors are able to achieve a higher level of cost-effectiveness than small donors, then you could use a donor lottery to turn your small donation into a large donation.)

3. Small donors can sometimes beat large donors in terms of cost-effectiveness

By focusing on grants that are harder for large donors to find, you might be able to be even more effective.

The ideal funding ecosystem would probably include a significant number of “angel” donors, aiming to source opportunities that can later be scaled up by “VC donors” like Open Philanthropy or the new FTX Foundation.

Here are some categories that are hard for large donors to take right now:

It’s also not healthy for an organisation to depend 100% on a single foundation for its funding. This means that until we have 3+ large foundations covering each organisation, small donors play a role in diversifying the funding base of large organisations. (Though note that you’re only providing this benefit if your grantmaking process is independent from the large donors.)

For all these reasons, I think there’s definitely scope for small donors to achieve cost-effectiveness that’s actually higher than that of large foundations.

Other benefits of small donors to the community

Small donors can also play a valuable role providing insurance. Good Ventures currently intends to disburse all their money within a few decades. If the FTX Foundation takes a similar route, and EA turns out to be a fad that doesn’t keep attracting new large donors, then there’s a possibility that EA or longtermism will have much less money in the future. In this scenario, it would be valuable for there to be a source of funding that could step in. Small donors could save money in a donor-advised fund (DAF) or the Patient Philanthropy Fund and provide this insurance.

More importantly, making donations also helps to build the effective altruism community, since it’s a hard-to-fake symbol that we’re serious about doing good, and that helps to get more people on board.

Many also find that making donations helps them stay committed to doing good more broadly, i.e. it’s a self-signal too.

Where should you actually donate?

Broadly, your options are to:

  1. Delegate your grantmaking to someone else
  2. Do your own research

Within the delegate category, some common options are:

If you want to do your own research, we give some pointers in our article on choosing where to donate. In general, you should focus on the categories listed above for how small donors can beat large donors.

Whether to delegate or do your own research isn’t an obvious decision – simply topping up Open Philanthropy grants is already pretty effective. At the same time, many organisations that claim to have funding gaps have already been considered and rejected by the large grantmakers. So, while I think it is very possible to beat Open Philanthropy and other large donors in terms of cost-effectiveness, it’s not trivial. You need to think about how much research it would take vs. the opportunity costs of that research (e.g. focusing on advancing your career and therefore having more to donate in the future).

Personally, I think that for someone who roughly shares our view of global priorities, doesn’t regularly encounter idiosyncratic opportunities that aren’t visible to big donors, and doesn’t have much time for research relative to the size of their donations, donating to the Long Term Future Fund or the Infrastructure Fund seems like a good option.

If you’re interested in doing your own research, and especially if you have specific ideas about how you might beat those funds, I’d also encourage that. You’ll probably learn a lot from the exercise. If you don’t have enough money right now to justify the fixed costs of doing your own research, enter the donor lottery.

Personally, I would probably enter the donor lottery, and then if I win, think hard about who to delegate my giving to (since I don’t think my comparative advantage lies in assessing where to donate). I might also try to make grants to individuals or projects that are less central to EA, or to startup nonprofits, if I knew about them. If I had to donate immediately without any time to research the decision, I would likely give to the Long Term Future Fund or to the Global Priorities Institute (because I think I’m a bit keener on global priorities research than typical).

I’d especially encourage people who might donate $100k - $2m per year to think seriously about making ‘angel’ donating a significant focus e.g. choose an area to specialise in and spend 1-2 days per month on research, or run an open round like the new ACX grants. I’d also recommend speaking to existing grantmakers (like the EA Funds) for advice and to hear about opportunities. If you might donate $10m+, then it could be worth hiring a researcher. If you don’t have time for this now, you could consider investing to give, and then donating when you have more time to do research in the future. Though, focusing on earning more and delegating your decisions is also a reasonable option (e.g. this is what Warren Buffett decided to do when he delegated his donations to Bill Gates).

FAQ

Why are there billions of dollars sitting around and not being donated?

The best strategy for a large foundation is often to donate only a couple of percent of their endowment per year (depending on how pivotal the current moment is). One reason is that there are some diminishing returns to opportunities each year, so all else equal, it’s more effective to spread your donations out over time rather than give everything right away. Another reason is that investment returns mean you’ll have more money to donate in the future. Exactly what percentage to donate each year is a difficult question – see more on the arguments for this in our interview with Phil Trammell – but everyone agrees the correct answer isn’t 100%.

This means we should always expect there to be billions sitting around and not being donated. So the fact that there are billions committed to effective altruism and not doing anything doesn’t tell us much about how effective marginal donations actually are.
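As a purely illustrative toy model of this trade-off (a sketch with assumed numbers, not the actual analysis in the interview above): suppose within-year returns are concave, and un-donated funds earn investment returns. Then spending 100% of the endowment immediately is clearly worse than spreading donations out, and under these made-up assumptions the best rate ends up at a few percent per year:

```python
# Toy model: concave within-year returns plus investment returns on what
# isn't spent. All numbers are illustrative assumptions.
endowment = 1_000.0   # arbitrary units
growth = 1.07         # assumed 7% annual return on un-donated funds
years = 50            # assumed giving horizon

def yearly_impact(spend: float) -> float:
    # Diminishing returns within a year: impact grows with sqrt of spending.
    return spend ** 0.5

def total_impact(annual_fraction: float) -> float:
    funds, total = endowment, 0.0
    for _ in range(years):
        spend = annual_fraction * funds
        total += yearly_impact(spend)
        funds = (funds - spend) * growth
    return total

for frac in (1.0, 0.2, 0.05, 0.02):
    print(f"Donate {frac:.0%} of remaining funds each year: "
          f"total impact {total_impact(frac):.0f}")
```

The best rate here is an artifact of the assumed returns curve, growth rate, and horizon; the real question of how patient to be is much harder, which is what the interview above discusses.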

That said, I estimate the community is only donating about 1% of available capital per year right now, which seems too low, even for a relatively patient philanthropist.

I think the main reason for this is that available funding has grown pretty quickly, and the amount of grantmaking capacity and research has not yet caught up. I expect large donors to start deploying a lot more funds over the coming years. This might be starting with the recent increase in funding for GiveWell.

My contribution feels tiny in comparison

It’s true that a $100 million donation has a lot more impact than a $1,000 donation to the same cause. It can easily be demoralising to compare ourselves to others who are achieving a lot more than us.

But we should do our best to resist these comparisons. It’s always possible to find someone or something that makes our contribution seem small in comparison. But these comparisons are not very relevant to real decisions. As individuals, the question we should ask is “what’s the best thing I can do with the resources I (actually) have?”.

If there are highly effective ways to do good – where the money could do a lot more good than spending it on yourself – then it’s seriously worth considering donating to them.

While a single person can’t change the entire world, your donations can make an enormous difference to the people they affect.

Further reading


NunoSempere @ 2021-11-23T20:14 (+63)

This article is kind of too "feel good" for my tastes. I'd also like to see a more angsty post that tries to come to grips with the fact that most of the impact is most likely not going to come from the individual people, and tries to see if this has any new implications, rather than justifying that all is good.

For instance, 

More generally, maybe the patterns in the early EA community were more suitable to a social movement without  billionaires, and there are better patterns that we could be executing now. For instance, maybe trying to get prestige outside of EA dominates earning to give now that EA is better funded. Or maybe EA is better funded but you'd still expect most people to have idiosyncratic preferences not shared by central funders.

tylermaule @ 2021-11-23T21:31 (+7)

I believe both this post and Ben’s original ‘Funding Overhang’ post mentioned that this is an update towards a career with direct impact vs earning-to-give.

But earning-to-give is still very high impact in absolute terms.

Benjamin_Todd @ 2021-11-23T22:12 (+16)

Yes, my main attempt to discuss the implications of the extra funding is in the Is EA growing? post and my talk at EAG. This post was aimed at a specific misunderstanding that seems to have come up. Though, those posts weren't angsty either.

Arepo @ 2021-11-26T12:56 (+3)

I agree, this makes me uncomfortable. I feel like the large organisations still lack a lot of transparency in their reasoning - I still don't really understand why OpenPhil don't fill all of the funding gaps in GiveWell's top charities, for example. And this post reads rather like a post hoc justification for this lacuna.

MichaelPlant @ 2021-11-23T22:22 (+49)

[restating and elaborating on what I said on twitter]

Thanks very much for this update, Ben. The "EA has loads of money" meme has unfortunately led people to (incorrectly) assume that everything 'within EA' was fully funded. This made it harder to fundraise, particularly for small orgs, like mine, who do need new donors, because prospective donors assumed they weren't necessary.

Of course, the meme had no impact on organisations that are already fully-funded - which is more or less only those orgs being funded by Open Philanthropy. 

jared_m @ 2021-11-23T17:00 (+25)

Completely agree with the thrust of this post. 

I do have one small phraseology  suggestion here:

I’d especially encourage ‘medium’ donors (e.g. people who might donate $200k - $2m per year) to think seriously about making ‘angel’ donating a significant focus e.g. choose an area to specialise in and spend 1-2 days per month on research...

You might reconsider labeling this group as 'substantial' donors (rather than 'medium' donors) given the historic average of <$10K/year for EA donors. That survey data suggests the $2M donor level is likely close to the far right end of the donor distribution. There may be a risk that framing the $200K-$2M range as 'medium' raises eyebrows and undoes some of the goals of the post. Given efforts to reduce legacy ETG-centric perceptions of EA from the 2010s, I worry this tack could rub those new to EA the wrong way. The average donor may be surprised to learn EA orgs perceive a $1.99M/year donor as medium-sized. They may wonder if a ≤$199K donor would be a small donor, and perhaps a ≤$99K donor is extra-small and (core message of this post aside) near-inconsequential? If a donor with a typical income perceives four- or five-figure annual donations as a source of real financial anxiety — but perhaps 'extra-small' in the lights of EA leaders — that may be demoralizing, and reduce their warm glow for making donations at that scale in the future.

One point I really like is:

making donations also helps to build the effective altruism community, since it’s a hard-to-fake symbol that we’re serious about doing good, and that helps to get more people on board.

For new and small-scale charities, I also hope there's a warm glow on the recipient side when individuals chip in $100 here and there to smaller meta and other charities via opportunities like this.

WilliamKiely @ 2021-11-23T17:59 (+2)

Ben never used the term "medium" again so he could have just written "I’d especially encourage people donating $200k - $2m per year to think seriously..."

Benjamin_Todd @ 2021-11-23T18:20 (+10)

I agree that's better - have changed it.

WilliamKiely @ 2021-11-23T18:00 (+2)

That said, I'm fine with the label "medium donor" for what Ben was referring to because I like language that conveys that the vast majority of donors are "small donors." There are far more people who donate 3-5 figures annually (<$100k per year) than there are who donate 6-8 figures annually ("megadonors") or 9+ figures annually ("gigadonors"). Calling a median EA donor who donates 4 figures a "medium donor" would feel wrong.

Kerry_Vaughan @ 2021-11-23T23:36 (+21)

If I imagine being someone who is new-ish to EA, who wants to do good in the world and is considering making donations my plan for impact, I imagine that I really have two questions here:

  1. Is donating an effective way to do good in the world given the amount of money committed to EA causes?
  2. Will other people in the EA community like and respect me if I focus on donating money?

I think question 2) understandably matters to people, but it's a bit uncouth to say it out loud (which is why I'm trying to state it explicitly).

In the earliest days of EA, the answer to 2) was "yeah, definitely, especially if you're thoughtful about where you donate." Over time, I think the honest answer shifted to "not really, they'll tell you to do direct work." I don't know what the answer is currently, but reading between the lines of the article I'd guess that it's probably closer to "not really" than "yeah definitely."

Assuming that earning to give is in fact quite useful, this seems like a big problem to me! It's also a very difficult problem to solve even for high-status community members.

I'd be interested in thoughts on whether this problem exists today and if so, what individual members of the community can do to fix it.

Aaron Gertler @ 2021-11-24T11:05 (+19)

On (2), I'll say something I've said a few times before on the Forum: I like and respect people who donate money. It seems like a very good character trait to be willing to make sacrifices to help others much more than you could help yourself. 

And feeling any less good about someone's donations because they could be working on a "better" career makes little sense to me — I don't dislike myself for being less than maximally productive in my own career, so extending dislike to someone who (like me, like almost everyone) has chosen a "less-than-maximal-impact" path seems foolish.

Whether someone focuses on donations vs. career may affect certain practical decisions — I think the average donation-focused person will get relatively less out of EA Global than most career-focused people — but it shouldn't affect "status" in the sense of clearly belonging to this community. If someone is behaving in a way that makes others feel lower-status in this situation, they should stop.

(I work at CEA, and while I don't speak for my colleagues, I think that almost all of them would endorse almost everything I've said here.)

MichaelPlant @ 2021-11-24T10:51 (+9)

Yeah, does not seem like a good outcome if people are donating, say, 10% of their salary, then they come to EA events and they get the feeling that people look down their noses at them as if to say "that's it? You don't have an 'EA' job?"

WilliamKiely @ 2021-11-23T17:35 (+20)

Small donors can sometimes beat large donors in terms of cost-effectiveness, and I provide a list of some common ways to do this.

Another common way to do this that you didn't mention: Small donors can use their donations to counterfactually direct matching funds offered by large non-EA donors to highly-effective nonprofits. Common counterfactual donation matches:

I'm not sure how much employer matching goes to EA-aligned nonprofits, but about $1m/year is currently counterfactually directed to EA-aligned nonprofits from the Facebook and Every.org matches. Counterfactual matching opportunities have existed consistently each year since at least 2017. Plausibly they may go away soon, but for the time being they are still exploitable at the margin and definitely offer a way that small donors can outperform large donors from a cost-effectiveness standpoint.

Benjamin_Todd @ 2021-11-23T18:21 (+10)

Makes sense - have added a note to the list.

alfredoparra @ 2021-12-01T07:09 (+1)

And maybe the Double Up Drive? Or would you recommend against donating to it based on previous discussions?

tylermaule @ 2021-11-23T14:09 (+16)

Thanks for writing; I too have worried that many folks got the wrong impression here.

jackmalde @ 2021-11-23T22:52 (+12)

One point I don't think has been mentioned in this post is that a small donation to the Patient Philanthropy Fund could end up being a much larger donation in the future, in real terms, due to likely investment returns. Couple that with probable exogenous learning over time on where/when best to give, and a small donation to PPF now really could do a phenomenal amount of good later on.

More on this in Founders Pledge's report.

MichaelStJules @ 2021-11-23T22:18 (+10)

But if you were to donate $1,000 to CHAI, then either:

1. You expand CHAI’s available funding by $1,000. The cost-effectiveness of this grant should be basically the same as the final $1,000 that Open Philanthropy donated.

2. Or Open Philanthropy donates $1,000 less to CHAI in their next funding round. In this case you’ve been ‘funged’ by Open Philanthropy. But then that means that Open Philanthropy has an additional $1,000 which they can grant somewhere else within their longtermist worldview bucket.

In reality, some combination of the two probably happens. But either way, the effectiveness of your donation is about the same as marginal donations made by Open Philanthropy.

 

I am much more pessimistic about both cases.

In case 1, room-for-more-funding estimates likely indicate thresholds for fairly steeply diminishing returns or even barriers to expansion, e.g. they don't expect to find another individual worth hiring or nearly as good as their last hire, or have the capacity to manage them. If Open Phil is aiming to make sure this threshold is always met or overshot slightly (which they should do, but might not be doing; I don't know; they could also follow up with orgs earlier to fill in missing gaps), then additional funding will have much worse returns, or just roll into future expenses which could have been paid for with future grants or donations, but those will just displace more future grants/donations, and so on. In the case where Open Phil overshot, that would speak poorly for the cost-effectiveness of the last $, so we shouldn't be too happy about matching that.

In case 2, Open Phil might not find anywhere good enough to grant that extra $1,000, or it won't otherwise be used soon and will just offset their own or others' future donations/grants. There is likely a reasonably large gap between the organizations they grant to and those they looked into but didn't grant to in terms of marginal cost-effectiveness (likely smallest in global health and development, at around 5x-10x, because of the size of the field and them not filling GiveDirectly's funding gap), since otherwise they would have made more grants. The fact that Open Phil has been granting <2% of its endowment yearly despite aiming to spend it all within the founders' lifetimes is a sign that they would not find additional similarly cost-effective opportunities. It's not for lack of trying, and it's not that they don't have the money to hire more researchers, either. The reason Open Phil isn't granting to more organizations is likely because there's a big gap in expected cost-effectiveness between its last grants and the next best ones it decided to not do.

At some point, if an organization is funded enough without Open Phil, Open Phil might spend less time evaluating it, and have more time to evaluate others, but this seems unlikely, given that Open Phil's grants usually makes up most of EA charities' budgets.

Benjamin_Todd @ 2021-11-24T00:04 (+3)

There are no sharp cut offs - just gradually diminishing returns.

An org can pretty much always find a way to spend 1% more money and have a bit more impact. And even if an individual org appears to have a sharp cut off, we should really be thinking about the margin across the whole community, which will be smooth. Since the total donated per year is ~$400m, adding $1,000 to that will be about as effective as the last $1,000 donated.

 

You seem to be suggesting that Open Phil might be overfunding orgs so that their marginal dollars are not actually effective.

But Open Phil believes it can spend marginal dollars at ~7x GiveDirectly.

I think what's happening is that Open Phil is taking up opportunities down to ~7x GiveDirectly, and so if small donors top up those orgs, those extra donations will be basically as effective as 7x GiveDirectly (in practice negligibly lower).

 

MichaelStJules @ 2021-11-24T03:10 (+4)

There are no sharp cut offs - just gradually diminishing returns.

An org can pretty much always find a way to spend 1% more money and have a bit more impact.

The marginal impact can be much smaller, but this depends on the particulars. I think hiring is the most important example, especially in cases where salaries make up almost all of the costs of the organization. Suppose a research organization hired everyone they thought was worth hiring at all (with their current management capacity as a barrier, or based on producing more than they cost managers, or based on whether they will set the org in a worse direction, etc.). Or, the difference between their last hire and their next hire could also be large. How would they spend an extra 1% similarly cost-effectively? I think you should expect a big drop in marginal cost-effectiveness here.

Maybe in many cases there are part-time workers you can get more hours from by paying them more.

 

And even if an individual org appears to have a sharp cut off, we should really be thinking about the margin across the whole community, which will be smooth. Since the total donated per year is ~$400m, adding $1000 to that will be about equally as effective as the last $1000 donated.

I think my hiring example could generalize to cause areas where the output is primarily research and the costs are primarily income. E.g., everyone we'd identify to do more good than harm in AI safety research in expectation could already be funded (although maybe they could continue to use more compute cost-effectively?). The same could be true for grantmakers. Maybe we can just always hire more people who aren't counterproductive in expectation, and the drop is just steep, and that's fine since the stakes are astronomical.

 

You seem to be suggesting that Open Phil might be overfunding orgs so that their marginal dollars are not actually effective.

But Open Phil believes it can spend marginal dollars at ~7x GiveDirectly.

I think what's happening is that Open Phil is taking up opportunities down to ~7x GiveDirectly, and so if small donors top up those orgs, those extra donations will be basically as effective as 7x GiveDirectly (in practice negligibly lower).

I agree with this for global health and poverty, but I expect the drop in cost-effectiveness to be much worse in the other big EA cause areas and especially in organizations where the vast majority of spending is on salaries.

Scott Alexander @ 2021-11-25T07:21 (+9)

Thank you for writing this. I've seen a lot of people get confused around this, and it's genuinely pretty confusing, and it's good to have a really good summary all in one place by someone who knows what's going on.

maximumpeaches @ 2021-12-10T03:02 (+8)

The article proposes that the two main ways to be engaged in EA are either a job or donating - but doesn’t mention community building. I think this could be a fundamental flaw in thinking across the EA community and 80,000 Hours (sorry if calling it a flaw hurts anyone’s feelings, but I get the impression people reading this will be okay thinking objectively about whether it’s a flaw). Community building can’t happen because of single individuals, it takes a lot of individuals working together, so I find it striking it’s not mentioned in the article since it’s very much in line with the topic.

It’s possible that EA is still in its infancy and the amount of people working in or donating to EA is minute compared to what we’ll see in 100 years, and that the most important thing for EA could be the growth of the community.

Think of the wild success (in terms of getting people to join and contribute) of something like Catholicism. What if Christ had never come back from Mount Tabor or the desert? What if he had never preached? I just bring up religion to point out how much, as a social movement, it benefited from attracting followers. If you look at Mormons, they have their members do a mandatory program where they go try and convert other people to the religion. What does EA do for community building?

After all, what is EA without the community?

PS I don’t really have evidence for whether community building is more impactful than working or donating. I could only speculate. But I don’t see it mentioned and I think there’s a bias in this EA community to not consider it (although yes it is mentioned here and there it just doesn’t seem to be a focus area from what I’ve read so far).

edit: Some very anecdotal evidence for the importance of community building is that I spent a lot of time researching how to find a more impactful career a couple of years back. Mostly I was focused on computer science careers but easily could have been swayed by reading 80,000 Hours, if I had known it existed! Maybe it is my fault for not researching the right way, but I know there are tons of students out there who want to have impactful careers and only a sliver of them have heard of 80,000 Hours.

michaelchen @ 2021-12-23T00:07 (+6)

I think 80,000 Hours and related organizations like the Centre for Effective Altruism and Open Philanthropy, at least starting in recent years, actually find community-building very valuable, for some of the same reasons that you've mentioned. It's possible that in the context of this post, community-building was just subsumed in the word "career" but not called out explicitly.

Some relevant links:

maximumpeaches @ 2021-12-23T18:47 (+2)

A counterargument is that 80,000 Hours alienates a broader portion of the population that would be essential for movement building. That 80,000 Hours is geared towards only a certain well-educated portion of the population is a known problem (https://80000hours.org/2020/04/which-programmes-within-ea/) that hopefully will be resolved soon.

Thanks for sharing those links. I'd like to check them out. Right now I have a lot of other work to do. My reply is therefore limited. I wanted to share my current line of thinking when I wrote, "I think this could be a fundamental flaw in thinking across the EA community and 80,000 Hours...". Since you've read more on the topic, I'd agree that the intention to promote community building is indeed there.

Moritz Linder @ 2022-04-05T14:54 (+3)

I tried to estimate how many "average-earning" people donating 10% of their income would be needed to match Sam Bankman-Fried's pledged $24 billion.

My estimate comes to 70,000 people (laid out here, please comment if you disagree with an assumption).

This may sound discouraging at first. However, that is also the number of inhabitants of a small city or larger town in Germany. Envisioning all these happy folks matching the donating power of SBF cheers me up.
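(For readers who want to see how an estimate like this can be structured, below is one possible back-of-the-envelope sketch. The income, donation-rate, and time-horizon numbers are illustrative placeholders rather than the assumptions behind the 70,000 figure above, so the resulting head-count comes out different.)

```python
# Back-of-the-envelope: how many people donating 10% of an average income
# over a working lifetime would match a large pledge?
# All inputs are illustrative assumptions.
pledge = 24e9              # pledge to match, in dollars
average_income = 50_000    # assumed average annual income, in dollars
donation_rate = 0.10       # assumed share of income donated
years_of_giving = 40       # assumed length of a donating career

lifetime_donation = average_income * donation_rate * years_of_giving
people_needed = pledge / lifetime_donation

print(f"Lifetime donation per person: ${lifetime_donation:,.0f}")
print(f"People needed to match the pledge: {people_needed:,.0f}")
```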

JeremyR @ 2021-11-24T22:13 (+3)

Personally, I would donate to the Long Term Future Fund over the global health fund, and would expect it to be perhaps 10-100x more cost-effective (and donating to global health is already very good). This is mainly because I think issues like AI safety and global catastrophic biorisks are bigger in scale and more neglected than global health. Coming up with an actual number is difficult – I certainly don’t think they’re overwhelmingly better. 

Not to pick nits, but what would you consider “overwhelmingly better?” 1000x? I'd have said 10x, so curious to understand how differently we're calibrated / the scales we think on.

Benjamin_Todd @ 2021-11-25T10:49 (+2)

There isn't a hard cutoff, but one relevant boundary is when you can ignore the other issue for practical purposes. At 10-100x differences, other factors like personal fit or finding an unusually good opportunity can offset differences in cause effectiveness. At, say, 10,000x, they can't.

Sometimes people also suggest that e.g. existential risk reduction is 'astronomically' more effective than other causes (e.g. 10^10 times), but I don't agree with that for a lot of reasons.

JeremyR @ 2021-11-26T04:17 (+1)

Got it - thanks for taking the time to respond!

WilliamKiely @ 2021-11-23T18:29 (+3)

My contribution feels tiny in comparison

[...] It can easily be demoralising to compare ourselves to others who are achieving a lot more than us.

Personally I don't feel demoralized when I think about others doing a lot more good than me (either through their donations or direct work) and I think that's because I mostly just care that the world/future is better, not who makes it better.

Learning that others are doing a fantastic job making the world better doesn't cause me to think that my expected achievements are smaller than I previously thought. It causes me to think they're smaller in a relative sense, but I don't care about that.

Rather, if anything, learning about very successful do-gooders causes me to update to think that my expected achievements may be larger than I currently think if I have some chance of emulating their extreme success. That's a reason to be happy, not demoralized.

WilliamKiely @ 2021-11-23T19:01 (+21)

The fact that total EA funding increased substantially recently should cause me to update to believe that the marginal cost-effectiveness of donations I make now and over the course of my lifetime will be less than I previously thought, but not that much less cost-effective.

I've long felt that we're nowhere close to the world where the marginal cost-effectiveness of the best giving opportunities is low enough to mean it's not worth donating altruistically. If we lived in a world where the best giving opportunity had GiveDirectly's cost-effectiveness, I'd still find the giving opportunities cost-effective enough to want to donate a substantial amount of my money.

But the reality is we live in a world where GiveWell continues to find giving opportunities that are 7-8x more effective than GiveDirectly, and some giving opportunities in other cause areas seem to be 10-100x more cost-effective than GiveDirectly at the margin. So the small cost-effectiveness update above is not enough to make me doubt whether it's actually worth it to me to donate. It still seems clearly worth it.

MichaelStJules @ 2021-12-26T18:40 (+2)

But if you were to donate $1,000 to CHAI, then either:

1. You expand CHAI’s available funding by $1,000. The cost-effectiveness of this grant should be basically the same as the final $1,000 that Open Philanthropy donated.

If Open Phil's judgement is good enough, and Open Phil was not holding back because they believed CHAI's marginal cost-effectiveness would drop below that of their marginal grantee, but rather for some other reason(s), e.g. donor coordination, the public support test, reducing dependence on Open Phil, then wouldn't this actually normally beat their final $1,000? So, in expectation, we can plausibly beat Open Phil's final $1,000 by topping up their grantees (assuming case 2 goes through well enough).

If Open Phil makes individual donor recommendations (and in the past, they've written why they haven't fully funded a given opportunity), then we can just follow those. It looks like they haven't been recommending the most well-known large EA organizations at all, though. Does Open Phil think they're fully funding these organizations (anticipating what those orgs will raise through other means)? If so, we should perhaps expect to be in case 2 almost all of the time.

Benjamin_Todd @ 2021-12-29T12:50 (+2)

Good point - seems plausible that it's a little more effective than their final $1000.

liu_he @ 2021-12-04T09:18 (+1)

Unfortunately, this post doesn't quite persuade me that small donors can be impactful compared to large donors. The gist of the post seems to be that, as long as there are professional EA fund managers, small donors may achieve a similar level of marginal impact. This seems clear enough. Since EA grant evaluators typically regrant unrestricted funding, they will just treat any dollar - whether from large or small donors - as the same. Everyone's allowed to save lives at $3,000 per life.

However, if the EA movement is asking the question 'if we needed X amount of dollars, who should we approach', would small donors still be the answer? I think this is the sort of 'impact' that people question, i.e. where do we expect impact to predominantly come from. Within EA, small donors make up about 1/10 of Good Ventures + FTX. To be of comparable impact, small donors need to be 10x more effective.

Of course, we also need to consider where the baseline is. Would a 1/10 impact compared to large donors be decent enough for small donors collectively? As a MOVEMENT that does the MOST good, should we see small donors that give to AMF as impactful because they save lives at $3,000 each - and that's very good for the world; or unimpactful because the number of people saved through such giving is expected to be much lower than what large donors are doing? These are probably the key considerations to determine how the impact of small donors should be viewed. Just discussing the marginal impact of small donors doesn't quite do it for me.

minthin @ 2021-11-30T18:23 (+1)

Thanks for highlighting the potential of small donors to support political candidates and campaigns. In the US, if you expect to take the standard deduction on your taxes, you have a fantastic opportunity to leverage your voice in the democratic process.

Garrison @ 2021-11-29T22:16 (+1)

FYI, the link in this sentence is incorrect: This might be starting with the recent increase in funding for GiveWell.

Benjamin_Todd @ 2021-11-30T12:29 (+2)

Thanks, fixed. (https://twitter.com/ben_j_todd/status/1462882167667798021)

Yonatan Cale @ 2021-11-25T11:43 (+1)

Maybe we should collect the angel donation ideas somewhere?

AppliedDivinityStudies @ 2021-11-24T12:03 (+1)

I believe that GiveWell/OpenPhil often try to avoid providing over 50% of a charity's funding to avoid fragility / over-reliance.

Is an upshot of that view that personal small donations are effectively matched 1:1?

I.e. suppose AMF is 50% funded by GiveWell: when I give AMF $100, I'm allowing GiveWell to give another $100 without exceeding the threshold.

Curious if anyone could corroborate this guess.

WilliamKiely @ 2021-11-24T19:23 (+10)

I believe that GiveWell/OpenPhil often try to avoid providing over 50% of a charity's funding to avoid fragility / over-reliance.

Holden Karnofsky clarified on the 80,000 Hours podcast that Open Phil merely feels nervous about funding >50% of an organization's budget (and explained why), but often does fund >50% anyway.

Is an upshot of that view that personal small donations are effectively matched 1:1?

Holden thinks that there is some multiplier there, but it's less than 1:1:

And I do think there is some kind of multiplier for people donating to organizations, there absolutely is, and that’s good. And you should donate to EA organizations if you want that multiplier. I don’t think the multiplier’s one-to-one, but I think there’s something there.

Full excerpt:

Rob Wiblin: A regular listener wrote in and was curious to know where Open Phil currently stands on its policy of not funding an individual organization too much, or not being too large a share of their total funding, because I think in the past you kind of had a rule of thumb that you were nervous about being the source of more than 50% of the revenue of a nonprofit. And this kind of meant that there was a niche where people who were earning to give could kind of effectively provide the other 50% that Open Phil was not willing to provide. What’s the status of that whole situation?

Holden Karnofsky: Well, it’s always just been a nervousness thing. I mean, I’ve seen all kinds of weird stuff on the internet that people… Games of telephone are intense. The way people can get one idea of what your policy is from hearing something from someone. So I’ve seen some weird stuff about it “Open Phil refuses to ever be more than 50%, no matter what. And this is becoming this huge bottleneck, and for every dollar you put in, it’s another dollar…” It’s like, what? No, we’re just nervous about it. We are more than 50% for a lot of EA organizations. I think it is good to not just have one funder. I think that’s an unhealthy dynamic. And I do think there is some kind of multiplier for people donating to organizations, there absolutely is, and that’s good. And you should donate to EA organizations if you want that multiplier. I don’t think the multiplier’s one-to-one, but I think there’s something there. I don’t know what other questions you have on that, but it’s a consideration.

Rob Wiblin: I mean, I think it totally makes sense that you’re reluctant to start approaching the 100% mark where an organization is completely dependent on you and they’ve formed no other relationships with potential backup supporters. They don’t have to think about the opinions of anyone other than a few people at Open Phil. That doesn’t seem super healthy.

Holden Karnofsky: Well, not only do they… I mean, it’s a lack of accountability but it’s also a lack of freedom. I think it’s an unhealthy relationship. They’re worried that if they ever piss us off, they could lose it and they haven’t built another fundraising base. They don’t know what would happen next, and that makes our relationship really not good. So it’s not preferred. It doesn’t mean we can never do it. We’re 95% sometimes.

Rob Wiblin: Yeah, it does seem like organizations should kind of reject that situation in almost any circumstance of becoming so dependent on a single funder that to some extent, they’re just… Not only is the funder a supporter, but they’re effectively managing them, or you’re going to be so nervous about their opinions that you just have to treat them as though they were a line manager. Because you know so much more about the situation than the funder probably does, otherwise they would be running the organization. But accepting that, so you’re willing to fund more than 50% of an organization’s budget in principle?

Holden Karnofsky: Yeah.

Rob Wiblin: But you get more and more reluctant as they’re approaching 100%. That does mean that there is a space there for people to be providing the gap between what you’re willing to supply and 100%. So maybe that’s potentially good news for people who wanted to take the earning to give route and were focused on longtermist organizations.

Holden Karnofsky: Yeah, and I think the reason it’s good news is the thing I said before, which is that it is good for there not to just be one dominant funder. So when you’re donating to EA organizations, you’re helping them have a more diversified funding base, you’re helping them not be only accountable to one group, and we want that to happen. And we do these fair-share calculations sometimes. So we’ll kind of estimate how much longtermist money is out there that would be kind of eligible to support a certain organization, and then we’ll pay our share based on how much of that we are. And so often that’s more like two thirds, or has been more like two thirds than 50%. Going forward it might fall a bunch. So I mean, that’s the concept. And I would say it kind of collapses into the earlier reason I gave why earning to give can be beneficial.

Benjamin_Todd @ 2021-11-24T12:39 (+5)

I think this dynamic has sometimes applied in the past.

However, Open Philanthropy are now often providing 66%, and sometimes 100%, so I didn't want to mention this as a significant benefit.

There might still be some leverage in some cases, but less than 1:1. Overall, I think a clearer way to think about this is in terms of the value of having a diversified donor base, which I mention in the final section.

Neel Nanda @ 2021-11-24T15:37 (+6)

There might still be some leverage in some cases, but less than 1:1.

If they have a rule of providing 66% of a charity's budget, surely donations are even more leveraged? $1 to the charity unlocks $2. 

Of course, this assumes that additional small donations to the charity will counter-factually unlock further donations from OpenPhil, which is making some strong assumptions about their decision-making

Benjamin_Todd @ 2021-11-24T16:23 (+4)

That's fair - the issue is there's a countervailing force in that OP might just fill 100% of their budget themselves if it seems valuable enough. My overall guess is that you probably get less than 1:1 leverage most of the time.
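To make the arithmetic in this subthread explicit: if a large funder mechanically kept its share of a charity's budget at a fixed fraction (which, as Holden and Ben note above, is a strong assumption rather than how Open Phil actually behaves), the implied leverage per outside dollar would be:

```python
def leverage(funder_share: float) -> float:
    """Extra dollars the large funder can add per outside dollar while
    keeping its share of the total budget fixed at funder_share."""
    return funder_share / (1 - funder_share)

# A hard 50% cap would imply a 1:1 match; a 2/3 cap would imply $1 unlocking $2.
for share in (0.5, 2 / 3):
    print(f"Funder share {share:.0%}: ${leverage(share):.2f} unlocked per $1")
```

Since the share rule is soft in practice, the actual multiplier is plausibly below these figures, and below 1:1, as the comments above suggest.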

Ramiro @ 2021-11-23T17:22 (+1)

Plus, section 2 made me wonder if something like this would be feasible: large donors could disclose a list of potential grantees and projects they are considering for funding (and part of their respective analysis), then let small donors provide part of the necessary amount, and then top up the rest of the funding themselves. I mean, this could arguably leverage their donations and maybe establish some information exchange between small and large donors… A bit like a VC, but maybe more widespread.

Yonatan Cale @ 2021-11-23T17:15 (+1)

You said:

The cost-effectiveness of this [small $100] grant should be basically the same as the final $1,000 that Open Philanthropy donated.

The counter argument that comes to mind (I have no idea if it's true) :

If Open Philanthropy donate to a longtermist cause, won't Open Phil donate all the money that this org needs (before it hits significant diminishing returns)? Is there still room for more funding? 

I don't know

Benjamin_Todd @ 2021-11-23T18:24 (+8)

This is the problem with the idea of 'room for funding'. There is no single amount of funding a charity 'needs'. In reality there's just a diminishing return curve. Additional donations tend to have a little less impact, but this effect is very small when we're talking about donations that are small relative to the charity's budget (if there's only one charity you want to support), or small relative to the EA community as a whole if you take a community perspective.

Ramiro @ 2021-11-23T15:51 (+1)

Thanks for the post. This has changed my mind a bit...
I'm particularly attracted to the argument in section 2 and the "Other Benefits..." above. What got me thinking, though, is your section 3: it is sound, but I think I (and likely other small donors) miss a more detailed framework to deal with informational costs.

First, I feel psychologically attracted to the idea large donors play the “angel investor” or VC role, while  small investors are often drawn to "safe portfolios" with lower variance / risk... On the other hand, the analogy shouldn't be applicable: I don’t measure my returns in philanthropy the same way I do with personal investments, and I think there's no case for something like an EMH in philanthropy, so I could deal with a risky portfolio – that’s why I’m pretty ok with donating to longtermist causes. The real problem is uncertainty: I won't regularly donate to a cause / project that I can’t quite understand, or where it's impossible to learn or observe improvements, even though it may score high in preliminary ITN-like CBA. But if there’s someone else I can trust vetting it, I can be OK with that.

Now, the case where I might have something like "private information" on the impact of a project - the "support people you know" advice - is the interesting one. A detour: this reminds me of a friend of mine who, instead of using financial markets like everyone else, would provide loans to acquaintances with stable jobs and high income, and make a lot of money with that – since he could sidestep the information asymmetry plaguing banks. But, eventually, a friend would default, and now he had some trouble collecting the money… it was no tragedy, but he realized he’d neglected social costs, biases, and that he wasn’t so great at screening… Thus I imagine that, if I wanted to fund a grant to a skilled independent researcher I know, or to a new EA group, I’d be in an analogous situation. So, even if I were pretty confident these projects are great and underfunded, I’d still want some sort of professional external opinion vetting it – maybe even want to totally outsource this decision, so avoiding the social cost of having to discontinue funding if the evidence ends up requiring it. And, of course, this kind of applies to personal projects - even if you know better than anyone else what you could do, you could be particularly bad at deciding when to stop.


I think there could be some way to solve / mitigate this issue - maybe having a group of small donors interested in providing advice, or funding each other's "support people you know" projects, so you could have an external opinion on it, dilute and cap risks, and have an excuse to cut the funding... But that's just what popped in my brain now.