More funding is really good

By Zach Stein-Perlman @ 2022-06-25T22:00 (+46)

Inspired by Matt Yglesias[1]

Let's say money is part of "EA funding" if the person/system directing it is roughly aiming at doing as much good as possible and considering options like saving a child's life for roughly $4,500,[2] funding Open Philanthropy's last dollar,[3] and investing to spend later.[4] Then marginal EA funding goes to interventions that the person/system directing it believes are at least as good as these. These interventions are really good. Therefore, marginal EA funding is prima facie really good.

As long as there exist cost-effective interventions to throw money at, EA is funding constrained.

  1. ^

    "It feels like there are three pieces per week on EA Forum with the thesis that an increase in EA funding could be counterintuitively bad and nobody ever [writes] a post with the boring but more correct-sounding thesis that it’s good. I guess my slightly spicy EA take is that there's too much complacency about not being funding constrained, and it would actually be really useful to raise dramatically more money."

  2. ^

    Actually, I can't find GiveWell's marginal cost-effectiveness estimates, but my sense is that they've found interventions with average cost-effectiveness better than $4,500 per child saved and that scale without much cost increase. [Update: see comments.]

  3. ^

    Open Philanthropy's last dollar project.

  4. ^

    Assuming you can get at least 8% expected real returns per year, the wealth of the rest of the world grows by at most 4% per year, and influence is proportional to your share of the world's wealth.
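
To make footnote 4 concrete, here is a minimal Python sketch of the implied growth in influence, assuming the footnote's 8% and 4% figures; the 0.01% starting share of world wealth is an illustrative number, not from the post:

```python
# Toy model of footnote 4: a fund earning 8% real returns per year
# while the rest of the world's wealth grows at 4%. "Influence" is
# taken to be the fund's share of total wealth. The 0.01% starting
# share is illustrative, not from the post.

def share_of_world_wealth(years, fund_return=0.08, world_growth=0.04,
                          initial_share=0.0001):
    """Fund's share of total wealth after `years` years of compounding."""
    fund = initial_share
    rest = 1.0 - initial_share
    for _ in range(years):
        fund *= 1 + fund_return
        rest *= 1 + world_growth
    return fund / (fund + rest)

for years in (0, 25, 50, 100):
    print(f"after {years:3d} years: {share_of_world_wealth(years):.4%}")
```

Under these assumptions the fund's share grows by a factor of about 1.08/1.04 ≈ 1.038 per year, i.e. it roughly quadruples every ~36 years.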


Larks @ 2022-06-25T23:05 (+4)

my sense is that they've found interventions with average cost-effectiveness better than $4,500 per child saved and that scale without much cost increase.

Seems plausible, but on the other hand GiveWell decided to hold on to money instead of spending it immediately, apparently because of local scaling limits:

In 2021, we may need to direct as much as $560 million. While we have an excellent team of 22 researchers working on this full time, we haven’t been able to hire quickly enough to match our incredible growth in funds raised.

This year, we expect to identify $400 million in 8x or better opportunities. If our fundraising projections hold, we may have $160 million (or more) that we’re unable to spend at our current bar.
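
The quoted figures imply a simple gap; a quick sketch of the arithmetic (the numbers come from the excerpt above, the "gap" framing is mine):

```python
# Arithmetic implied by the GiveWell excerpt above.
may_need_to_direct = 560e6   # funds GiveWell may need to direct in 2021
identified_at_bar = 400e6    # opportunities identified at 8x cash or better
gap = may_need_to_direct - identified_at_bar
print(f"${gap / 1e6:.0f}M potentially unspendable at the current bar")  # $160M
```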

Luke Freeman @ 2022-06-27T03:27 (+10)

FWIW: GiveWell actually already had some opportunities in the pipeline that they were still working on (e.g. Dispensers for Safe Water). Given the funding needs of their top charities right now, it looks very likely there will be more room for more funding than GiveWell can fill this year (unless there's unprecedented growth, which seems unlikely given current economic projections). At the GiveDirectly bar (interventions 10%-30% as cost-effective as GiveWell's top charities), there's nowhere near enough funding for the foreseeable future.

Zach Stein-Perlman @ 2022-07-05T22:13 (+9)

(Update: yup)

Zach Stein-Perlman @ 2022-06-25T23:13 (+2)

Yeah, I don't know much about this; if someone has a well-justified estimate of the marginal cost-effectiveness of global health & development interventions, I'd love to see it.

Zach Stein-Perlman @ 2022-07-15T22:30 (+2)

Update: GiveWell funds some interventions at more like $10K/life, which naively suggests that the marginal cost per life saved is about $10K. But maybe those interventions had side benefits, like gaining information or enabling future interventions, and so had greater all-things-considered effectiveness.
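
To illustrate why funding some interventions at ~$10K/life pins down the marginal (not the average) cost: if opportunities are funded from cheapest to most expensive, the marginal cost per life is the cost of the last opportunity funded. A toy sketch; the budgets and costs below are invented for illustration, not GiveWell's numbers:

```python
# Invented portfolio: (budget in dollars, cost per life saved),
# funded in order of increasing cost per life.
opportunities = [(100e6, 4_500), (50e6, 7_000), (30e6, 10_000)]

total_spend = sum(budget for budget, _ in opportunities)
total_lives = sum(budget / cost for budget, cost in opportunities)

print(f"average cost per life:  ${total_spend / total_lives:,.0f}")  # ~$5,562
print(f"marginal cost per life: ${opportunities[-1][1]:,.0f}")       # $10,000
```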

Zach Stein-Perlman @ 2022-07-26T15:50 (+2)

And:

That would really, really help us make AI go well. Until we can do that, more funding is astronomically valuable. (And $10T is more than 100 times what EA has.)

Question Mark @ 2022-06-25T23:10 (+1)

Do you know of any estimates of the impact of more funding for AI safety? For instance, how much would an additional $1,000 increase the odds of the AI control problem being solved?

Zach Stein-Perlman @ 2022-06-25T23:13 (+7)

I don't know of particular estimates. I do know that different (smart, reasonable, well-informed) people would give very different answers -- at least one would even say that the marginal AI safety researcher has negative expected value.

Personally, I'm optimistic that even if you're skeptical of AI safety research in general, you can get positive expected value by (as a lower bound) doing something like giving money to particular researchers whose judgment you trust, for them to fund researchers they think are promising.

My guess is that the typical AI-concerned community leader would say at least a one-in-10-billion chance for $1,000.
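
For concreteness, that kind of guess cashes out as a simple expected-value calculation. In the sketch below, the value of solving the problem is an illustrative stand-in (roughly the present world population in lives), not a figure from this thread:

```python
# Expected value of a $1,000 donation under the guess above.
p_solve_per_1000 = 1e-10   # "at least a one-in-10-billion chance for $1,000"
value_if_solved = 8e9      # illustrative: ~world population in lives

ev_lives = p_solve_per_1000 * value_if_solved
print(f"expected lives per $1,000: {ev_lives:.2f}")               # 0.80
print(f"implied cost per expected life: ${1000 / ev_lives:,.0f}")  # $1,250
```

Even a one-in-10-billion chance per $1,000 yields a non-trivial expected value when the stakes are this large.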