AI safety is extremely bottlenecked on grantmakers
By lukeprog @ 2026-05-13T18:42 (+80)
Last month, Anthropic announced Mythos Preview, the most powerful cyberweapon in history, capable of finding and exploiting zero-day vulnerabilities in every major operating system and web browser. Meanwhile, many frontier AI company employees increasingly expect full automation of AI R&D in the next year or two, followed by the rapid automation of thousands of other important tasks and jobs.
This pace of technological change is unprecedented, and the world is not prepared. Very little of the commercial, government, and nonprofit infrastructure we need to respond to these transformative changes has been built.
To meet this challenge, dozens of philanthropists are hoping to deploy tens of billions of dollars in philanthropy and impact investments in AI safety and governance in the next several years alone.[1] But most of this capital is bottlenecked on a tiny number of grant and investment advisors who can identify and vet specific funding opportunities, and create new ones by headhunting project founders.
That's why the AI teams at Coefficient Giving (CG) are hiring grantmakers and senior generalists, and why I think the next people we hire will be among the highest-leverage people in AI safety.[2] Please apply here.
As a new AI grantmaker at CG,[3] you'd likely move >$30 million, and plausibly >$100 million, in your first year, funding dozens or hundreds of people to work full-time on projects we think will address catastrophic risks from AI. Because grant investigation capacity is tight, hiring one fewer grantmaker usually means those millions will just sit in an account for another year rather than being deployed to useful ends. And when a strong candidate turns down a CG offer, the result is often not “a slightly-less-good grantmaker,” it’s just one fewer grantmaker. We routinely close rounds with fewer hires than we'd planned for.
We fund a mix of:
1. proposals that come our way via a Request for Proposals or otherwise, often with some creative steering and reshaping by the investigator
2. renewals of past grantees, with a special focus on ambitiously scaling up the best performers
3. strategy-driven creation of new grantees. We do this by (a) identifying a critical gap in the ecosystem, (b) headhunting a strong founder for a new project that would address the gap, and (c) helping them spin up the new project quickly and ambitiously. There are dozens of new projects we think need to be spun up, e.g. (i) a high-credibility AI company scorecard project, (ii) projects to build and advocate for better chain-of-thought monitoring or agreement-verification technology, (iii) additional specialized third-party auditors, and many more.
As our AI timelines shorten, we've shifted more focus to (3) since many critical gaps remain that we haven't gotten good applications for. We've had strong success with this so far, but the strategy work and headhunting of (3) requires far more staff capacity per dollar moved than (1) or (2) do, so we need to grow our grantmaker capacity as quickly as we can.[4] (Also, to make this shift we had to close this RFP, but we'd rather have the staff capacity to do both!)
CG is an excellent place to do this work, because we have (among other things):
- Resources. We expect to move in the neighborhood of $1 billion in AI grantmaking from Good Ventures (our primary funding partner) in 2026, plus more from dozens of other AI safety funders we are advising, some of which have billions in philanthropic capacity.
- Experience. Our staff have more AI safety grantmaking experience than anyone else. We've made hundreds of AI grants since 2015, and we benefit from over a decade of learning via (a) watching what impact those grants did or didn't have, and (b) special funder access to private information about grantees and grantee impacts.
- Strong colleagues. I won't belabor this, but CG is a talent-dense organization full of thoughtful, capable, and deeply kind people, all of whom are working toward common goals.
Please apply here, and help address a key bottleneck to helping the world prepare for the arrival of transformative AI. We recently extended the application deadline to May 24 due to insufficient applications, so your application could really change how many people we are able to hire!
This post is written from an AI team's perspective. CG's Biosecurity & Pandemic Preparedness team is also hiring, but I'll let people closer to that work speak to it. See e.g. here. ↩︎
For the rest of this post I'll focus on grantmaking rather than grantmaking and impact investing, since CG advises more grants than impact investments. ↩︎
Available founders are another bottleneck for (3), but grantmaker capacity can be converted into additional founders by spending more time on founder search, and much of our success with (3) so far has come from finding people outside our immediate networks who have been successful at building large new grantees addressing critical gaps. ↩︎
Marcus Abramovitch 🔸 @ 2026-05-15T02:39 (+36)
I've seen a lot of posts saying we need a lot more AI safety grantmakers. I feel like I want to do a bit of rough math and just see if that's the case. There is this estimate for the number of FTEs in AI safety by Stephen McAleese from 2022 and Sept 2025. Let's extrapolate exponential growth and say there are ~1400 FTEs on AI safety right now. Let's also assume from Julian Hazell's post that there are ~50 full-time AI safety grantmakers (though I think it's probably a bit more than that, given CG, Astralis, Astera, Longview, SFF, independent grantmakers, FLI, UK AISI, ARIA, AISTOF, Navigation Fund, people at Schmidt, Macroscopic, LTFF, Bluedot grants, Manifund, Tarbell, etc.).
From what I know about CG and other grantmakers, the people there are quite talented, and I would speculate are more talented than the average grantee.
Right off the bat, that means about 28 FTEs are working in AIS per grantmaker. Not to mention, a lot of the people who work full-time in AIS are at frontier labs, at other for-profit companies like Goodfire, or in government (like UK AISI or CAISI), and don't need grantmakers to evaluate or fund their work. But we can ignore all those and just stick with the 28 FTE number.
I think I would expect the average grantmaker to be able to handle more than that, especially since a typical organization has ~10 FTEs (I just asked Claude), and I expect a typical grantmaker to handle much more than 3 grants.
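The back-of-envelope arithmetic above can be written out in a few lines. The inputs are just the rough figures quoted in this thread (~1400 FTEs, ~50 grantmakers, ~10 FTEs per org), not verified data:

```python
# Rough check of the ratios discussed above. All inputs are the
# approximate figures quoted in this comment thread, not verified data.

ftes_in_ai_safety = 1400   # extrapolated FTE estimate for AI safety
grantmakers = 50           # assumed number of full-time AIS grantmakers
ftes_per_org = 10          # rough average organization size

ftes_per_grantmaker = ftes_in_ai_safety / grantmakers
orgs_per_grantmaker = ftes_per_grantmaker / ftes_per_org

print(f"FTEs per grantmaker: {ftes_per_grantmaker:.0f}")        # 28
print(f"Implied orgs (grants) per grantmaker: {orgs_per_grantmaker:.1f}")  # 2.8
```

So under these assumptions, each grantmaker would only need to cover about 3 organization-sized grants, which is where the "much more than 3 grants" intuition comes from.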
Also, I suspect a lot of grants look a lot more like renewals, and so don't need nearly as much review. For example, I'd expect grants to MATS and Redwood to look a lot more like reviewing their plans and signing off on them.
What am I missing?
cb @ 2026-05-15T16:02 (+11)
(I work as a grantmaker at CG, but I’m speaking for myself not Luke here)
- FWIW, I’d expect ~1400 FTE to be an underestimate for “how many FTE work on AI governance/policy or technical AI safety”.
- I don’t have a more up-to-date estimate, sorry. I think Stephen’s estimates were off in 2025 (e.g. undercounting lab staff and missing researchers in academia doing AIS-relevant work), and I think his model is weird, so I don’t really trust estimates based on it.
- I think that the ratio of grantmakers:people in AIS isn’t that informative for answering the question, “is the number of grantmakers a bottleneck”. (The better ratio is presumably something like, “people who could be working in AI safety/governance if they got funding: grantmakers”).
- Lots of our grantmaking, and especially strategy-driven creation of new grantees (Luke’s #3 item on what we fund), pulls talent into AI safety/AI governance, increasing the number of FTEs doing direct work. This is one of our top priority lines of work.
- We currently feel more constrained by evaluating or creating new funding opportunities than directing $ to them.
- Some kinds of grantmaking are very time-intensive, especially strategy-driven creation of new grantees.
- Some renewals can be (and are!) made very quickly, but in other cases we think it’s useful for grantmakers to spend longer working with grantees.
- Grantmakers can give feedback on grantees’ plans, encourage them to expand into new areas or be more ambitious, and help new initiatives go better (e.g., by helping them make key hires, advising on object-level decisions, having conversations about their strategy, and connecting them to relevant people). So renewals aren’t merely a passive “accept/reject” process.
- One example of this kind of work is my colleague Abbey working closely with Heron AI Security on a shortlist of pressing problems in infosecurity, and Heron’s new programs addressing this.
Jo_🔸 @ 2026-05-15T10:30 (+5)
I appreciate you raising this and I'm interested to see how people will answer. However, some weak counter-considerations come to mind:
- Many grants go to individual independent researchers for relatively short periods of time, requiring much more grantmaking time per FTE funded than in other areas (say, global health). I realize this is perhaps not that strong a consideration, given that many grantmakers mainly fund organizations
- Grantmakers want the field to keep growing quickly, and I assume "creating new grantees" is time-consuming. That means that the current ratio of grantmakers to AI safety FTEs is maybe not the best metric
David T @ 2026-05-16T10:38 (+4)
If accurate, that ratio of grantmakers to employed specialists looks rather low compared with what I understand it to be in many other fields. I'm thinking of fields like space technology, which has 75-page grant applications requiring specialist knowledge to evaluate and monitor, and government subsidy programmes whose application volume is high enough to produce <5% funding rates and which have painful audit requirements.
I also wonder how much EA organizations use part-time external reviewers to evaluate grants, which is the standard way of broadening evaluations and removing bottlenecks? (Although I can see that getting AI specialists who both work in industry/research and are truly independent might be more challenging.)
Robi Rahman🔸 @ 2026-05-13T21:51 (+8)
> And when a strong candidate turns down a CG offer, the result is often not “a slightly-less-good grantmaker,” it’s just one fewer grantmaker. We routinely close rounds with fewer hires than we'd planned for.
Why? Shouldn't you make an offer to the runner-up?
lukeprog @ 2026-05-13T23:06 (+10)
In some cases we do, but we have a high bar for overall "fit for the role," we don't hire people below that bar, and so we often end a hiring round with too few applicants above our bar (as far as we can assess at that time, with limited investment by both us and the candidate). We maintain this high bar for several reasons, one of which is that managers' time is also scarce and has high opportunity cost.
Sophie Kim @ 2026-05-14T20:42 (+5)
Strong upvoted! Great post, definitely agree more people should consider transitioning into grantmaking. Especially since research is so power-law distributed, I think many current technical / governance researchers would have much higher counterfactual impact deploying tens of millions of dollars as opposed to e.g. writing another paper. Downstream of a similar post I wrote, I'm currently working on a project to address the grantmaker bottleneck. Would be keen to connect! Have DM'd.
Also, for any grantmakers reading this, please reach out to me if you're interested in e.g. helping create a BlueDot AI Safety Grantmaking Fundamentals course curriculum or doing mentorship!