What it's like to be an AI safety grantmaker (and why we need more of them)

By JulianHazell @ 2026-03-30T18:15 (+47)

This is a linkpost to https://open.substack.com/pub/secretthirdthingai/p/what-its-like-to-be-an-ai-safety

TL;DR

Here are the key points I want you to take away from this post:

  1. There are maybe 30 to 60 people in the world doing AI safety grantmaking, collectively directing hundreds of millions of dollars a year. Soon, more than $1B will be directed per year, and potentially multiple billions.
  2. AI safety grantmaking orgs like Coefficient Giving (CG) have a strong track record of counterfactually seeding impactful organizations and careers.
  3. Grantmaking involves a lot more than evaluating a stack of inbound proposals. You also proactively generate new grants (e.g., headhunting founders, designing new funding programs), provide strategic advice to grantees, write memos that shape funding strategy, and generally serve as connective tissue in the ecosystem.
  4. The AI safety grantmaking ecosystem is currently leaving good grant opportunities on the table due to a lack of grantmaker capacity. This is bad.
    1. More grantmakers would also unlock more capital, because funders are more willing to write cheques when there are people who can find and vet promising opportunities.
  5. Not everyone who reads this should rush to become a grantmaker. Direct work is great, and the entire ecosystem is talent-starved in so many ways. But my sense is that grantmaking is underrated relative to other paths that high-context AI safety people tend to consider, like research or policy.
  6. Grantmaking also has some real downsides — you won’t go as deep as you might want to, the work is largely invisible, active grantmaking can be frustratingly poorly scoped, and saying no to people is hard. I discuss these in the appendix.

Intro

A few weeks ago, I wrapped up my two-and-a-half-year stint as a grantmaker on the AI governance and policy team at Coefficient Giving (or “CG”). I’ll soon be joining Astralis Foundation to work on their grantmaking strategy.

CG was my first real, full-time, big boy AI safety job after finishing grad school. The EA part of me wishes I could tell a story where I sat cross-legged in an ivory tower, thinking (mostly from first principles, of course) about how I can most reduce existential risk from ASI, whereupon I decided grantmaking was the most impactful path to pursue.

Nope. I took this job for reasons like:

Fortunately, I ended up quickly concluding that grantmaking is a very high-leverage role in the AI safety ecosystem. Thus my main goals here are to (a) attempt to demystify what grantmakers do and (b) make the case that grantmaking is being underrated as a career opportunity by high-context AI safety people.

I’ll also try to address some common misconceptions about things like the marginal value of more grantmakers, mention some downsides of the role, and outline a basic call to action.

What do grantmakers do?

Grantmaking mostly involves three key activities: (1) evaluating inbound grant proposals (or “passive grantmaking”), (2) proactively generating new grants (or “active grantmaking”), and (3) a grab-bag of non-grantmaking activities.

I’ll describe each of these below in more detail, but the basic idea is that grantmakers are in the business of figuring out what the AI safety ecosystem needs and then taking advantage of the biggest levers available to them to make it happen.1

Passive grantmaking

This is what most people picture when they think about what grantmakers do. Someone comes to you with a proposal, you read it, conduct an investigation, and if you think it’s worth funding, write up a recommendation that senior people at your organization can get behind.

A common misconception about passive grantmaking is that it basically just involves hitting accept or reject. That’s false: you have a lot of levers at your disposal to shape an inbound grant into something even better. You can give the applicant feedback on their theory of change/plans/strategy, ask them for a budget that scales up one workstream and scales down another, lengthen or shorten the grant period, make the second half of the grant conditional on hitting certain milestones, suggest they hire for a role they hadn’t considered, and/or push them to be even more ambitious.

Active grantmaking

There’s a second “style” of grantmaking called active grantmaking.

Instead of waiting for exciting proposals to land on your desk, you go out and actually make things happen. For instance, you could write up a project proposal for a new organization focused on a sub-problem you think is important and pitch a number of potential founders to start it. You could also design and advertise a new funding program from scratch (e.g., an RFP or something like CG’s CDTF program), and/or pitch an existing grantee to start a new workstream.

Active grantmaking requires you to develop models of what to prioritize. You have to form views on questions like:

You develop these views through a mix of reading (papers, memos, blog posts, Slack messages), talking to a lot of people (researchers, founders, policy experts, other grantmakers), and occasionally just sitting with a hard question for a while.

That said, even at an organization that’s been grantmaking for a decade, there are a surprising number of important areas that few people have spent much time digging into. Even just a week or so of shallowly investigating an area that had been on the team’s radar but never properly investigated can surface genuinely exciting opportunities. Every organization has blindspots, and sometimes the highest-value thing a new grantmaker can do is simply be the first person to take a serious look at something that seems vaguely promising.

Non-grantmaking activities

A surprising amount of the job doesn’t involve making grants directly. As a grantmaker, you can easily spend a decent chunk of your time on high-value things like:

Throughout my time at CG, I’d guess I spent a third of my time on non-grantmaking activities.2

Why I think grantmaking is an underrated source of impact

Grantmaking has a strong track record

During my time at CG, I saw first-hand how a number of small, speculative grants made years ago helped create organizations that are now pillars in the AI safety ecosystem.

Alexander Berger (CG’s CEO) recently shared an example of this:

“Many of the grantees that have gone on to be among our most important and impactful didn’t start off looking that way at all. For instance, we made our first $250,000 grant to the program that would eventually become ML Alignment & Theory Scholars (MATS) in 2019, when it was a side project by some students affiliated with the Stanford Existential Risks Initiative who thought there should be a summer program to prepare software engineers for careers in AI safety. The MATS 1.0 cohort had 5 fellows and no permanent full-time staff. They have since expanded to run multiple cohorts a year of around 100 scholars with an admission rate of 4-7%, and report that over 80% of their alumni are now working full-time in AI safety and security (accounting for a meaningful portion of safety staff at some of the biggest companies and government institutes).”

There are more examples in this piece that CG published back in October 2025. I also like this anecdote about how Jake Mendel encouraged the folks at Theorem to be even more ambitious with their plans, and this post Asya Bergal wrote about CG’s capacity building efforts.3

Unfortunately, many of the most impressive wins I’m familiar with are fairly sensitive, so I kinda just have to unsatisfyingly gesture at a couple of fairly well-known examples and say “trust me bro”. If you’re seriously considering a grantmaking career and this is a crux for you, my advice would be to ask for more evidence directly from grantmakers you speak with. Maybe they’ll have a few less obvious examples they can share.

The ratio of AI safety philanthropic capital to grantmakers is kinda wild

Here’s something that I think people really don’t appreciate: there are maybe 30-60 FTE in the world doing the object-level work of investigating and recommending AI safety grants.4

These people collectively directed hundreds of millions of dollars a year in 2025. In 2026, I expect this number to be greater than a billion, with potentially enormous growth coming in the next few years as AI safety issues grow in urgency and salience. Depending on how you do the math, you’re looking at each grantmaker being responsible for directing tens of millions of dollars per year. That’s an extraordinary amount of leverage.

Of course, basically the entire AI safety ecosystem is talent-starved, so these figures can’t fully carry the argument I’m trying to make. But still, my intuition is that grantmaking is underrated relative to other popular talent-starved roles. If you’re a high-context AI safety person deciding between, say, working as a researcher at a think tank or becoming a grantmaker, I think the grantmaker path deserves more weight than I sense many people give it. This seems especially true if you’re someone with technical AI safety chops who is mostly considering technical research roles.

Grantmaking on current margins looks pretty solid

Like a good grantmaker, you should think on the margin. You might reasonably be wondering something like: “Aren’t the most obvious grants going to get funded either way? Are more grantmakers on the margin really going to make a significant difference to what gets funded?”

I think the answer is pretty clearly yes, for a few reasons.

We’re leaving good grants on the table right now due to a lack of grantmakers. When I was at CG, I regularly saw plausibly-above-the-bar proposals either get rejected outright or sit in the queue longer than they should have, mostly because we didn’t have enough grantmaker capacity to properly evaluate them. CG’s AI governance RFP was recently paused in part because they want to reallocate staff capacity toward more active grantmaking. On the active grantmaking side, there was a regular stream of potentially promising ideas that never got seriously explored because we never had enough staff capacity.

This could get even worse if philanthropic capital grows but grantmaker hiring remains slow. I’m seriously worried that we’re not on track to deploy all of the philanthropic capital that could go toward good AI safety opportunities over the next few years.

More grantmakers would unlock more capital. More grantmaker capacity doesn’t just divide the existing pie into smaller slices; it makes the pie bigger, because funders will be more willing to write cheques if there are more skilled grantmakers who can actually find and vet promising opportunities.

Grantmakers do a lot more than filter through marginal proposals. As I touched on above, there’s a common misconception that the job is just sorting through a pile of applications and deciding which ones to say yes or no to. That’s not true. You can go out and seize the opportunities you wish to see in this world, especially in sub-areas where we are not yet seeing strong diminishing returns. This can be an even bigger deal if you have specific domain expertise that uniquely enables you to do a specific flavour of active grantmaking (e.g., if you’re someone with an information security background).

Jake Mendel on CG’s technical AI safety team recently wrote about this:

“Some people think that being a grantmaker at Coefficient means sorting through a big pile of grant proposals and deciding which ones to say yes and no to. As a result, they think that the only impact at stake is how good our decisions are about marginal grants, since all the excellent grants are no-brainers.

But grantmakers don’t just evaluate proposals; we elicit them. I spend the majority of my time trying to figure out how to get better proposals into our pipeline: writing RFPs that describe the research projects we want to fund, or pitching promising researchers on AI safety research agendas, or steering applicants to better-targeted or more ambitious proposals.”

I’d also push back on the idea that the “obviously above the bar” grants are actually obvious. They might be obvious5 to a full-time grantmaker who has spent months embedded in a particular sub-area, but not at all obvious to the people who approve grants — say, the CEO of a grantmaking organization who has to juggle many different responsibilities. A big part of your job as a grantmaker is to internally translate and advocate for the good stuff to people who don’t have the time or context to investigate it themselves.

I could one day imagine a world where money or ideas are the bottleneck, but we are currently far from that world.

Grantmaking vs direct work

To be clear, I’m not saying everyone should drop what they’re doing and try to become grantmakers. Direct work is great! The majority of people in the AI safety ecosystem should absolutely be doing things like research, advocacy, communications, policy, or founding organizations rather than trying to become grantmakers.6

The claim I can more confidently stand by is that grantmaking currently seems quite underrated by high-context AI safety people. After running hiring rounds, pitching a ton of people on applying, and watching folks’ career moves play out, my sense is that there’s a meaningful gap between how excited people are about grantmaking and how excited I think they should be. I suspect this is partly due to misconceptions about the role (hopefully addressed above) and also that grantmaking is just kind of an opaque career path.

As an exercise, try BOTECing what you could make happen with $10 to $30 million7 in grantmaking funds and a year to brainstorm new project ideas, vet potential founders, and launch new RFPs. Even if you include some counterfactuality haircuts, that’s enough to fund a large number of people to go work on problems you think are important. Then compare that to what you’d counterfactually produce as a single researcher or policy professional over the same period. I’m not saying the answer is always obvious, or that this is a bulletproof argument in favour of grantmaking, but I think it’s worth trying to be concrete about it.
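If it helps to make that exercise concrete, here’s a minimal BOTEC sketch in Python. Every number in it is a placeholder I’ve made up for illustration — the budget, counterfactual share, cost per funded person-year, and quality discount are all assumptions, not figures from this post — so swap in your own.

```python
# Illustrative BOTEC only: every number below is a made-up assumption, not a figure from this post.

annual_budget = 20_000_000     # assume ~$20M/year of grantmaking capacity (midpoint of the $10-30M range)
counterfactual_share = 0.5     # assume only half the funded work is truly counterfactual
cost_per_fte_year = 200_000    # assumed fully loaded cost of one funded person-year
quality_discount = 0.7         # assume funded work is ~70% as well-targeted as your own direct work

funded_person_years = annual_budget * counterfactual_share / cost_per_fte_year
quality_adjusted = funded_person_years * quality_discount

print(f"Counterfactual person-years funded per year: {funded_person_years:.0f}")
print(f"Quality-adjusted person-years per year:      {quality_adjusted:.0f}")
# Compare against the ~1 person-year you'd contribute yourself as a researcher or policy professional.
```

With these placeholder numbers you get a few dozen quality-adjusted person-years per year of grantmaking, versus the roughly one you’d contribute directly — which is exactly the comparison the exercise is meant to make explicit.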

Call to action

Start by thinking about whether you’d be a fit for a grantmaking role.

You might be a good fit for a grantmaker role if:

That said, I want to be clear that grantmakers come from all kinds of backgrounds. I wouldn’t over-index on whether you check every box above. If what I’ve described in this post sounds interesting, talk to some grantmakers, and seriously consider just applying. You’ll learn a lot about the role from the process itself even if it doesn’t work out. I did a lot of hiring at CG, and while these rounds are very competitive, I would’ve loved to see even more high-context AI safety people apply.

If you want to pursue this, note that there are several organizations that are worth keeping on your radar (or maybe even proactively reaching out to). These include (but are not limited to):

There are also opportunities to do part-time grantmaking work at places like the Survival and Flourishing Fund. You could also do independent grantmaking or set up your own new thing, which seems like a great option if you’re particularly entrepreneurial and if you can secure funding for it.

Acknowledgements: Thank you to Catherine Brewer, Michael Townsend, and Trevor Levin for their helpful comments. All views expressed here are my own and do not necessarily reflect any other organizations or individuals I’m affiliated with.

Appendix - Things that aren’t great about grantmaking

In the interest of not writing a pure sales pitch, here are some things I think are genuine downsides of being a grantmaker.

You might not go as deep on the object level as you’d like. I’d guess there’s a fairly strong correlation between people who are bought into AI safety and people who intrinsically love forming deep, rich inside views on specific questions. Grantmaking isn’t really set up for that. As I described above, you’ll spend some of your time developing views, and you might have one or two focus areas you know particularly well. But generally speaking, your mandate will be pretty broad and you’ll have to defer a decent amount. If what you really want is to spend six months going deep on a single research question, grantmaking is probably not the right fit for you.

The work is somewhat invisible. If you make a great grant, your broader network of peers will not obviously know about it. There’s no public artifact to point to. Research, for instance, has a built-in status mechanism — you produce something legible that people can evaluate and credit you for. Grantmaking doesn’t really have that. Of course, you do get some status from people correctly perceiving that grantmakers are important tastemakers in the ecosystem, but the actual work is largely behind the scenes.

People will interact with you differently because you can direct money. You have to be somewhat wary of people trying to bamboozle you. In practice, this was far less of an issue than I expected going in, as the vast majority of people I interacted with were relatively honest and well-intentioned. But there are grifters out there, and developing a nose for this is part of the job.

Active grantmaking can be really tricky. The most entrepreneurial parts of grantmaking are often very poorly scoped. If you aren’t an intense self-starter, it can be easy to spin your wheels in the mud. This can be some of the most rewarding work that a grantmaker does, but also some of the hardest.

You sometimes can’t fund things you think are good. Depending on where you work, there may be constraints on what you can fund. Let me stress an obvious point: it is incredibly important as a grantmaker to be a faithful and responsible steward of your funders’ capital. And sometimes they’ll have firm preferences against funding things you’d otherwise want to support, or there might be other organizational constraints that get in your way. That’s just the way it is.8

Saying no is hard. It kinda sucks to say no to someone who is really passionate about their idea, but that’s part of the job. This is especially true when the main reason you’re saying no is because of bandwidth constraints rather than their proposal being below your bar.

Communicating can be quite effortful. You have to be pretty careful about how you communicate certain things to people due to power dynamics, which requires extra mental bandwidth. A poorly worded email from a grantmaker can carry more weight than you intend.

1 As you might imagine, the biggest lever available is often philanthropic capital. But sometimes it can be your network or your particular object-level knowledge.

2 Of course, other grantmakers might have vastly different experiences with this.

3 I'm drawing mostly on CG examples here because that's what I know best, not because CG is the only funder with wins like these. My sense is that other grantmaking orgs have similar stories to tell.

4 This was just a 20 minute low-confidence estimate I put together as of March 2026. If you expand the criteria to include program leadership, advisory roles, and people in grantmaking-adjacent positions, you get to maybe 70-90.

5 Even then, I think people overestimate how obvious these are!

6 One example of something that I think is probably even more neglected than grantmaking is founding and scaling highly ambitious organizations. But even that’s not clear-cut. There are some founders who wouldn’t be good grantmakers, sure, but if you’re someone who could either start a new org or join a grantmaking org like CG at a senior level, it might be kind of a close call.

7 This depends on factors such as what organization you work at, what area(s) you focus on, and your level of seniority.

8 In practice, I didn’t feel like this was a huge issue for me during my time at CG. I know others who have had much bigger issues with this though, so YMMV.


Austin @ 2026-03-31T16:55 (+10)

There are maybe 30 to 60 people in the world doing AI safety grantmaking, collectively directing hundreds of millions of dollars a year. Soon, more than $1B will be directed per year, and potentially multiple billions.

 

I like this framing for the botecs it encourages!

Currently it seems like each grantmaker is (on average) responsible for ~$10m/y. One question I think about sometimes: how will # of grantmakers scale as more $ go towards AI safety funding? If funding is eg 3x'ing year-over-year, it's unclear whether we're currently training up that number of grantmakers.

Another question might be: what is a good ratio of # of grantmakers to # of direct workers? I'd ballpark there to be ~1000 fulltime AIS direct workers; does a 20:1 ratio seem high, low, or just right?

I'd be curious to look at comparisons for scaled funding ecosystems for a reference class; I'm primarily thinking VCs & angels, but perhaps others eg academic funding are also appropriate.

JulianHazell @ 2026-04-01T18:59 (+6)

Currently it seems like each grantmaker is (on average) responsible for ~$10m/y. One question I think about sometimes: how will # of grantmakers scale as more $ go towards AI safety funding? If funding is eg 3x'ing year-over-year, it's unclear whether we're currently training up that number of grantmakers.

My vibes-based sense is that at least currently, the amount of philanthropic capital that could go towards AI safety projects is growing quite a bit faster than the number of grantmakers. I'm pretty worried about this.

Taking a look at CG:

  • Per this, the number of GCR program staff at CG only grew 2x from 2019 to 2022.
  • Quickly looking at archives of the team list from EOTY 2022 and EOTY 2025, it looks like the growth rate of program staff over that period was roughly 2 to 2.5x.

Another question might be: what is a good ratio of # of grantmakers to # of direct work? I'd ballpark there to be ~1000 fulltime AIS direct workers; does a 20:1 ratio seem high, low, or just right?

I think the ratio framing is a bit tricky and depends on a lot of other variables (for instance, how mature the field is, how many promising ideas are floating around the memesphere, how good AIs are at doing direct work, how much philanthropic capital there is, etc). The other thing is that the number of direct workers is itself downstream of grantmaker capacity.

Tony Senanayake @ 2026-03-31T02:01 (+7)

Thanks for sharing this post. I appreciated the honest behind-the-scenes look at what is involved in being a grant maker. 

As someone not working in the AI safety space, I'm intrigued by your opinions as to what extent grant making within AI safety is similar to and different from grant making within other cause priority areas, for example animal advocacy and global development and health?

My sense from reading the post is that those areas may be relatively less neglected, with fewer opportunities for identifying opportunities with outsized impact returns on investment. Do you think that is a reasonable assumption to be making? 

JulianHazell @ 2026-04-01T19:11 (+4)

Thanks for the comment!

As someone not working in the AI safety space, I'm intrigued by your opinions as to what extent grant making within AI safety is similar to and different from grant making within other cause priority areas, for example animal advocacy and global development and health?

It's hard for me to say about what these differences look like outside of CG, but one thing that comes to mind is that GHW and animal welfare grantmaking is more based on quantitative modelling and BOTECs (though sometimes we use BOTECs on the GCR side of things).

My sense from reading the post is that those areas may be relatively less neglected, with fewer opportunities for identifying opportunities with outsized impact returns on investment. Do you think that is a reasonable assumption to be making? 

It depends on how you define "neglected". Like, in terms of EA focus and talent, they're probably more neglected than AI safety/catastrophic risks. In terms of total $ spent by society at large, GHW is far less neglected than AI safety, which is in turn far less neglected than FAW.

This is kind of a lame answer, but whether AI safety has more or fewer outsized ROI opportunities really depends on your worldview. IMO, both spaces have a ton of opportunity. If I woke up tomorrow and decided that AI safety was no longer important (or I didn't buy that worldview anymore), I'd be extremely excited about the vast number of opportunities to make global health and farmed animal welfare better.

Neel Nanda @ 2026-04-01T02:35 (+4)

If you’re a high-context AI safety person deciding between, say, working as a researcher at a think tank or becoming a grantmaker, I think the grantmaker path deserves more weight than I sense many people give it. This seems especially true if you’re someone with technical AI safety chops who is mostly considering technical research roles.

Seems true to me. I'd love to see more talented grant makers out there!