The Torrent Is Coming. Here Is Why Most of It Will Flow the Wrong Way.

By Rubies93 @ 2026-04-23T15:35

There is a particular irony in watching a field dedicated to reducing existential risk quietly reproduce the same patterns of exclusion it claims to be solving.

AI safety and governance funding is about to grow dramatically. Between new philanthropic vehicles, the Anthropic and OpenAI tender processes and shortening timelines to transformative AI, more money is entering this space than ever before. That is genuinely good news. But if we are not intentional about how it moves, this flood of funding risks making existing blind spots bigger rather than fixing them.

The question is not whether the money will arrive. It will. The question is whether the right systems exist to direct it well — and right now, they are not yet ready for what is coming.


The default will be to fund the already funded

When money moves fast, it follows familiar paths. It flows toward organisations with established reputations, researchers with institutional homes and people who already have access to the right rooms. This is not deliberate exclusion. It is simply how capital behaves when there are no intentional systems in place to direct it otherwise.

The challenge is that those familiar paths in AI governance currently run through a relatively small number of institutions concentrated in a handful of cities. Meanwhile the technology being governed will touch billions of people living far outside those cities — in Lagos, Nairobi, Jakarta, São Paulo and countless other places where AI systems are already reshaping daily life, often without adequate frameworks to manage the consequences.

There is a growing group of practitioners in these regions who are building AI governance capacity without institutional backing, without salaries and without access to the networks that open funding doors. They are drafting governance frameworks, running AI literacy programmes, contributing to global policy conversations and publishing work that genuinely adds to the field.

They are doing the work. They are just not getting funded for it.


Why current funding systems miss them

EA and longtermist funding was built around a specific type of applicant. Someone with a publication record. Someone attached to a recognised institution. Someone whose credibility is already visible within the existing ecosystem.

This design is not intentionally exclusive — but it consistently advantages people who have had access to the institutions that produce those credentials. The result is a funding landscape where the real question being asked is not always "is this person doing valuable work?" but rather "can we verify this person through channels we already trust?" These are not the same question. Treating them as if they were is how capable, committed practitioners get filtered out — not because their contributions lack value but because that value has not yet appeared in the formats the system is built to recognise.

Consider the scale of what is coming. There are currently somewhere between 30 and 60 people globally doing serious AI safety grant evaluation — directing what will soon become tens of billions of dollars. That is not just a capacity problem. It is a pipeline problem. And pipelines, by definition, only draw from where they are pointed. If the people evaluating grants all come from the same institutions, share the same networks and recognise talent through the same filters, more money will not produce more diversity of thought. It will produce more of the same, faster.

The coming funding torrent will not fix this on its own. Without deliberate redesign it will simply amplify the existing pattern at greater scale.


What better systems would look like

This moment is a genuine opportunity to rebuild rather than simply expand. Here is what more effective infrastructure could look like in practice.

First — evaluate the work, not just the credentials. A governance document drafted for an ethics organisation, an essay published in a public forum, a webinar series reaching practitioners in underserved regions — these are real, assessable outputs. The infrastructure to evaluate them directly just needs to exist and be consistently used, rather than defaulting to institutional proxies as a shortcut for quality.

Second — treat geographic diversity as strategy, not goodwill. AI systems will operate globally. The frameworks that govern them will be stronger — not just more equitable but genuinely more robust — if they are shaped by people who understand different regulatory environments, cultural contexts and real-world failure modes. A governance framework tested only against the assumptions of one region is a framework with blind spots. Funding diversity is not an act of charity. It is just good thinking.

Third — build bridges, not gatekeepers. The current system asks emerging practitioners to meet funders where they are — to learn the language, adopt the formats and navigate processes that established institutions are comfortable with. A smarter system inverts this. It actively sends people into underrepresented regions and communities to identify talent, understand the work being done and bring it into view. Scouting is how sports, venture capital and the arts find exceptional people outside traditional pipelines. There is no reason AI governance funding cannot do the same.


The specific risk of a funding flood

Large, fast-moving capital has a known failure mode. It generates activity that looks like progress without necessarily producing it. When money becomes abundant, the incentive quietly shifts — from doing the most valuable work to being visible to the people writing the cheques. Organisations optimise for fundability. Individuals optimise for recognition in the right channels. The field fills with well-resourced projects that reinforce what already exists rather than building what is genuinely missing.

The solution is not less funding. It is smarter infrastructure for directing it — systems that can find and evaluate talent wherever it exists, move quickly enough to support emerging practitioners before they burn out or disengage, and actively resist the gravitational pull toward the already-established.

The people who will help solve the governance challenges of advanced AI are not all in the places we expect to find them. Some are already doing the work — quietly, without funding, without institutional cover — simply because they believe it matters.

The torrent is coming. The question is whether it will reach them.

This essay was developed with AI assistance (Claude, Anthropic).