In AI Governance, let the Non-EA World Train You First

By Camille @ 2025-07-23T17:46 (+9)

This is somewhat quickly written and is meant to contribute to the career discussion week. This criticism probably doesn't apply to all "governance tracks" advertised in EA, but coming as I do from a "typically EA" environment, it struck me as fair and correct. I also think that communicating criticism from outside the community has intrinsic value.

I recently had a discussion with someone interested in AI Governance who has a background in political science and is not part of the EA community. They made several observations that I think are broadly correct. I'd reconstruct them as follows:

1-The entrepreneurial mindset is considerably too strong among EAs interested in AI governance. Too many promising profiles are encouraged to create or join a new organisation, or to do "grassroots" activity, most often supported by grants. The theory and knowledge provided is too EA-specific.

This is not a robust pathway for this kind of work; aiming for a stable job in a non-EA organisation is considerably more promising for upskilling. Looking at success stories in AI Governance, most people worked at least a year outside of EA orgs. The most interesting remarks often come from people who know the reality of mundane governance.

Personal remark: I'll add that my skill-building seems to me proportional to the non-EA experience of my coworkers. All the best insights I have come from 'boring' industry practices. Once we have reaped all the low-hanging fruit in EA thinking, it's important to look outside for better advice.

2-"Normal institutions" should receive more advertising. People learn a great deal about the Center for AI Safety, GovAI, ControlAI, PauseAI, and many other EA-related orgs, but barely hear about the OECD or (in the case of Europe) the wider EU ecosystem. This may be because too many EAs bet on short timelines. In contrast, one could join the short-term mundane risk-mitigation sector in order to gain experience, organically raise awareness of and legitimize existential safety, or hedge against long timelines.

But even in the case of short timelines, having allies in peripheral sectors may prove resilient to unexpected scenarios. AI Safety requires strategic thinking: when playing Diplomacy, you don't bet everything on one outcome. Current fieldbuilding bets too much on the grassroots-to-global-moratorium theory of change. "Alternative approaches" (such as AI Governance through markets, 'New Frontier' approaches, or securitization) look more promising, but are neglected.

This post is a result of a conversation held with Federico Cafarella during ML4Good Governance.