AI safety remains underfunded by more than 3 OOMs

By Impatient_Longtermist 🔸🌱 @ 2025-10-06T19:53 (+24)

This is a linkpost to https://www.nber.org/papers/w33602

This is a link post for Charles I. Jones's paper: How Much Should We Spend to Reduce A.I.'s Existential Risk?

Two caveats: I am not Professor Jones, and I take no credit for the linked work. Also, this post was written with the help of AI.


Summary of the paper

Stanford economist Charles I. Jones uses standard cost-benefit analysis to estimate how much the US should spend on AI safety. His conclusion: between 1% and 8% of GDP annually, or roughly $290 billion to $1.5 trillion per year for the US alone.

The core logic is straightforward: US policymakers value a statistical life at around $10 million, so avoiding a 1% mortality risk implies a willingness to pay of $100,000 per person. If AI poses a similar or greater existential risk over the next decade (as many AI researchers believe it does), comparable levels of investment are justified.
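For concreteness, here is that arithmetic as a minimal sketch (my own illustration, not the paper's model; the $10 million VSL and 1% risk come from the summary above, while the ~330 million US population figure is my assumption):

```python
# Back-of-the-envelope VSL arithmetic (illustrative; not the paper's model).

VSL = 10_000_000             # value of a statistical life used by US policymakers, $
RISK_REDUCTION = 0.01        # avoided mortality risk (1%)
US_POPULATION = 330_000_000  # rough US population (my assumption)

# Willingness to pay per person to avoid a 1% mortality risk.
wtp_per_person = VSL * RISK_REDUCTION
print(f"WTP per person: ${wtp_per_person:,.0f}")  # WTP per person: $100,000

# Aggregated across the US population (a crude upper bound, ignoring discounting).
aggregate_wtp = wtp_per_person * US_POPULATION
print(f"Aggregate WTP: ${aggregate_wtp / 1e12:.1f} trillion")  # Aggregate WTP: $33.0 trillion
```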


Critically, these numbers don’t require any concern for future generations. Jones explicitly models a “selfish” scenario that only values currently living people, and still finds massive spending justified. 

Why I think this paper matters  

I think this paper gives a sense of just how underfunded AI safety remains, despite fairly rapid growth in funding over the past decade.

Global AI safety spending in 2024 was estimated at just over $100 million. Jones's analysis suggests the US alone should be spending roughly 3,000 to 15,000 times more on AI safety, even without taking non-US or future lives into account.
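As a rough sanity check on that multiplier, using the figures quoted above (my arithmetic, not Jones's):

```python
# Funding-gap multiplier implied by the figures quoted above.

current_spending = 100e6                       # estimated global AI safety spending in 2024, $
suggested_low, suggested_high = 290e9, 1.5e12  # Jones's suggested US range, $/year

print(f"Gap: {suggested_low / current_spending:,.0f}x to "
      f"{suggested_high / current_spending:,.0f}x")  # Gap: 2,900x to 15,000x
```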


I know that the idea that existential risk reduction is underfunded is unlikely to surprise many EAF readers. However, I think this paper is worth highlighting: mainstream economics is a powerful means of both elucidation and legitimation. As J.M. Keynes said: "Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist."

Even someone as ‘galaxy-brained’ and ‘AGI-pilled’ as Tyler Cowen once reportedly said he “would start listening when the AI risk people published in a top economics journal showing the risk is real”. A cost-benefit analysis from a prominent economist, circulated as an NBER working paper, is at least a step in that direction.


Midtermist12 @ 2025-10-06T21:03 (+4)

Thanks for sharing this. While I think there are strong reasons to invest heavily in AI safety, I'm concerned this particular cost-benefit framing may not be as compelling as it initially appears.

The paper uses a $10 million value of statistical life (VSL) to justify spending $100,000 per person to avoid a 1% mortality risk. However, if we're being consistent with cost-effectiveness reasoning, we should note that GiveWell-recommended charities save lives in the developing world for approximately $5,000 each—roughly 2,000 times cheaper per life saved.

By this logic, the same funding directed toward global health interventions would save orders of magnitude more lives with near-certainty, compared to reducing AI x-risk with uncertain probability.
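To make the naive version of this comparison concrete (illustrative arithmetic only; the $5,000 figure is approximate, and this ignores diminishing returns in global health funding):

```python
# Naive arithmetic behind the cost-effectiveness comparison (illustrative only).

VSL = 10_000_000       # value per statistical life used in the paper, $
COST_PER_LIFE = 5_000  # approximate GiveWell cost per life saved, $ (assumption)

print(f"Ratio: {VSL / COST_PER_LIFE:,.0f}x")              # Ratio: 2,000x
print(f"Lives per $100k: {100_000 / COST_PER_LIFE:.0f}")  # Lives per $100k: 20
```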

This doesn't mean AI safety is a bad investment; there are other strong arguments for it. My concern is only that the VSL-based framing, taken on its own, may not settle the prioritisation question.

(Note: comment generated in collaboration with AI)

Impatient_Longtermist 🔸🌱 @ 2025-10-06T21:38 (+5)

I completely agree with your comment. However, my interpretation of what Professor Jones is trying to do is slightly different from straightforward cause prioritisation in the EA sense.

I think he is trying to frame AI risk reduction in a way that is compelling to policymakers, by focusing on standard benchmark values (the Value of a Statistical Life) and limiting his analysis in space (valuing only the lives of American citizens) and time (only the next 20 years). This puts the report in line with standard government cost-benefit analyses, which may make it more convincing to those with access to policy levers.