Against AI As An Existential Risk

By Daniel Birnbaum @ 2024-07-30T19:24 (+6)

This is a linkpost to https://irrationalitycommunity.substack.com/p/against-ai-as-an-extinction-threat

I wrote a post on my Substack attempting to compile the best arguments against AI as an existential threat. 

Some arguments that I discuss include: international game theory dynamics, reference class problems, Knightian uncertainty, disagreement between superforecasters and domain experts, the issue with long-winded arguments, and more! 

Please tell me why I'm wrong, and if you like the article, subscribe and share it with friends! 


Caleb_Maresca @ 2024-07-30T20:54 (+2)

Why would Knightian uncertainty be an argument against AI as an existential risk? If anything, our deep uncertainty about the possible outcomes of AI should lead us to be even more careful.

harfe @ 2024-07-30T20:03 (+2)

The section "International Game Theory" does not seem to me like an argument against AI as an existential risk.

If the USA and China decide to have a non-cooperative AI race, my sense is that this would increase existential risk rather than reduce it.

Daniel Birnbaum @ 2024-07-30T20:40 (+1)

Yep, I think this is true. The point is that, given that AI stays aligned (as stated there), the best thing for a country to do would be to accelerate capabilities. You're right, however, that it's not an argument against AI being an existential threat (I'll make a note to make this clearer); it's more a point for acceleration.
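
To make the race dynamic concrete, here is a minimal sketch with hypothetical payoffs (the numbers and strategy labels are illustrative assumptions, not taken from the article): if alignment is assumed to hold, accelerating is a best response to whatever the other country does, which is the sense in which this is a point for acceleration rather than an argument against existential risk.

```python
# Hypothetical 2x2 payoff matrix for a two-country capabilities race,
# under the assumption that AI stays aligned. Payoffs are illustrative:
# (row player's payoff, column player's payoff).

payoffs = {
    ("accelerate", "accelerate"): (2, 2),   # both race: moderate gains for each
    ("accelerate", "restrain"):   (4, 1),   # racer pulls ahead of restrainer
    ("restrain",   "accelerate"): (1, 4),
    ("restrain",   "restrain"):   (3, 3),   # mutual restraint: good, but unstable
}

strategies = ["accelerate", "restrain"]

def best_response(opponent_move: str) -> str:
    """Row player's best reply to a fixed opponent move."""
    return max(strategies, key=lambda s: payoffs[(s, opponent_move)][0])

# "accelerate" is the best response to every opponent move,
# i.e. a dominant strategy under these assumed payoffs.
for move in strategies:
    print(f"vs {move}: best response = {best_response(move)}")
```

Under these assumed payoffs the game has a prisoner's-dilemma structure: mutual restraint (3, 3) beats mutual racing (2, 2), yet each side still unilaterally prefers to accelerate. That also illustrates harfe's point: the equilibrium the race settles into can be worse for everyone than the cooperative outcome.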