My AI Vibes are Shifting

By Nathan Young @ 2025-09-05T14:45 (+19)

This is a linkpost to https://nathanpmyoung.substack.com/p/my-vibes-are-shifting-on-ai-risk

I think vibes-wise I am a bit less worried about AI than I was a couple of years ago. Perhaps (vibes-wise) from a P(doom) of 5% to more like 1%.[1]

Happy to discuss in the comments. I may be very wrong. I wrote this up in about 30 minutes.

Note I still think that AI is probably a very serious issue, but one to focus on and understand rather than one where we should necessarily push for a slowdown in the next 2 years. I find this very hard to predict, so I am not making strong claims.

My current model has two kinds of AI risk:

Perhaps civilisations almost always end up on paths they strongly don't endorse due to AI; or perhaps AI risk is vastly overrated. Both of these are considerations in the first bucket. Yudkowskian arguments feel more at home over here.

Perhaps we are making the situation much worse (or better) by our actions in the last 5 and next 3 years. That is the second bucket. It seems much less important than the first, unless the first is around 50/50.

Civilisational AI risk considerations and their direction (in some rough order of importance):


More local considerations and their direction (in some rough order of importance):

What do you think I am wrong about here? What considerations am I missing? What should I focus more attention on?


  1. ^


    I guess I am building up to some kind of more robust calculation, but this is kind of the information/provocation phase.

  2. ^

You might argue that China seems not to want to race or put AI in charge of key processes, and I'd agree. But given we would have had the West racing regardless, this seems to make things less bad than they could have been, rather than better.

  3. ^

Did FTX try? Like, what would the Bahamas have looked like after 10 years in the FTX-success world?

  4. ^

I may be double counting here, but there seems to be something different about general geopolitical instability and, specifically, how the US and China might react.


MichaelDickens @ 2025-09-05T19:12 (+10)

From reading your lists of changing risks, it's not clear to me what the takeaway should be, or why aggregating all these considerations moves P(doom) down from 5% to 1%. I would like to hear more about that.
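One way to make that aggregation explicit is to treat each consideration as a shift in log-odds and sum the shifts. The sketch below is a minimal illustration of that bookkeeping; the consideration names and weights are hypothetical placeholders, not Nathan's actual reasoning.

```python
import math

def p_to_logodds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def logodds_to_p(lo):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-lo))

prior = 0.05  # starting P(doom)

# Hypothetical per-consideration updates, in log-odds units.
# Negative values push P(doom) down, positive values push it up.
updates = {
    "slower takeoff than expected": -0.8,
    "labs more safety-conscious than feared": -0.5,
    "geopolitical instability": +0.3,
}

posterior = logodds_to_p(p_to_logodds(prior) + sum(updates.values()))
print(f"P(doom): {prior:.0%} -> {posterior:.1%}")
```

With these placeholder weights the posterior lands around 2%; the point is only that listing considerations with signed weights makes the 5% to 1% move auditable.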

Jonas Hallgren 🔸 @ 2025-09-07T15:40 (+1)

Uncertain risk. AI infrastructure seems really expensive. I need to actually do the math here (and I haven’t! hence this is uncertain) but do we really expect growth on trend given the cost of this buildout in both chips and energy? Can someone really careful please look at this?

 

https://www.lesswrong.com/users/vladimir_nesov <- has a bunch of posts on the energy and compute calculations required for AI companies, especially the 2028 post; some very good analysis of these things imo.
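As a skeleton for "actually doing the math", one could structure the buildout question like this. Every number below is an illustrative placeholder to show the shape of the calculation, not a sourced figure; the real work is filling in defensible inputs.

```python
# Back-of-envelope: annual cost of the AI buildout.
# All inputs are illustrative assumptions, not researched figures.
capex_usd_per_year = 300e9   # assumed annual chip + datacenter spend
new_power_gw = 10            # assumed new power capacity needed, in GW
usd_per_mwh = 80             # assumed wholesale electricity price
hours_per_year = 24 * 365

# Annual energy cost: GW -> MW, run at full load all year.
energy_cost = new_power_gw * 1000 * hours_per_year * usd_per_mwh

# Amortise capex over an assumed 5-year hardware lifetime.
total_cost = capex_usd_per_year / 5 + energy_cost

print(f"Energy cost: ${energy_cost / 1e9:.1f}B/yr")
print(f"Total cost:  ${total_cost / 1e9:.1f}B/yr (5-yr capex amortisation)")
```

Under these placeholders, energy is a small fraction of the total and capex dominates; comparing the total against projected AI revenue would then answer whether on-trend growth can pay for the buildout.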