What Happens If We Have Another AI Winter?
By Ben Norman @ 2025-06-27T14:11 (+5)
This is a linkpost to https://futuresonder.substack.com/p/what-happens-if-we-have-another-ai
The next ~5 years of AI development are indescribably crucial. They may mark the beginning of the most important moment in human history (or even the history of our galaxy), or we may start to see signs of AI progress slowing down.
We're approaching fundamental physical and economic constraints that will determine whether AI progress continues to accelerate or starts to decelerate. According to Epoch's analysis, training runs of around 2e29 FLOP (roughly 10,000x larger than GPT-4) are technically feasible by 2030, but beyond that, the constraints become severe. Power requirements would jump from manageable gigawatt-scale data centres to needing 40%+ of US electricity. Chip production would need to expand beyond TSMC's realistic capacity. The investment required would leap to hundreds of billions of dollars per training run.
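To make the scale concrete, here is a rough back-of-the-envelope check. GPT-4's training compute is not public; the ~2e25 FLOP figure below is a commonly cited external estimate (an assumption on my part), but it is what makes the "roughly 10,000x" ratio work out:

```python
# Rough scale check for the figures quoted from Epoch's analysis.
# GPT-4's training compute is not public; ~2e25 FLOP is an assumed estimate.

gpt4_flop_estimate = 2e25      # assumed GPT-4 training compute
feasible_2030_flop = 2e29      # Epoch's ~2030 technical-feasibility figure

ratio = feasible_2030_flop / gpt4_flop_estimate
print(f"2e29 FLOP is roughly {ratio:,.0f}x the assumed GPT-4 run")  # ~10,000x
```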
What happens if we hit these limits before building AGI-level systems?
In a recent debate, Daniel Kokotajlo said:
“They’ve been scaling up the amount of money they’re spending on AI research — and on training runs in particular — over the last decades. But it’ll be hard for them to continue scaling at the same pace. […]
The biggest training run in 2020 was, what, like $3 million? Something like that — $5 million? So they’ve gone up by like two and a half orders of magnitude in five years.
If it’s another two and a half orders of magnitude, we’re doing a $500 billion training run in 2030. There’s just not enough money in the world — like, the tech companies just won’t be able to afford it. […]
So that’s why we predict that if you don’t get to some sort of radical transformation — if you don’t get to some sort of crazy AI-powered automation of the economy by the end of this decade — then there’s going to be a bit of an AI winter.
There’s going to be, at the very least, a sort of tapering off of the pace of progress. And then that stretches out a lot of probability mass into the future, because that’s a very different world. It could take quite a long time to get to AGI once you’re in that regime.” (emphasis mine)
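As a quick illustration of the arithmetic Kokotajlo is gesturing at, here is a minimal sketch using his own round numbers (the $5 million 2020 figure and ~2.5 orders of magnitude of growth per five years are taken straight from the quote, not from an independent cost model):

```python
# Extrapolating frontier training-run cost at ~2.5 orders of magnitude
# per 5 years, using the round numbers from the quote (assumptions).

cost_2020 = 5e6                  # ~$5M frontier training run in 2020 (quoted)
growth_per_5_years = 10 ** 2.5   # ~316x, i.e. 2.5 orders of magnitude

cost_2025 = cost_2020 * growth_per_5_years   # ~$1.6B
cost_2030 = cost_2025 * growth_per_5_years   # ~$500B

for year, cost in [(2020, cost_2020), (2025, cost_2025), (2030, cost_2030)]:
    print(f"{year}: ~${cost:,.0f} per frontier training run")
```

The extrapolation lands on roughly the $500 billion figure in the quote, which is where the "not enough money in the world" concern bites.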
Progress could also slow down if AI stocks crash. Benjamin Todd has already pointed out that this could have implications for AI safety. He notes that the wealth of many AI safety donors is closely correlated with AI stock prices, which could create a perfect storm where funding shrinks just as public sentiment turns against "alarmist" AI safety advocates:
Second, an AI crash could cause a shift in public sentiment. People who’ve been loudly sounding caution about AI systems could get branded as alarmists, or people who fell for another “bubble”, and look pretty dumb for a while.
Likewise, it would likely become harder to push through policy change for some years as some of the urgency would drop out of the issue.
So, considering all this, we should ask the question: Is the AI safety community adequately preparing for an AI winter?
Are we thinking enough about its strategic implications? What happens if, come 2030, we don't have AGI but instead face a significant slowdown in progress?
First, would an AI winter actually slow down AGI development meaningfully? Remember, companies like OpenAI and DeepMind weren't founded to make chatbots – they were founded to usher in AGI. A funding crunch might slow them down, but it wouldn’t change their ultimate goal. Furthermore, the most important societal actors are already situationally aware. Would an AI winter change this, or would they stick to their convictions? Could an "AI winter" just mean AGI in 2040 instead of 2030?
Second, and perhaps just as concerning: what if an AI winter gives us little usable extra time to work on the technical, societal, and governance grand challenges that come with building machines smarter than us? I’ve heard people say that an AI winter would be good news for AI safety, since it would buy us more time to prepare. But would we be able to use that time wisely if we have reduced credibility and less leverage? The critics who already talk about the “AI doomerism cult” will have a field day: "See? They said AGI by 2030, and here we are with slightly better chatbots. Just another moral panic."
Every time an AI safety researcher's prediction fails to pan out, it may become ammunition for those who want to dismiss the entire field. And with the surge in anti-AI-safety lobbying, you can bet that narrative will be amplified by well-funded interests who benefit from unrestricted AI development.
I am not saying we should change our strategy to assume an AI winter will occur. It still makes sense to prepare for AGI by 2030, and it still seems wise to operate as if timelines are very short, especially over the next ~5 years. You should probably still consider living as if you only have 10 years left.
Soon enough, we will have a vastly better idea of how the future will unfold. Until then, we should prepare for both outcomes. Let’s get to work.