"Long" timelines to advanced AI have gotten crazy short
By Matrice Jacobine @ 2025-04-03T22:46 (+16)
This is a linkpost to https://helentoner.substack.com/p/long-timelines-to-advanced-ai-have
First post on the new Substack of @Helen Toner (of OpenAI board crisis fame)
It used to be a bold claim, requiring strong evidence, to argue that we might see anything like human-level AI any time in the first half of the 21st century. This 2016 post, for instance, spends 8,500 words justifying the claim that there is a greater than 10% chance of advanced AI being developed by 2036.
(Arguments about timelines typically refer to “timelines to AGI,” but throughout this post I’ll mostly refer to “advanced AI” or “human-level AI” rather than “AGI.” In my view, “AGI” as a term of art tends to confuse more than it clarifies, since different experts use it in such different ways.1 So the fact that “human-level AI” sounds vaguer than “AGI” is a feature, not a bug—it naturally invites reactions of “human-level at what?” and “how are we measuring that?” and “is this even a meaningful bar?” and so on, which I think are totally appropriate questions as long as they’re not used to deny the overall trend towards smarter and more capable systems.)
Back in the dark days before ChatGPT, proponents of “short timelines” argued there was a real chance that extremely advanced AI systems would be developed within our lifetimes—perhaps as soon as within 10 or 20 years. If so, the argument continued, then we should obviously start preparing—investing in AI safety research, building international consensus around what kinds of AI systems are too dangerous to build or deploy, beefing up the security of companies developing the most advanced systems so adversaries couldn’t steal them, and so on. These preparations could take years or decades, the argument went, so we should get to work right away.
Opponents with “long timelines” would counter that, in fact, there was no evidence that AI was going to get very advanced any time soon (say, any time in the next 30 years).2 We should thus ignore any concerns associated with advanced AI and focus instead on the here-and-now problems associated with much less sophisticated systems, such as bias, surveillance, and poor labor conditions. Depending on the disposition of the speaker, problems from AGI might be banished forever as “science fiction” or simply relegated to the later bucket.
Whoever you think was right, for the purposes of this post I want to point out that this debate made sense. “This enormously consequential technology might be built within a couple of decades, we’d better prepare,” vs. “No it won’t, so that would be a waste of time” is a perfectly sensible set of opposing positions.
titotal @ 2025-04-04T08:08 (+7)
I feel like this should be caveated with a "long timelines have gotten short... among the people the author knows in tech circles".
I mean, just two months ago someone asked a room full of cutting-edge computational physicists whether their jobs could be replaced by AI soon, and the response was audible laughter and a reply of "not in our lifetimes".
On one side you could say that this discrepancy is because the computational physicists aren't as familiar with state-of-the-art genAI. But on the flip side, you could point out that tech circles aren't familiar with state-of-the-art physics, and are seriously underestimating the scale of the task ahead of them.