Frozen skills aren't general intelligence

By Yarrow Bouchard 🔸 @ 2025-11-08T23:27 (+10)

Thesis: Artificial general intelligence (AGI) is far away because general intelligence requires the ability to learn quickly and efficiently. General intelligence is not just a large set of skills learned inefficiently. Current AI systems learn incredibly slowly and inefficiently. Scaling them up won’t fix that.

Preamble: AGI is less than 0.1% likely by 2032

My current view is that there is significantly less than a 1 in 1,000 chance of artificial general intelligence (AGI) being developed before the end of 2032. Part of the reason I think AGI within 7 years[1] is so unlikely, and why my confidence is so high, is that the accounts that attempt to show how we get from current AI systems to AGI in such a short time stipulate things that are impossible based on current knowledge of how AI works, or else contradict themselves (which also makes it impossible for them to actually happen). Objections that point out these difficulties have received no good answers. This leads me to conclude that my first impression is correct: these accounts are indeed impossible.

If I stopped to really think about it, my best guess of the probability of AGI by the end of 2032 might be less than even 1 in 10,000.[2] My impulse to make these numbers more cautious and conservative (higher probability, lower confidence) comes only from a desire to herd toward the predictions of other people, but a) this is a bad practice in the first place and b) I find that the epistemic practices of people who believe very near-term AGI is very likely, with high confidence, tend to have alarming problems (e.g. being blithely unaware of opposing viewpoints, even those held by a large majority of experts — I’m not talking about disagreeing with experts, but not even knowing that experts disagree, let alone why), which in other contexts most reasonable people would find disqualifying. That makes me think I should disregard those predictions and think about the prediction I would make if those predictions didn’t exist.

Moreover, if I change the reference class from, say, people in the Effective Altruism Forum filter bubble to, say, AI experts or superforecasters, the median year for AGI gets pushed out past 2045, so my prediction starts to look like a lot less of an outlier. But I don’t want to herd toward those forecasts, either.[3]

Humans learn much faster than AI

DeepMind’s AI agent AlphaStar attained competence at StarCraft II competitive with the game’s top-tier players. This is an impressive achievement, but it required a huge amount of training relative to what a human needs to attain the same level of skill or higher. AlphaStar was unable to learn StarCraft II from scratch via reinforcement learning — the game was too complex — so it first required a large dataset of human play (supplied to DeepMind by the game’s developer, Blizzard) to imitation learn from. After bootstrapping with imitation learning, AlphaStar did 60,000 years of reinforcement learning via self-play to reach Grandmaster level. How does this compare to how fast humans learn?

Most professional StarCraft II players are in their 20s or 30s. The age at which they first achieved professional status will also be less, on average, than their current ages. But to make my point very clear, I’ll just overestimate by a lot and assume that, on average, StarCraft II players reach professional status at age 35. I’ll also dramatically overestimate and say that, from birth until age 35, professional players have spent two-thirds of their time (16 hours a day, on average) playing StarCraft II. This comes out to 23 years of StarCraft II practice to reach professional status. Excluding the imitation learning and just accounting for the self-play, this means humans learn StarCraft II more than 2,500x faster than AlphaStar did.
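As a rough sketch, the arithmetic works out as follows. The 35-year and two-thirds figures are the deliberate overestimates from the paragraph above, not measured values.

```python
# Back-of-envelope comparison of AlphaStar's self-play time with a
# (deliberately overestimated) human practice time, per the text above.

human_age_at_pro = 35          # assumed age at professional status (overestimate)
fraction_practicing = 2 / 3    # assumed fraction of life spent playing (overestimate)
human_practice_years = human_age_at_pro * fraction_practicing  # ~23.3 years

alphastar_selfplay_years = 60_000  # reported years of reinforcement learning via self-play

speedup = alphastar_selfplay_years / human_practice_years

print(f"Human practice: {human_practice_years:.1f} years")
print(f"AlphaStar / human ratio: {speedup:.0f}x")  # more than 2,500x
```

Because both inputs are overestimates of human practice time, the true ratio would only be larger.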

The domain of StarCraft II also helps show why the speed of learning matters for what it means to have a skill. The strategy and tactics of StarCraft II are continually evolving. Unlike testing a skill against a frozen benchmark that never changes, opponents in StarCraft II respond to what you do and adapt.

Anecdotally, some top-tier StarCraft II players have conjectured that the reason AlphaStar’s win rate eventually plateaued at a certain point within the Grandmaster League is that there are few enough Grandmasters (only 200 per geographical region or 1,000 worldwide) that these players were able to face AlphaStar again and again and learn how to beat it.

It’s one thing for the AI to beat a professional player in a best of 5 matchup, as happened on two or three occasions. (Although the details are a bit complicated and only one of these represented fair, typical competitive play conditions.) A best of 5 matchup in one sitting favours the AI. A best of 100 matchup over a month would favour the human. AlphaStar is not continually learning and, even if it were, it learns far too slowly — more than 2,500x more slowly than a human — to keep up. Humans can learn how AlphaStar plays, exploit its weaknesses, and turn around their win rate against AlphaStar. This is a microcosm of how general intelligence works. General intelligence means learning fast.

Scaling can’t compensate for AI’s inefficiency

It is generally not disputed that AI learns far more slowly and inefficiently than humans. No one seems to claim that the speed or efficiency at which AI learns is improving fast enough to make up the gap anytime soon, either. Rather, the whole argument for the high likelihood of near-term AGI relies on that efficiency disadvantage being overcome by exponentially growing amounts of training data and training compute. So what if it takes AI more than 2,500x more data or experience than humans to learn the same skills? We’ll just give the AI that 2,500x more (or whatever it is) and then we’ll be even! But that is not how it works.

First, this is physically impossible. According to calculations by the philosopher (and co-founder of effective altruism) Toby Ord, just scaling the reinforcement learning of large language models (LLMs) by as much again as it has already been scaled would require five times more electricity than the Earth currently generates in a year. It would also require the construction of 1 million data centres. What would you estimate the probability of that happening before the end of 2032 is? More than 1 in 1,000?

But keep in mind this would only achieve a modest performance gain. You might not even notice it as a user of LLMs. This is not about what it takes to get to AGI. This is about what it would take just to continue the scaling trend of reinforcement learning for LLMs. That’s a very low bar.[4]

Second, it’s technologically impossible given the current unsolved problems in fundamental AI research. For instance, AI models can’t learn from video data, at least not in anything like the way LLMs learn from text. This is an open research problem that has received considerable attention for many years. Does AGI need to be able to see? My answer is yes. Well, we currently don’t have AI models that can learn how to see to even the level of competence LLMs have with text (which is far below human-level) and it’s not for a lack of compute or data, so scaling isn’t a solution.

Sure, eventually this will be solved, but everything will be solved eventually, barring catastrophe. If you’re willing to hand-wave away sub-problems required to build AGI, you might as well hand-wave all the way, and just assume the overall problem of how to build AGI will be solved whenever you like. What year sounds interesting? Say, 2029? It has the weight of tradition behind it, so that’s a plus.[5]

Third, it’s practically impossible given the datasets we currently lack. Humans have a large number of heterogeneous skills. The amount of text available on the Internet is an unusual exception when it comes to the availability of data that can be imitation learned from, not the norm.

For instance, there are almost no recordings or transcripts (which, it should be noted, lose important information) of psychotherapy sessions, primarily due to privacy concerns. Clinical psychology professors face difficulty teaching their students how to practice psychotherapy because of the ethical concerns of showing a recording of some real person’s real therapy session to a classroom of students. If this scarcity of data poses challenges even for humans, how could AI systems that require three or more orders of magnitude more data than humans to learn the same thing (or less) ever learn enough to become competent therapists?

That’s just one example. What about high-stakes negotiations or dealmaking that happens behind closed doors, in the context of business or government? What about a factory worker using some obscure tool or piece of equipment about which the number of YouTube videos is either zero or very few? (Not that AI models can currently learn from video, anyway.) If we’re talking about AI models learning how to do everything… everything in the world… that’s a lot of data we don’t have.

Fourth, many skills require adapting to changing situations in real time, with very little data. If AI systems continue to require more than 2,500x as much data as humans to learn the same thing (or less), there will never be enough data for AI systems to attain human-level general intelligence. If the strategy or tactics of StarCraft II change, AI systems will be left flatfooted. If the strategy or tactics of anything change, AI systems will be left flatfooted. If anything changes significantly enough that it no longer matches what was in the training data, AI systems that generalize as poorly as current AI systems will not succeed in that domain. Arguably, nearly all human occupations — and nearly all realms of human life — involve this kind of continuous change, and require a commensurate level of adaptability. Artificial general intelligence has always been a question of generalization, not just learning a bunch of narrowly construed skills that can be tested against frozen benchmarks or a frozen world — the world isn’t frozen.

This gets to the question of what “having a skill” really means. When we say a human has a certain skill, we implicitly mean they have the ability to adapt to change. If we say that MaNa can play StarCraft II, we mean that if he faces another professional player who suddenly tries some off-the-wall strategies or tactics never before seen in the game of StarCraft, he will be able to adapt on the fly. The element of surprise might trip him up in the first round, or the first five, but over the course of more games over more time, he will adapt and respond. He isn’t a collection of frozen weights instantiating frozen skills interacting with a frozen world, he’s a general intelligence that can generalize, evolved in a world that changes.

When we talk about what an AI system “can do”, what “skills it has”, we are often bending the definition so that what capability or skill means no longer fits the real-world, everyday definition we apply to humans. We don’t think about whether, as is always true for humans, the AI has the ability to adapt on the fly, to change in response to change, to generate non-random, intelligent, reasonable, novel behaviour in response to novelty. If AI can hit a fixed target, even though all targets in the real world are always and forever moving, we say that’s good enough, and that’s equivalent to what humans do. But it isn’t. And we know this. We just have to think about it.

One of the most talked-about imagined use cases of AI is to use AI recursively for AI research. But the job of a researcher is one of the most fluid, changing, unfrozen occupations I can think of. There is no way an AI system that can’t adapt to change with only a small amount of data can do research, in the sense that a human does research.

Fifth, even in contexts where the datasets are massive and the problems or tasks aren’t changing, AI systems can’t generalize. LLMs have been trained on millions of books, likely also millions of academic papers, everything in the Common Crawl dataset, and more. GPT-4 was released 2 years and 8 months ago. Likely somewhere around 1 billion people use LLMs. Why, in all the trillions of tokens generated by LLMs, is there not one example of an LLM generating a correct and novel idea in any scientific, technical, medical, or academic field? LLMs are equipped with as close as we can get to all the written knowledge in existence. They are prompted billions of times daily. Where is the novel insight? ChatGPT is a fantastic search engine, but a miserable thinker. Maybe we shouldn’t think that if we feed an AI model some training data, it will have mastery over much more than literally exactly the data we fed it. In other words, LLMs’ generalization is incredibly weak.

Generalization is not something that seems to be improved with scaling, except maybe very meagerly.[6] If we were to somehow scale the training data and compute for LLMs by another 1 million times (which is probably impossible), it’s not clear that, even then, LLMs could generate their first novel and correct idea in physics, biology, economics, philosophy, or anything else. I reckon this is something so broke scaling ain’t gonna fix it. This is fundamental. If we think of generalization as the main measure of AGI progress, I’m not sure there’s been much AGI progress in the last ten years. Maybe a little, but not a lot.

There have been many impressive, mind-blowing results in AI, to be sure. AlphaStar and ChatGPT are both amazing. But these are systems that rely on not needing to generalize much. They rely on a superabundance of data that covers a very large state space, and the state space in which they can effectively operate extends just barely beyond that. That’s something, but it’s not general intelligence.

Conclusion

General intelligence is (or at least, requires) the ability to learn quickly from very little new data. Deep learning and deep reinforcement learning, in their current state, require huge quantities of data or experience to learn. Data efficiency has been improving over the last decade, but not nearly fast enough to make up the gap between AI and humans within the next decade. The dominant view among people who think very near-term AGI is very likely (with high confidence) is that scaling up the compute and data used to train AI models will cover either all or most of the ground between AI and humans. I gave five reasons this isn’t true:

  1. Scaling reinforcement learning for LLMs much further is physically impossible.
  2. Unsolved fundamental research problems, such as learning from video data, are not problems of scale.
  3. For most human skills, the datasets needed for imitation learning simply don’t exist.
  4. Many skills require adapting to change in real time, with very little data.
  5. Even where data is abundant and tasks are static, current AI systems generalize very weakly.

It is always possible to hand-wave away any amount of remaining research progress that would be required to solve a problem. If I assume scientific and technological progress will continue for the next 1,000 years, then surely at some point the knowledge required to build AGI will be obtained. So, why couldn’t that knowledge be obtained soon? Well, maybe it could. Or maybe it will take much longer than 100 years. Who knows? We have no particular reason to think the knowledge will be obtained soon, and we especially have no reason to think it will be obtained suddenly, with no warning or lead up.

More practically, if this is what someone really believes, then arguably they should not have pulled forward their AGI forecast based on the last ten years of AI progress. Since almost all the energy around near-term AGI seems to be coming as a response to AI progress, and not a sudden conversion to highly abstract and hypothetical views about how suddenly AGI could be invented, I choose to focus on views that see recent AI progress as evidence for near-term AGI.

So, that amounts to arguing against the view that all or almost all or most of the fundamental knowledge to build AGI has already been obtained, and that what remains is entirely or almost entirely or mostly scaling up AI models by some number of orders of magnitude that is attainable within the next decade. Scaling is running out of steam. The data is running out, supervised pre-training has been declared over or strongly deemphasized by credible experts, and training via reinforcement learning won’t scale much further. This will probably become increasingly apparent over the coming years.

I don’t know to what extent people who have a high credence in near-term AGI will take this as evidence of anything, but it seems inevitable that the valuations of AI companies will have to come crashing down. AI models’ capabilities can’t catch up to the financial expectations those valuations are based on. I think people should take that as evidence because the real world is so much better a test of AI capabilities than artificially constructed, frozen benchmarks, which are always, in some sense, designed to be easy for current AI systems.

In general, people curious about the prospects of near-term AGI should engage more with real-world applications of AI, such as LLMs in a business context or with a robotics use case like self-driving cars, since the real world is much more like the real world than benchmarks, and AGI is defined by how it will perform in the real world, not on benchmarks. Benchmarks are a bad measure of AGI progress and without benchmarks, it’s not clear what other evidence for rapid AGI progress or near-term AGI there really is.

  1. ^

    I chose the end of 2032, or around 7 years from now, as a direct response to the sort of AGI timelines I’ve seen from people in effective altruism, such as the philosopher (and co-founder of effective altruism) Will MacAskill.

  2. ^

    On November 28, 2025 at 3:25pm Eastern, I edited this sentence and the previous sentence on the probability of AGI by 2032 to make minor corrections. The import of the preamble section is substantially unchanged.

  3. ^

    Edited on November 28, 2025 at 3:40am Eastern: I have a minor update on my AGI forecast. I now forecast a significantly less than 1 in 5,000 (or 0.02%) chance of AGI before the end of 2034.

  4. ^

    I haven’t really given any thought to how you’d do the math for this — obviously, it would just be a toy calculation, anyway — but I wouldn’t be surprised if you extrapolated the scaling of reinforcement learning compute forward to get to some endpoint that serves as a proxy for AGI and it turned out it would require more energy than is generated by the Sun and more minerals than are in the Earth’s crust. 

    For example, if you thought that AGI would require reinforcement learning training compute to be scaled up not just as much as it has been already, but by that much again one more time, then 1 trillion data centres would be required (more than 100 per person on Earth), and if by that much again two more times, then 1 quintillion data centres would be required (more than 100 million per person on Earth). But I suspect even this is far too optimistic. I suspect you’d start getting into the territory where you’d start counting the number of Dyson spheres required, rather than data centres.
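This toy extrapolation can be sketched in a few lines; the only input is the text’s own figure that each further full scale-up multiplies the data-centre requirement by roughly one million.

```python
# Toy extrapolation of the data-centre counts in the text. Assumption (from the
# text): one more full scale-up of RL training compute requires ~1 million data
# centres, and each additional scale-up multiplies that by another factor of 1e6.

BASE = 1_000_000  # data centres for one more full scale-up

for extra in range(1, 4):
    print(f"{extra} further scale-up(s): {BASE ** extra:.0e} data centres")
# 1 -> one million; 2 -> one trillion; 3 -> one quintillion
```

At two scale-ups, a trillion data centres is already more than 100 per person on Earth, matching the figure in the text.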

    Combinatorial explosion never stops producing shocking results. For instance, according to one calculation, all the energy in the observable universe (and all the mass, converted to energy), if used by a computer as efficient as physics allows, would not be sufficient to have more than one in a million chance of bruteforcing a randomly generated 57-character password using numbers, letters, and symbols. Reinforcement learning is far more efficient than brute force, but the state space of the world is also astronomically larger than the possible combinations of a 57-character password. We should be careful that the idea of scaling up compute all the way to AGI doesn’t implicitly assume harnessing the energy of billions of galaxies, or something like that.
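As an illustrative reconstruction of the password figure (not the cited calculation itself; the physical constants are rough textbook values, and charging each guess only the Landauer cost of erasing a single bit is a very generous lower bound on the energy per guess):

```python
import math

# Hypothetical sketch: could the observable universe's energy brute-force a
# 57-character password? Assumes 95 printable ASCII characters per position.

CHARSET = 95
LENGTH = 57
keyspace = CHARSET ** LENGTH  # ~10^112 possible passwords

K_B = 1.380649e-23   # Boltzmann constant, J/K
T_CMB = 2.7          # approximate cosmic microwave background temperature, K
landauer_j_per_guess = K_B * T_CMB * math.log(2)  # minimum energy to erase one bit

universe_mass_kg = 1.5e53                          # rough ordinary-matter estimate
universe_energy_j = universe_mass_kg * (3e8) ** 2  # E = mc^2

max_guesses = universe_energy_j / landauer_j_per_guess
fraction_searched = max_guesses / keyspace

print(f"Keyspace: ~10^{math.log10(keyspace):.0f}")
print(f"Fraction of keyspace searchable: ~10^{math.log10(fraction_searched):.0f}")
```

Under these assumptions the searchable fraction comes out many orders of magnitude below one in a million, consistent with the claim above.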

  5. ^

    My point here isn’t that we know it’s extremely unlikely the problem of how to learn from video data will be solved within the next 7 years. My point is that we have no idea when it will be solved. If people were saying that they had no idea when AGI will be created, I would have no qualms with that, and I wouldn’t have written this post. 

  6. ^

    Please don’t confuse, here, the concept of an AI model being able to do more things (or talk about more things) because it was trained on data about more things. That’s not generalization, that’s just training. Generalization is a system’s ability to think or to generate intelligent behaviour in situations that go beyond what was covered in the data it was trained on. 


SummaryBot @ 2025-11-10T14:52 (+4)

Executive summary: The author argues that artificial general intelligence (AGI) is extremely unlikely to emerge before 2032 (less than 0.1% chance), because current AI systems learn far more slowly and inefficiently than humans; scaling up data and compute cannot overcome these fundamental limits, and true general intelligence requires fast, flexible learning and generalization, not frozen skills trained on static datasets.

Key points:

  1. The author estimates less than a 1 in 1,000 probability of AGI by 2032, citing contradictions and unrealistic assumptions in near-term AGI forecasts and arguing that most proponents ignore fundamental limitations of current AI methods.
  2. Humans learn complex tasks, like StarCraft II, thousands of times faster than AI systems such as AlphaStar; this speed and adaptability, not raw skill replication, define general intelligence.
  3. Scaling AI models cannot bridge this gap: continuing current reinforcement learning trends would exceed global energy output and require impossible physical infrastructure.
  4. Fundamental research barriers — such as AI’s inability to learn effectively from video, the scarcity of key real-world datasets, and the need for fast adaptation to changing environments — make scaling insufficient.
  5. Even with vast data, large language models show weak generalization: despite billions of users and trillions of outputs, none have produced a verifiably novel scientific or technical insight.
  6. True general intelligence depends on flexible, data-efficient learning and robust generalization — abilities current AI paradigms lack — so near-term AGI expectations and related financial valuations are profoundly misplaced.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Yarrow Bouchard 🔸 @ 2025-11-10T15:44 (+2)

This summary seems mostly correct and maybe I'd give it like a B or B+. You can read this and decide whether you want to dig into the whole post. 

It's interesting to notice the details that SummaryBot gets wrong — there aren't "billions" of LLM users (and I didn't say there were). 

SummaryBot also sort of improvises the objection about "static datasets", which is not something I explicitly raised. My approach in the post was actually just to say: okay, let's assume AI systems could continually learn from new data or experience arriving in real time. In that case, their data efficiency would be far too low and their generalization would be far too poor to make them actually competent (in the way humans are competent) at most of the tasks or occupations that humans do that we might want to automate or might want to test AI's capabilities against. It's kind of funny that SummaryBot gets its hand on the ball and adds its own ideas to the mix.

David Mathers🔸 @ 2025-11-13T09:44 (+2)

For what it's worth, I think "less than 0.1% likely by 2032" is PROBABLY also not in line with expert opinion. The Forecasting Research Institute, where I currently work, has just published a survey of AI experts and superforecasters on the future of AI, as part of our project LEAP, the Longitudinal Expert Panel on AI. In it, the experts' and supers' median estimate was a 23% chance that the survey's "rapid scenario" for AI progress would occur by 2030. Here's how the survey described the rapid scenario:

"By the end of 2030, in the rapid-progress world, AI systems are capable of competing with the best human minds and workers, and can surpass them.

Human creativity and leadership remain valued, but mostly for setting high-level vision—day-to-day execution can be left to silicon-based systems. Autonomous researchers can collapse years-long research timelines into days, weeks, or months, creating game-changing technologies, such as materials that revolutionize energy storage, or bespoke cancer cures. No human freelance software engineer can outperform AI. The same goes for customer service (e.g., call center and support chat), paralegal, and administrative workers (e.g., bookkeepers or scheduling assistants).

Indeed, models have become so capable that AI can create an album of the same caliber as the Grammy Album of the Year. Additionally, a single AI agent can generate a Pulitzer- (or Booker Prize-) caliber novel according to current (2025) standards, adapt the book into an engaging two-hour movie, negotiate the resulting book and movie contracts, and launch the marketing campaigns for both while its sibling agents manage the book publishing company and movie studio at the level of highly competent CEOs.

Not only do Level-5 robo-taxis exist, but they are, on average, 99.9% safer than human-piloted cars and can venture anywhere off-road that a competent human driver can. Meanwhile, robots can navigate an arbitrary home anywhere in the world, make a cup of the most popular local hot beverage, clean and put away the dishes according to the local custom, fix any plumbing issues that arise while they’re doing the dishes--and they can do it all faster and more reliably than most humans and without human guidance. Robots in advanced factories can autonomously perform the full range of tasks requiring the highest levels of dexterity, coordination, and adaptive decision-making."

I don't think that necessarily amounts to "AGI", because, for example, maybe the AIs still can't replace manual labour (robotics), for software-related as well as hardware-related reasons. But I do think it's fair to infer that if the survey-takers thought there was a 23% chance of this scenario by 2030, it's pretty unlikely they don't think AGI is >0.1% likely by 2032.

Yarrow Bouchard 🔸 @ 2025-11-13T09:55 (+2)

Very interested to look at the survey. Can you link to it? 

I think there’s no chance of the rapid scenario, as in, much less than a 1 in 10,000 chance. I think an outlandish scenario like we find out JFK is actually still alive is more likely than that. Simply put, that will not happen. (99%+ confidence.)

David Mathers🔸 @ 2025-11-13T10:10 (+2)

There is an ambiguity about "capabilities" versus deployment here to be fair. Your "that will not happen" seems somewhat more reasonable to me if we are requiring that the AIs are actually deployed and doing all this stuff versus merely that models capable of doing this stuff have been created. I think it was the latter we were forecasting, but I'm not 100% certain. 

Yarrow Bouchard 🔸 @ 2025-11-13T10:12 (+2)

I think including widespread deployment in the rapid scenario makes it modestly more unlikely but not radically so. The fundamental issue is that these capabilities cannot be developed within 5 years. Is there a small chance? Yes, sure, but a very small one.

David Mathers🔸 @ 2025-11-13T10:03 (+2)

https://leap.forecastingresearch.org/  The stuff is all here somewhere, though it's a bit difficult to find all the pieces quickly and easily. 

For what it's worth, I think the chance of the rapid scenario is considerably less than 23%, but a lot more than under 0.1%. I can't remember the number I gave when I did the survey as a superforecaster, but maybe 2-3%? But I do think chances are getting rather higher by 2040, and it's good we are preparing now. 

 ". I think an outlandish scenario like we find out JFK is actually still alive is more likely than that"

If you really mean this literally, I think it is extremely obviously false, in a way that I don't think merely 0.1% is. 

Yarrow Bouchard 🔸 @ 2025-11-13T10:08 (+2)

No, I mean it literally. I literally think something as bizarre as it turning out somehow JFK has really been alive all this time and his assassination was a hoax is more likely than the rapid scenario. I don’t think that’s obviously false. I think it’s obviously correct.

For instance, Toby Ord has calculated it’s physically impossible to continue the scaling trend of RL training for LLMs. Bizarre and outlandish things are more likely than physically impossible things. That’s not all there is to say about the subject, but it’s a good start.

David Mathers🔸 @ 2025-11-13T10:20 (+2)

That's pretty incomprehensible to me even as a considerable skeptic of the rapid scenario. Firstly, you have experts giving a 23% chance and it's not moving you up even to, let's say, over 1 in 100,000, although the JFK scenario is probably a hell of a lot less likely than that: even if his assassination was faked, despite there literally being a huge crowd who saw his head get blown off in public, he would have to be 108 to still be alive. Secondly, in 2018, AI could do, to a first approximation, basically nothing outside of highly specialized uses like chess computers, which did not use current ML techniques. Meanwhile, this year, I, a philosophy PhD, asked Claude about an idea I had seriously thought about turning into a paper one day back when I was still in philosophy, and it came up with a very clever objection that I had not thought of myself. I am fairly, even if not 100%, sure that this objection is not in the literature anywhere. Given that we've gone from nothing to "high-quality philosophical arguments at times" in about 7 years, and there are some moderately decent reasons for thinking models good at AI research tasks could set off a positive feedback loop, and far more money and effort is being thrown at AI than ever before, it seems hard to me to think it is 99,999 in 100,000 sure that we won't get AGI by 2030, even though the distance to cross is still very large and current success on benchmarks somewhat misleading.

Yarrow Bouchard 🔸 @ 2025-11-13T10:59 (+2)

I didn’t update my views on the survey because I haven’t seen the survey. I did ask for the survey so I could see it. I haven’t seen it yet. I couldn’t find it on the website. I might change my mind after I see it. Who knows. 

I agree the JFK scenario is extremely outlandish and would basically be impossible. I just think the rapid scenario is more outlandish and would also basically be impossible. 

Everything you said about AI I just don’t think is true at all. LLMs are just another narrow AI, similar to AlphaGo, AlphaStar, AlphaFold, and so on, and not a fundamental improvement in generality that gets us closer to AGI. You shouldn’t have updated your AGI timelines based on LLMs. That’s just a mistake. Whatever you thought in 2018 about the probability of the rapid scenario, you should think the same now, or actually even less because more time has elapsed and the necessary breakthroughs have still not been made. So, what was your probability for the rapid scenario in 2018? And what would your probability have been if someone told you to imagine there would be very little progress toward the rapid scenario between 2018 and 2025? That’s what I think your probability should be. 

To say that AI’s capabilities were basically nothing in 2018 is ahistorical. The baseline from which you are measuring progress is not correct, so that will lead you to overestimate progress.

I also get the impression you greatly overestimate Claude’s capabilities relative to the cognitive challenges of generating the behaviours described in the rapid scenario.

AI being able to do AI research doesn’t affect the timeline. Here’s why. AI doing AI research requires fundamental advancements in AI to a degree that would make something akin to AGI or something akin to the rapid scenario happen anyway. So, whether AI does AI research can’t accelerate the point at which we reach the rapid scenario. There are no credible arguments to the contrary.

The vast majority of benchmarks are not just somewhat misleading if seen as evidence about AGI progress. They are almost completely meaningless in terms of AGI progress, with perhaps the sole exception of the ARC-AGI benchmarks. Text Q&A benchmarks are about as meaningful an indication of AGI progress as AI’s ability to play go or StarCraft.

There is also the physical impossibility problem. If continuing scaling trends is literally physically impossible, then how can the probability of the rapid scenario be more than 1 in 10,000? (By the way, I said less than 1 in 10,000, not less than 1 in 100,000, although I’m not sure it really matters.)

Someone should try to do the math on what it would take to scale RL training compute for LLMs to some level that could be considered a proxy for AGI or the sort of AI system that could make the rapid scenario possible. You will likely get some really absurd result. For example, I wouldn’t be surprised if the result was that the energy required would mean we’d have to consume the entire Sun, or multiple stars, or multiple galaxies. In which case, the speed of light would render the rapid scenario impossible.

Combinatorial explosion is just that crazy. There isn’t enough energy in the entire universe to brute force a 60-character password. RL is not the same thing as trying random combinations for a password, but there is an element of that in RL, and the state space of real world environments from the perspective of an AI agent is much, much larger than the possible combinations of a 60-character password. 
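The password claim is easy to sanity-check with back-of-envelope arithmetic. The sketch below uses my own assumed numbers, not anything from the comment above: 95 printable ASCII characters, the Landauer limit at the cosmic microwave background temperature as the most generous conceivable energy cost per operation, and roughly 10^70 J as the mass-energy of the observable universe.

```python
import math

ALPHABET = 95            # printable ASCII characters (assumption)
LENGTH = 60              # password length
K_B = 1.380649e-23       # Boltzmann constant, J/K
T_CMB = 2.7              # cosmic background temperature, K (most generous case)

# Landauer limit: theoretical minimum energy to erase one bit, in joules
landauer_j = K_B * T_CMB * math.log(2)

# log10 of the number of possible passwords: 60 * log10(95), about 118.7
combos_log10 = LENGTH * math.log10(ALPHABET)

# log10 of the energy (J) to try every combination at one bit-flip each
energy_log10 = combos_log10 + math.log10(landauer_j)

# rough mass-energy of the observable universe, log10 joules
universe_log10 = 70

print(f"~10^{combos_log10:.0f} possible passwords")
print(f"~10^{energy_log10:.0f} J to brute force at the Landauer limit")
print(f"exceeds the universe's energy budget: {energy_log10 > universe_log10}")
```

Even under these maximally charitable physical assumptions, the energy required comes out around 10^96 J, some 26 orders of magnitude more than the universe contains.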

David Mathers🔸 @ 2025-11-13T14:34 (+6)

Ok, there's a lot here, and I'm not sure I can respond to all of it, but I will respond to some of it. 

-I think you should be moved just by my telling you about the survey. Unless you are super confident either that I am lying/mistaken about it, or that the FRI was totally incompetent in assembling an expert panel, the mere fact that I'm telling you that the survey's median expert credence in the rapid scenario is 23% ought to make you think there is at least a pretty decent chance that you are giving it several orders of magnitude less credence than the median expert/superforecaster. You should already be updating on there being a decent chance that is true, even if you don't know for sure. Unless, that is, you already believed there was a decent chance you were that far out of step with expert opinion; but I think that just means you were already probably doing the wrong thing in assigning ultra-low credence. I say "probably" because the epistemology of disagreement IS very complicated, and maybe sometimes it's ok to stick to your guns in the face of expert consensus. 

-"Physical impossibility". Well, it's not literally true that you can't scale any further at all. That's why they are building all those data centers for eyewatering sums of money. Of course, they will hit limits eventually, and perhaps soon, probably monetary before physical. But you admit yourself that no one has actually calculated how much compute is needed to reach AGI. And indeed, that is very hard to do. Actually, Epoch, who as far as I can tell are far from believers in the rapid scenario, think quite a lot of recent progress has come from algorithmic improvements, not scaling: https://blog.redwoodresearch.org/p/whats-going-on-with-ai-progress-and (text search for "Algorithmic improvement" or "Epoch reports that we see"). So progress could continue to some degree even if we did hit limits on scaling. As far as I can tell, most of the people who do believe in the rapid scenario actually expect scaling of training compute to at least slow down a lot relatively soon, even though they expect big increases in the near future. Of course, none of this proves that we can reach AGI with current techniques just by scaling, and I am pretty dubious of that for any realistic amount of scaling. But I don't think you should be talking like the opposite has been proven. We don't know how much compute is needed for AGI with the techniques of today or the techniques available by 2029, so we don't know whether the needed amount of compute would breach physical or financial or any other limits. 

-LLM "Narrowness" and 2018 baseline: Well, I was probably a bit inexact about the baseline here. I guess what I meant was something like this. Before 2018ish, as a non-technical person, I never really heard anything about exciting AI stuff, even though I paid attention to EA a lot, and people in EA already cared a lot about AI safety and saw it as a top cause area. Since then, there has been loads of attention, literal founding fathers of the field like Hinton say there is something big going on, I find LLMs useful for work, there have been relatively hard-to-fake achievements like doing decently well on the Math Olympiad, and college students can now use AI to cheat on their essays, a task that absolutely would have been considered to involve "real intelligence" before ChatGPT. More generally, I remember a time, as someone who learnt a bit of cognitive science while studying philosophy, when the problem with AI was essentially being presented as "but we just can't hardcode all our knowledge in, and on the other hand, it's not clear neural nets can really learn natural languages". Basically, AI was seen as something that struggled with anything that involved holistic judgment based on pattern-matching and heuristics, rather than hard-coded rules. That problem now seems somewhat solved: we now seem to be able to get AIs to learn how to use natural language correctly, or play games like Go that can't be brute-forced by exact calculation, but rely on pattern recognition and "intuition". These AIs might not be general, but the techniques for getting them to learn these things might be a big part of how you build an AI that actually is, since they seem to be applicable to a large variety of kinds of data: image recognition, natural language, code, Go and many other games, information about proteins. The techniques for learning seem more general than many of the systems. That seems like relatively impressive progress for a short time to me as a layperson. 
I don't particularly think that should move anyone else that much, but it explains why it is not completely obvious to me that we could not reach AGI by 2030 at current rates of progress. And again, I will emphasize, I think this is very unlikely. Probably my median is that real AGI is 25 years away. I just don't think it is 1-in-a-million "very unlikely". 

I want to emphasize here though, that I don't really think anything under the 3rd dash here should change your mind. That's more just an explanation of where I am coming from, and I don't think it should persuade anyone of anything really. But I definitely do think the stuff about expert opinion should make you tone down your extremely extreme confidence, even if just a bit. 

I'd also say that I think you are not really helping your own cause here by expressing such an incredibly super-high level of certainty, and making some sweeping claims that you can't really back up, like that we know right now that physical limits have a strong bearing on whether AGI will arrive soon. I usually upvote the stuff you post here about AGI, because I genuinely think you raise good, tough questions for the many people around here with short timelines. (Plenty of those people probably have thought-through answers to those questions, but plenty probably don't and are just following what they see as EA consensus.) But I think you also have a tendency to overconfidence that makes it easier for people to just ignore what you say. This comes out in you doing annoying things you don't really need to do, like moving quickly in some posts from "scaling won't reach AGI" to "AI boom is a bubble that will unravel" without much supporting argument, when obviously, AI models could make vast revenues without being full AGI. It gives the impression of someone who is reasoning in a somewhat motivated manner, even as they also have thought about the topic a lot and have real insights. 

Yarrow Bouchard 🔸 @ 2025-11-13T18:15 (+2)

I think your suspicion toward my epistemic practices is based simply on the fact that you disagree very strongly, you don’t understand my views or arguments very deeply, you don’t know my background or history, and you’re mentalizing incorrectly.

[Edited on 2025-11-18 at 05:10 UTC to add fancy formatting.]

AI bubble

For example, I have a detailed collection of thoughts about why I think AI investment is most likely in a bubble, but I haven’t posted about that in much detail on the EA Forum yet — maybe I will, or maybe it’s not particularly central to these debates or on-topic for the forum. I’m not sure to what extent an AI bubble popping would even change the minds of people in EA about the prospects of near-term AGI. How relevant is it?

I asked on here to what extent the AI bubble popping would change people’s views on near-term AGI, and the only answer I got was that it wouldn’t move the needle. So I’m not sure if that’s where the argument needs to go. Just because I briefly mention a topic in passing doesn’t mean my full thoughts about it are really only that brief. It is hard to talk about these things and treat every topic mentioned, even off-handedly, in full detail without writing the whole Encyclopedia Britannica.

Also, I am much, much less sure about the AI bubble conclusion than I am about AGI or about the rapid scenario. It is extremely, trivially obvious that sub-AGI/pre-AGI/non-AGI systems could potentially generate a huge amount of profit and justify huge company valuations, and indeed I’ve written something like 100 articles about that topic over the last 8 years. I used to have a whole blog/newsletter solely about that topic and I made a very small amount of money doing freelance writing primarily about the financial prospects of AI. I actually find it a little insulting that you would think I have never considered that AI could be a big financial opportunity without AGI coming to fruition in the near term.

[Edited on 2025-11-16 at 01:05 UTC to add: I ended up covering the bubble topic here.]


LLM scaling

Here is Toby Ord on the physical limits to scaling RL training compute for LLMs:

Grok 4 was trained on 200,000 GPUs located in xAI’s vast Colossus datacenter. To achieve the equivalent of a GPT-level jump through RL would (according to the rough scaling relationships above) require 1,000,000x the total training compute. To put that in perspective, it would require replacing every GPU in their datacenter with 5 entirely new datacenters of the same size, then using 5 years worth of the entire world’s electricity production to train the model. So it looks infeasible for further scaling of RL-training compute to give even a single GPT-level boost.

This is not what it would take to get to AGI, it’s what it would take to get from Grok 4 to Grok 5 (assuming the scaling trend were to continue as it did from Grok 3 to Grok 4).

I am willing to say that, if Toby’s calculation is correct, it is very close to an absolute certainty that this level of scaling of RL training compute for LLMs — using 5x the world’s current annual electricity supply and 1 million datacentres — will not happen before the end of 2030.
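For what it’s worth, the scale in the quote can be roughly reproduced. The sketch below uses my own assumed parameters, not Toby’s (~700 W per GPU including overhead, a six-month training run, ~30,000 TWh of annual world electricity production); with these assumptions it comes out at tens of years of world electricity rather than Toby’s 5, but either way the conclusion is the same: plainly out of reach.

```python
# Rough reproduction of the scale in Toby Ord's quote, under my own assumed
# parameters (the quote's exact assumptions aren't given here).
CURRENT_GPUS = 200_000           # Grok 4's Colossus cluster (from the quote)
COMPUTE_MULTIPLIER = 1_000_000   # one GPT-level jump via RL (from the quote)
DATACENTERS_PER_GPU = 5          # from the quote

GPU_POWER_W = 700                # per-GPU draw incl. overhead (my assumption)
TRAINING_YEARS = 0.5             # six-month run (my assumption)
WORLD_TWH_PER_YEAR = 30_000      # approx. annual world electricity production

gpus_needed = CURRENT_GPUS * COMPUTE_MULTIPLIER
datacenters = CURRENT_GPUS * DATACENTERS_PER_GPU   # 1,000,000 datacenters

# energy in TWh: watts * hours, converted (1 TWh = 1e12 Wh)
energy_twh = gpus_needed * GPU_POWER_W * TRAINING_YEARS * 8760 / 1e12
years_of_world_electricity = energy_twh / WORLD_TWH_PER_YEAR

print(f"{datacenters:,} datacenters of {CURRENT_GPUS:,} GPUs each")
print(f"~{years_of_world_electricity:.0f} years of world electricity production")
```

The exact multiple of world electricity depends heavily on the assumed per-GPU power and run length, but no plausible choice of parameters brings it anywhere near feasibility before 2031.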

My comments about extrapolating scaling to AGI potentially requiring galaxies are not really the main point I’m trying to make about scaling; they are just to emphasize the problem with runaway exponential growth of this kind and the error in extrapolating its long-term continuation. This is for emphasis and illustration, not a strongly held view.

A number of prominent experts like OpenAI’s former chief scientist Ilya Sutskever have said self-supervised pre-training of LLMs has run out of steam or reached a plateau. Anthropic’s CEO Dario Amodei said that Anthropic’s focus has shifted from pre-training to RL training. So, at this point we are relying quite a lot on scaling up RL training for LLMs to get better as a result of training. (More discussion here.) Inference compute can also be scaled up, and that’s great, but you have to pay the inference cost on every query and can’t amortize it across billions or trillions of queries like you can with the training cost. Plus, you run into a similar problem: once you scale up inference compute 100x, and 100x again after that, the next 100x and the one after that start to become unwieldy.


Fundamental questions

On the philosophy of mind and cognitive science topics: I have been spiritually enthralled with these subjects since I was a teenager in the 2000s and 2010s, when I first read books like Daniel Dennett’s Consciousness Explained and Douglas Hofstadter’s I Am A Strange Loop, and, incidentally, was also interested in people talking about AGI like Ray Kurzweil and Nick Bostrom, and even read/watched/listened to some of Eliezer Yudkowsky’s stuff back then. For a long time I wanted to devote my life to studying them, and actually I still would like to do that if I could somehow figure out a practical path for it in life. As a philosophy undergrad, I published an essay on the computational theory of mind in an undergrad journal and, unfortunately, that’s the closest I’ve come to making any kind of contribution to the field.

I’ve been following AI closely for a long time and I can imagine how you might have a distorted view of things if you see generative AI as having come basically out of nowhere. I started paying attention to deep learning and deep reinforcement learning around the time DeepMind showed its results with deep RL and Atari games. I really ramped up how much I started paying attention in 2017 when I started to think really seriously about self-driving cars. So, LLMs were quite a surprise for me, just as they were for many people, but they didn’t drop out of the clear blue sky. I already had an intellectual context to put them in.

If you want to read some more substantive objections to the rapid scenario, there are some in the post above. I’m not sure if you read those or just focused on the forecasting part in the preamble. The rapid scenario depends on a number of fundamental improvements in AI, including (but not limited to) vastly improved generalization, vastly improved data efficiency, the ability to learn effectively and efficiently from video data, and continual learning. These are not challenges that can be solved through scaling, full stop. And the rapid scenario cannot happen without solving them, full stop. There is more to say, but that’s a good start.


Expert survey

On the survey: I might update on it once I see it, but I need to see it first. I’d love to see it and I’ll keep an open mind when I do. There are some entirely epistemically legitimate reasons for me not to update on evidence I can’t see or confirm, especially when that’s evidence only about other people’s views and not direct evidence, and all the more so when whether it’s actually even new information to me about other people’s views depends on details, such as who was surveyed, which I can’t see and don’t know.

There are strong concerns related to the concept of information cascades where, e.g., if you just survey the same people over and over and repackage it as new evidence, that would lead you to keep revising your credences upwards (or downwards) with no limit based on the same piece of evidence, or, e.g., people will circularly update their views — I tell you my view, you update based on that, you tell me your updated view (changed only because of me telling you my view), I update based on that, and so on and so forth, until we end up a significant way from where we started for no good reason. In case you think this is a silly hypothetical, I read a post or comment somewhere (I could dig it up) where someone who had been involved in the Bay Area rationalist community said they think this kind of circular updating actually happened. The imagery they gave was people sitting in a circle telling each other smaller and smaller numbers for the median date of AGI.

[Edited on 2025-11-14 at 07:52 UTC to add: I was able to find the report. It was easy to find. I was just looking on the wrong webpage before. The discussion of the question about the "rapid progress" scenario on page 104 and the responses on page 141 is confusing. Respondents are asked, "At the end of 2030, what percent of LEAP panelists will choose “slow progress,” “moderate progress,” or “rapid progress” as best matching the general level of AI progress?" I find that a really strange and counterintuitive way to frame the question. How is this a proxy for probability that the scenario will occur? The framing of the question is highly ambiguous and the answers are highly ambiguous and hard to interpret.

Three rationale examples are given for the rapid progress scenario and all three contradict the rapid progress scenario. How are the rationale examples selected? Was there not one example of a respondent who actually thought the rapid progress scenario would occur? I don't understand this.

This is precisely why I don't update on evidence before seeing it. The devil is in the details.

The rationale examples are useful and I'm glad they are included. They show problems both with the design of the survey and with the reasoning and evidence used by some of the respondents to come to conclusions about near-term AI progress, e.g., the famous METR time horizon graph is erroneously interpreted in a way that overlooks the crucial caveats, some of which even METR itself highlights. Instead of only measuring what METR measures, researchers should also measure something like performance on a diverse, heterogeneous array of manually graded real world or realistic tasks with the same success rate as humans. The result would be entirely different, i.e., approximately a flat line at zero rather than the appearance of an exponential trend.

I'll also add that asking respondents to choose only between the slow progress, moderate progress, and rapid progress scenarios is really poor survey design. All three scenarios arguably include proxies for or operationalizations of AGI, and respondents are not given the option to say no to all of them. Even the slow progress scenario says that AI "can automate basic research tasks, generate mediocre creative content, assist in vacation planning, and conduct relatively standard tasks that are currently (2025) performed by humans in homes and factories." AI can also "rarely produce novel and feasible solutions to difficult problems." And AI "can handle roughly half of all freelance software-engineering jobs that would take an experienced human approximately 8 hours to complete in 2025", write "full-length novels", "make a 3-minute song that humans would blindly judge to be of equal quality to a song released by a current (2025) major record label", and largely substitute for "a competent human assistant".

So, respondents were given a choice between AGI, AGI, and AGI, and chose AGI. This is not a useful survey! You are not giving the respondents a chance to say no! You are baking in the result into the question!

Another serious problem with the survey is the percentage of respondents affiliated with effective altruism. On page 20, the report says 28% of respondents were affiliated with effective altruism and that was reweighted down to 12%. This is exactly the problem with information cascades and circular updating that I anticipated. I don't need a new survey of people affiliated with effective altruism to tell me what people affiliated with effective altruism believe about AI. I already know that.

Another significant problem is that only around 45% of the experts have technical expertise in AI. But now I'm just piling on. 

You absolutely should not have told me to update on this survey before actually looking at it.]

[Edited on 2025-11-18 at 04:57 UTC to add: I made a post about the Forecasting Research Institute report, specifically about the content of the slow progress scenario and the framing of that question.]

Unfortunately, this is a debate where forecasts can’t be practically falsified or settled. If January 2031 rolls around and AI has still only made modest, incremental progress relative to today, the evidence will still be open to interpretation as to whether a 97-98% chance that the rapid scenario wouldn’t happen was more reasonable, or a 99.99%+ chance. We can’t agree on how to interpret similar evidence today. I have no reason to think it would be any easier to come to an agreement on that in January 2031.

It is an interesting question, as you say, how to update our views based on the views of other people — whether, when, why, and by how much. I was surprised to recently see a survey where around 20% of philosophers accept or lean toward a supernatural explanation of consciousness. I guess it’s possible to live in a bubble where you can miss that lots of people think so differently than you. I would personally say that the chances that consciousness is a supernatural phenomenon are less than the rapid scenario. And that survey didn't make me revise up my credence in the supernatural at all. (What about you?)

I will say that the rapid scenario is akin to a supernatural miracle in the radical discontinuity to our sense of reality it implies. It is more or less the view that we will invent God — or many gods — before the 2032 Summer Olympics in Brisbane. You should not be so quick to chide someone for saying this is less than 0.01% likely to happen. Outside of the EA/rationalist/Bay Area tech industry bubble, this kind of thing is widely seen as completely insane and ludicrous.

In my interpretation, the rapid scenario is an even higher bar than "just" inventing AGI, it implies superhuman AGI. So, for instance, a whole brain emulation wouldn't qualify. An AGI that is "merely" human-level wouldn't qualify. I can't make a Grammy-calibre album, write a Pulitzer or Booker Prize-calibre book, or make a Hollywood movie, nor can I run a company or do scientific research, and I am a general intelligence. The rapid scenario implies superhuman AGI or superintelligence, so it's less likely than "just" AGI.


Meta discussion

Please forgive me for how long this comment is, but I suddenly felt the need to say... the following...

I'm starting to get the temptation to ask you questions like, "What probability would you put on the core metaphysical and cosmological beliefs of each of the major world religions turning out to be correct?" which is a sign this conversation is getting overly meta, overly abstract, and veering into "How many angels can dance on the head of a pin?" territory. I actually am fascinated with epistemology and want to explore some of these questions more (but in another context than this comment thread that would be more appropriate). I am a bit interested in forecasting, but not fascinated, and kind of would like to understand it better (I don't understand it very well currently). I would particularly like to understand forecasting better as it pertains to the threshold or demarcation between topics it is rigorous to forecast about, for which there is evidence of efficacy, such as elections or near-term geopolitical events, and topics for which forecasting is unrigorous, not supported by scientific evidence, and probably inappropriate, such as, "What is the probability of the core tenets of Hindu theology such as the identity of Atman and Brahman being correct?"

My personal contention is that actually a huge problem with the EA Forum (and also LessWrong, to an even worse extent) is how much time, energy, and attention gets sucked into these highly abstract meta debates. To me, it's like debating about whether you should update your probability of whether there's peanut butter in the cupboard based on my stated probability of whether there's peanut butter in the cupboard, when we could just look in the cupboard. The abstract content of that debate is actually pretty damn interesting, and I would like to take an online course on that or something, but that's the indulgent attitude of a philosophy student and not what I think practically matters here. I simply want more people to engage substantively with the object-level points I'm making, e.g. about learning from video data, generalization, data efficiency, scaling limits, and so on. That's "looking in the cupboard". I could be wrong about everything. I could be making basic mistakes. I don't know. What can I do except try to have the conversation?

By the way, when I give my probabilities for something, I am just trying to faithfully and honestly report, as a number, what I think my subjective or qualitative sense of the likelihood of something implies. I am not necessarily making an argument that anyone else should have that same probability. I just want them to talk to me about the object-level issues. The probabilities are a side thing that I need to get out of the way to talk about that. I don't intend me reporting my best guess at how my intuitions translate into numbers as an insult against anyone. This passes the reversibility test for me: if someone says they think their probability for something is 1,000x higher or lower than mine, I don't interpret that as an insult.

So, I don't think it is impolite for me to express the numbers that are my best guess. I do kind of accept that I will be less persuasive if I say a number that seems too extreme, which is why I've been kind of softballing/sandbagging what I say about this. Also, I think if someone says "significantly less than 1%" or even just "less than 1%" or "1%", that's enough to motivate the discussion of object-level topics and to move on from the guessing-probabilities portion of the conversation. So, it's kind of irrelevant whether I say less than 1 in 1,000, less than 1 in 10,000, less than 1 in 100,000, or less than 1 in 1 million. Yes, I get that these are very different probabilities (each one an order of magnitude lower than the last!), but from the perspective of just hurrying along to the object-level discussion, it doesn't really make a difference.

I almost would be willing to accept that I should sandbag my actual probability even more for the sake of diplomacy and persuasion, and just say "less than 1%" or something like that. But that seems a little bit morally corrupt. Maybe "morally corrupt" is too strong, but, hey, I'd rather just be transparent and honest than water things down to be more persuasive to people who are very far away from me on this topic. (The question of how to integrate considerations about both diplomacy and frankness into one's communications is another fascinating topic, but also another diversion away from the object-level issues pertaining to the prospects of near-term AGI.)

Some people in this community sometimes like to pretend they don't have feelings and are just calculating machines running through numbers, but the emotion is betrayed anyway. The undercurrent of this conversation is that some people take offense at my views or find them irritating, and I have an incentive to placate them if I want to engage them in conversation. I accept that that is true. I am no diplomat or mediator, and I don't feel particularly competent at persuasion.

My honest motivation for engaging in these debates is mostly sheer boredom, curiosity, and a desire for intellectual enrichment and activity. Yeah, yeah, there is some plausible social benefit or moral reason to do this by course correcting effective altruism, but I'm kind of 50/50 on my p(doom) for effective altruism anyway, and I think the chances are slim that I'm going to make a dent in that. So, if it were just a grinding chore, I wouldn't do it.

Anyway, all this is to say: please just talk to me about the object-level issues, and try to keep the rest of it (e.g. getting into the weeds of forecasting, open questions in epistemology that philosophers and other experts don't agree on, abstract meta debates) low, and only bring it up when it's really, really important. (Not just you, personally, David, this is my general request.) I'm dying to talk about the object-level issues, and somehow I keep ending up talking about this meta stuff. (I am easily distractible and will talk forever about all sorts of topics, even topics that don't matter and don't relate to the issue I originally wanted to talk about, so it's my fault too.)