If an AI financial bubble popped, how much would that change your mind about near-term AGI?
By Yarrow Bouchard🔸 @ 2025-10-21T22:39 (+19)
If the people arguing that there is an AI bubble turn out to be correct and the bubble pops, to what extent would that change people's minds about near-term artificial general intelligence (AGI)?
I strongly suspect there is an AI bubble because the financial expectations around AI seem to be based on AI significantly enhancing productivity, and the evidence so far suggests it doesn't do that yet. This could change, and I think that's what a lot of people in the business world are thinking and hoping. But my view is that a) large language models (LLMs) have fundamental weaknesses that make this unlikely and b) scaling is running out of steam.[1]
Scaling running out of steam actually means three things:
1) Each new 10x increase in compute is less practically or qualitatively valuable than previous 10x increases in compute. (A toy numeric sketch of this point follows the list.)
2) Each new 10x increase in compute is getting harder to pull off because the amount of money involved is getting unwieldy.
3) There is an absolute ceiling on the amount of data LLMs can train on, and they are probably approaching it.
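To make point 1 concrete, here is a toy numeric sketch in Python. It assumes a Chinchilla-style power law relating training compute to loss; the constants `a` and `alpha` are made up purely for illustration and are not estimates of real scaling behaviour. The only point is that, under any such power law, each additional 10x of compute buys a smaller absolute improvement than the previous 10x did.

```python
# Toy illustration only, not a claim about real scaling constants:
# assume a Chinchilla-style power law, loss(C) = a * C**(-alpha),
# with made-up values for a and alpha.
a, alpha = 100.0, 0.05

def loss(compute_flop: float) -> float:
    """Hypothetical training loss as a function of training compute."""
    return a * compute_flop ** (-alpha)

previous = loss(10.0 ** 22)
for exponent in range(23, 28):  # 1e23 ... 1e27 FLOP
    current = loss(10.0 ** exponent)
    # Each 10x multiplies loss by the same factor (10**-alpha, ~0.89 here),
    # so the absolute improvement per 10x keeps shrinking.
    print(f"1e{exponent} FLOP: loss {current:.2f}, gain from this 10x: {previous - current:.2f}")
    previous = current
```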
So, AI investment rests on financial expectations that depend on LLMs enhancing productivity, which isn't happening and probably won't happen, both because of fundamental problems with LLMs and because scaling is becoming less valuable and less feasible. This implies an AI bubble, which implies the bubble will eventually pop.
There are also hints here and there that the companies involved may themselves have started to worry or become a bit desperate. For example, Microsoft ended its exclusive deal to provide compute to OpenAI reportedly out of fears of overbuilding data centres. Some analysts and journalists have become suspicious of what looks like circular financing or round-trip deals between companies. Part of the worry is that, to greatly simplify, if Nvidia gives OpenAI $1 and OpenAI gives Nvidia $1, both companies can put an additional $1 in revenue on their books, but this isn't organic revenue from actually meeting the demand of consumers or businesses. If these deals get too complex and entangled (especially if some of them aren't even known to investors), it might become hard to distinguish the real financial performance investors care about from what's simply an artifact of accounting practices.[2]
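Here is a deliberately oversimplified sketch of that round-trip worry, following the "$1 each way" framing above. The company names ("ChipCo", "LabCo") and numbers are made up, and real deals involve equity stakes, prepaid compute, and revenue-recognition rules that are far more complicated than this.

```python
# Deliberately oversimplified sketch of the round-trip worry described above.
# Company names and numbers are made up; real accounting is more complicated.
books = {"ChipCo": {"revenue": 0.0}, "LabCo": {"revenue": 0.0}}

def round_trip(amount: float) -> None:
    """ChipCo sends money to LabCo, LabCo spends it back on ChipCo's chips,
    and (in this simplified framing) each side books the incoming dollar as revenue."""
    books["LabCo"]["revenue"] += amount
    books["ChipCo"]["revenue"] += amount

round_trip(1.0)
print(books)  # both revenue lines grew by $1.0 ...
# ... yet no outside customer or business paid anything, so the headline numbers
# say little about organic demand, which is the concern described above.
```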
So, if the bubble pops, will that lead people who currently have a much higher estimation of LLMs' current capabilities and near-term prospects than I do to lower that estimation? If AI investment turns out to be a bubble, and it pops, would you change your mind about near-term AGI? Would you think it's much less likely? Would you think AGI is probably much farther away?
[1] Edited on October 22, 2025 at 2:35pm Eastern to add: Toby Ord, a philosopher at Oxford and a co-founder of Giving What We Can, just published a very compelling post about LLM scaling that I highly recommend reading.
[2] I always warn people who want to get into stock picking that you should have a very high bar for second-guessing the market. I also agree with the standard advice that trying to time the market is incredibly risky and most likely unwise in any instance. So, that's the caveat.
I will note, however, that the degree of concentration in AI stocks within the S&P 500 seems to have reduced its diversification by a worrying amount. In the past, it felt like quibbling around the margins to talk about the difference between an S&P 500 index fund and funds that track the performance of a broad international basket of stocks, including small-cap stocks. Now, I worry about people who have all their money in the S&P 500. But this is not investment advice and you should talk to a professional if you can, ideally one who has a fiduciary duty to you and doesn't have a conflict of interest (e.g. being incentivized to sell you mutual funds with expensive fees).
Jack_S🔸 @ 2025-10-22T09:17 (+9)
I think there are two categories of answer here: 1) Finance as an input towards AGI, and 2) Finance as an indicator of AGI.
For 1), regardless of whether or not you think current LLM-based AI has fundamental flaws, the fact that insane amounts of capital are going into 5+ competing companies providing commonly-used AI products should be strong evidence that the economics are looking good, and that if AGI is technically possible using something like current tech, then all the incentives and resources are in place to find the appropriate architectures. If the bubble were suddenly to burst completely, then even if we believed strongly that LLM-based AGI is imminent, there might be no more free money, and we'd now have an economic bottleneck to training new models. In this scenario, we'd have to update our timelines/estimates significantly (especially if you think straightforward scaling is our likely pathway to AGI).
For 2), probably not; it depends on the situation. Financial markets are fickle enough that the bubble could pop for a bunch of reasons unrelated to current model trends: for example, rare-earth export controls having an impact, slightly lower uptake figures, the decision of one struggling player (e.g. Meta) to leave the LLM space, or one highly-hyped but ultimately disappointing application. If I were unsure of the reason, would I assume that the market knows something I don't? Probably not. I might update slightly, but I'm not sure to what extent I'd trust the market to provide more valuable information about AGI than direct information about model capabilities and diffusion.
But of course, if we do update on market shifts, it has to be at least somewhat symmetrical. If a market collapse would lengthen your timelines, insane market growth should shorten your timelines for the same reason.
Yarrow Bouchard🔸 @ 2025-10-22T13:03 (+1)
the fact that insane amounts of capital are going into 5+ competing companies providing commonly-used AI products should be strong evidence that the economics are looking good
Can you clarify what you mean by "the economics are looking good"? The economics of what are looking good for what?
I can think of a few different things this could mean, such as:
- The amount of capital invested, the number of companies investing, and the number of users of AI products indicates there is no AI bubble
- The amount of capital invested (and the competition) is making AGI more likely/making it come sooner, primarily because of scaling
- The amount of capital invested (and the competition) is making AGI more likely/making it come sooner, primarily because it provides funding for research
Those aren’t the only possible interpretations, but those are three I thought of.
if AGI is technically possible using something like current tech, then all the incentives and resources are in place to find the appropriate architectures.
You’re talking about research rather than scaling here, right? Do you think there is more funding for fundamental AI research now than in 2020? What about for non-LLM fundamental AI research?
The impression I get is that the vast majority of the capital is going into infrastructure (i.e. data centres) and R&D for ideas that can quickly be productized. I recall that the AI researcher/engineer Andrej Karpathy rejoined OpenAI (his previous employer) after leaving Tesla, but ended up leaving OpenAI after not too long because the company wanted him to work on product rather than on fundamental research.
Matrice Jacobine @ 2025-10-22T14:40 (+3)
You’re talking about research rather than scaling here, right? Do you think there is more funding for fundamental AI research now than in 2020? What about for non-LLM fundamental AI research?
Most of OpenAI’s 2024 compute went to experiments
Yarrow Bouchard🔸 @ 2025-10-22T16:22 (+3)
This is what Epoch AI says about its estimates:
Based on our compute and cost estimates for OpenAI’s released models from Q2 2024 through Q1 2025, the majority of OpenAI’s R&D compute in 2024 was likely allocated to research, experimental training runs, or training runs for unreleased models, rather than the final, primary training runs of released models like GPT-4.5, GPT-4o, and o3.
That's kind of interesting in its own right, but I wouldn't say that money allocated toward training compute for LLMs is the same idea as money allocated to fundamental AI research, if that's what you were intending to say.
It's uncontroversial that OpenAI spends a lot on research, but I'm trying to draw a distinction between fundamental research, which, to me, connotes things that are more risky, uncertain, speculative, explorative, and may take a long time to pay off, and research that can be quickly productized.
I don't understand the details of what Epoch AI is trying to say, but I would be curious to learn.
Do unreleased models include as-yet unreleased models such as GPT-5? (The timeframe is 2024 and OpenAI didn't release GPT-5 until 2025.) Would it also include o4? (Is there still going to be an o4?) Or is it specifically models that are never intended to be released? I'm guessing it's just everything that hasn't been released yet, since I don't know how Epoch AI would have any insight into what OpenAI intends to release or not.
I'm also curious how much trial and error goes into training LLMs. Does OpenAI often abort training runs or find the results disappointing? How many partial or full training runs go into training one model? For example, what percentage of the overall cost does the estimated $400 million for GPT-4.5's final training run represent? 100%? 90%? 50%? 10%?
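Purely as illustration of why the answer matters, here is the arithmetic for a few hypothetical shares. The $400 million figure is the estimate quoted in this thread; the share values are made up.

```python
# Illustrative arithmetic only: if GPT-4.5's final training run cost roughly
# $400 million (the estimate quoted above), what would different answers to
# "what share of the overall cost was the final run?" imply about the total
# spent on experiments, aborted runs, and the final run combined?
final_run_cost_usd = 400e6

for share in (1.0, 0.9, 0.5, 0.1):
    implied_total = final_run_cost_usd / share
    print(f"final run = {share:.0%} of total  ->  implied total ~${implied_total / 1e6:,.0f}M")
```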
Overall, this estimate from Epoch AI doesn't seem to tell us much about what amount of money or compute OpenAI is allocating to fundamental research vs. R&D that can quickly be productized.
Jack_S🔸 @ 2025-10-22T18:55 (+1)
When I say “the economics are looking good,” I mean that the conditions for capital allocation towards AGI-relevant work are strong. Enormous investment inflows, a bunch of well-capitalised competitors, and mass adoption of AI products mean that, if someone has a good idea to build AGI within or around these labs, the money is there. This is almost a trivial point: if there were significantly less capital, then labs couldn't afford extensive R&D, hardware, or large-scale training runs.
WRT scaling vs. fundamental research, obviously "fundamental research" is a bit fuzzy, but it's pretty clear that the labs are doing a bit of everything. DeepMind is the most transparent about this: they're doing Gemini-related model research, fundamental science, AI theory and safety, etc., and have published thousands of papers. But I'm sure a significant proportion of OpenAI and Anthropic's work can also be classed as fundamental research.
Yarrow Bouchard🔸 @ 2025-10-22T22:24 (+3)
The overall concept we're talking about here is to what extent the outlandish amount of capital that's being invested in AI has increased budgets for fundamental AI research. My sense of this is that it's an open question without a clear answer.
DeepMind has always been doing fundamental research, but I actually don't know if that has significantly increased in the last few years. For all I know, it may have even decreased after Google merged Google Brain and DeepMind and seemed to shift focus away from fundamental research and toward productization.
I don't really know, and these companies are opaque and secretive about what they're doing, but my vague impression is that ~99% of the capital invested in AI over the last three years is going toward productizing LLMs, and I'm not sure it's significantly easier to get funding for fundamental AI research now than it was three years ago. For all I know, it's harder.
My impression comes from anecdotes from AI researchers. I already mentioned Andrej Karpathy saying that he wanted to do fundamental AI research at OpenAI when he rejoined in early 2023, but the company wanted him to focus on product. I got the impression he was disappointed, and I think this is a reason he ultimately quit a year later. My understanding is that during his previous stint at OpenAI, he had more freedom to do exploratory research.
The Turing Award-winning researcher Richard Sutton said in an interview something along the lines of: no one wants to fund basic research, and it's hard to get money to do basic research. Sutton personally can get funding because of his renown, but I don't know about lesser-known researchers.
A similar sentiment was expressed by the AI researcher François Chollet here:
Now LLMs have sucked the oxygen out of the room. Everyone is just doing LLMs. I see LLMs as more of an off-ramp on the path to AGI actually. All these new resources are actually going to LLMs instead of everything else they could be going to.
If you look further into the past to like 2015 or 2016, there were like a thousand times fewer people doing AI back then. Yet the rate of progress was higher because people were exploring more directions. The world felt more open-ended. You could just go and try. You could have a cool idea of a launch, try it, and get some interesting results. There was this energy. Now everyone is very much doing some variation of the same thing.
Undoubtedly, there is an outrageous amount of money going toward LLM research that can be quickly productized, toward scaling LLM training, and toward LLM deployment. Initially, I thought this meant the AI labs would spend a lot more money on basic research. I was surprised each time I heard someone such as Karpathy, Sutton, or Chollet give evidence in the opposite direction.
It's hard to know what's God's honest truth and what's bluster from Anthropic. But if they honestly believe they will create AGI in 2026 or 2027, as Dario Amodei has seemed to say, and if they believe they will achieve this mainly by scaling LLMs, then why would they invest much money in basic research that isn't related to LLMs or to scaling them, and that, even if it succeeds, probably won't be productizable for at least three years? Investing in diverse basic research would be hedging their bets. Maybe they are, or maybe they're so confident that they feel they don't have to. I don't know.