The Prospect of an AI Winter

By Erich_Grunewald 🔸 @ 2023-03-27T20:55 (+56)

Matthew_Barnett @ 2023-03-28T20:24 (+22)

Algorithmic progress on ImageNet seems to effectively halve compute requirements every 4 to 25 months (Erdil and Besiroglu 2022); assume that the doubling time is 50% longer for transformers.

I think it's important not to take the trend in algorithmic progress too literally. At the moment, we only really know the rate for computer vision, which might be very different from the rate for other tasks. The confidence interval is also quite wide, as you mentioned (the 5th percentile is 4 months and the 95th percentile is 25 months). And algorithmic progress is plausibly driven by increasing algorithmic experimentation over time, which might become bottlenecked once the relevant pool of research talent is exhausted or we reach hardware constraints. For these reasons, I have wide uncertainty about the rate of general algorithmic progress in the future.
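To get a feel for how much that interval matters, here is a minimal sketch in Python. The 4- and 25-month endpoints are the figures from Erdil and Besiroglu 2022; the ~9-month middle value and the 10-year horizon are placeholders chosen purely for illustration:

```python
# Minimal sketch: effective compute multiplier implied by algorithmic
# progress alone, for different compute-requirement halving times.

def effective_compute_multiplier(halving_time_months: float, horizon_months: float) -> float:
    """Factor by which a fixed compute budget is 'stretched' if
    compute requirements halve every `halving_time_months` months."""
    return 2 ** (horizon_months / halving_time_months)

horizon = 10 * 12  # a 10-year horizon, chosen arbitrarily for illustration
for halving_time in (4, 9, 25):  # 5th pct., rough middle value, 95th pct.
    mult = effective_compute_multiplier(halving_time, horizon)
    print(f"halving time {halving_time:>2} months -> ~{mult:,.0f}x effective compute")
```

The two endpoints differ by more than seven orders of magnitude in implied effective compute over a decade, which is exactly why a naive point-estimate extrapolation overstates our confidence.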

In my experience, fast algorithmic progress is often the component that yields short timelines in compute-centric models. And yet, both the rate of algorithmic progress and the mechanism behind it are very poorly understood. Extrapolating this rate naively gives a false impression of high confidence about the future, in my opinion. Assuming that the rate is exogenous gives the arguably false impression that we can't do much to change it. I would be very careful before interpreting these results.

aogara @ 2023-03-28T00:14 (+17)

Nicely written, these make a lot of sense to me. My case for AI winter would focus on two tailwinds that will likely cease by the end of the decade: money and data. 

  1. Money can't continue scaling like this. Spending on training runs has gone up by about an order of magnitude every two years over the last decade. By 2032 this trend would put us at $100B training runs, which would be 3x Google's entire annual R&D budget (see the extrapolation sketch after this list). If TAI doesn't emerge before then, spending growth will need to slow down, likely slowing AI progress as well.
  2. Maybe data can't either. Full analysis here, but basically high-quality language data will run out well before 2030, possibly within the next year or two. But there are other kinds of data that could continue scaling. For example, the Epoch report discusses low-quality language data like private texts and emails as one possibility. I would look more to the transition from language models to multimodal vision-and-language models as an important trend, not only because vision is a useful modality, but because it would allow data scaling to continue.
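To make the money extrapolation in point 1 concrete, here is a minimal sketch. The ~$1M 2022 baseline is an assumption back-solved so that the trend lands on the ~$100B endpoint; only the 10x-every-two-years growth rate is the trend cited above:

```python
# Sketch of the spending extrapolation in point 1. The 2022 baseline is
# an assumption back-solved from the ~$100B 2032 endpoint; the 10x-every-
# two-years growth rate is the decade-long trend cited above.

BASE_YEAR, BASE_COST = 2022, 1e6  # assumed ~$1M frontier training run in 2022

def projected_cost(year: int) -> float:
    """Training-run cost if spending grows 10x every two years."""
    return BASE_COST * 10 ** ((year - BASE_YEAR) / 2)

for year in range(2024, 2034, 2):
    print(f"{year}: ${projected_cost(year):,.0f}")
# 2032 comes out at ~$100B, roughly 3x Google's annual R&D budget,
# which is why the growth seems hard to sustain past this decade.
```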

I'd like to have a better view of questions about the continuation of Moore's Law. Without doing a full writeup: the claims about Moore's Law ending seem more credible now than in the past. I would be really interested in an aggregation of historical forecasts about Moore's Law, to see whether the current doomsaying is any different from the long-run trend.

NickLaing @ 2023-03-28T08:21 (+5)

On the data front, it seems like ChatGPT and other AIs don't have access to the mass of peer-reviewed journals yet. Obviously this isn't (relatively speaking) a huge quantity of data, but the quality would be orders of magnitude higher than what they are looking at now. Could access to these change things much at all?

aogara @ 2023-03-28T08:50 (+19)

That’s a reasonable point, but I don’t think peer reviewed journals would make much difference.

The Pile (https://arxiv.org/pdf/2101.00027.pdf) is a large dataset used for training lots of models. It includes the academic datasets Arxiv, FreeLaw, and PubMed Central, which contain 50GB, 50GB, and 100GB of data respectively. Table 7 says each byte is ~0.2 tokens, so that’s about 40B tokens to represent a good chunk of the academic literature on several subjects. If we had a similarly-sized influx of peer reviewed journals, would that change the data picture?

Chinchilla, a state-of-the-art language model released by DeepMind one year ago, was trained on ~1.4T tokens. Only four years prior, BERT was a SOTA model trained on ~6B tokens. If we assume the Pile includes only 10% of existing academic literature, then the full body of peer-reviewed journals would represent ~400B tokens, of which ~360B would be a new influx, increasing available data by roughly 25% over Chinchilla. This would meaningfully expand the dataset, but not by the orders of magnitude necessary to sustain scaling for months and years.
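For concreteness, here is the same arithmetic as a minimal Python sketch; the dataset sizes and the ~0.2 tokens/byte ratio are from The Pile paper, and the 10% coverage figure is the assumption made above:

```python
# Back-of-the-envelope version of the data estimate above.

GB = 1e9
pile_academic_bytes = (50 + 50 + 100) * GB  # ArXiv + FreeLaw + PubMed Central
tokens_per_byte = 0.2                       # The Pile paper, Table 7

pile_tokens = pile_academic_bytes * tokens_per_byte  # ~40B tokens already used
all_tokens = pile_tokens / 0.10  # if the Pile covers ~10% of the literature
new_tokens = all_tokens - pile_tokens                # ~360B genuinely new tokens

chinchilla_tokens = 1.4e12
print(f"Already in the Pile:  ~{pile_tokens / 1e9:.0f}B tokens")
print(f"Potential new influx: ~{new_tokens / 1e9:.0f}B tokens "
      f"(~{new_tokens / chinchilla_tokens:.0%} of Chinchilla's training data)")
```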

NickLaing @ 2023-03-28T11:01 (+7)

Wow, what a great answer, appreciate it!

aogara @ 2024-02-13T16:07 (+2)

Money can't continue scaling like this.

Or can it? https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-dollars-to-reshape-business-of-chips-and-ai-89ab3db0

titotal @ 2023-03-27T21:50 (+8)

First of all, great post, thanks for exploring this topic! 

So I'm a little confused about the definition here:

AI winter is operationalised as a drawdown in annual global AI investment of ≥50%

I would guess that the burst of the dot-com bubble meets this definition? But I wouldn't exactly call 2002-2010 an "internet winter": usage kept growing and growing, just with a better understanding of what you can and can't profit from. I think there's a good chance (>30%) of this particular definition of "AI winter" occurring, but I also reckon that if it happens, people will feel it's unfair to characterize it as such.

I think a more likely outcome is a kind of "AI autumn": Investment keeps coming at a steady rate, and lots and lots of people are using AI for the things it's good at, but the number of advancements slows significantly, and certain problems prove intractable, and the hype dies down. I think we've already seen this process happen for Autonomous Vehicles. I think this scenario is very likely. 

NickLaing @ 2023-03-28T08:25 (+4)

I would put the probability of a huge reduction in investment way higher than 30%: investment cycles boom and bust, as does the economy. Even a global recession or similar could massively reduce AI expenditure while AI development continued marching on at a similar or only slightly reduced rate.

On the other hand, the current crypto winter does match the OP's definition, with practical use of crypto declining along with investment.

In general though I agree with you that looking at investment figures isn't a robust way to define a "winter".

Erich_Grunewald @ 2023-03-28T17:47 (+2)

Thanks, that's a good observation -- you're right that this is a permissive operationalisation. I actually deliberately did that to be more "charitable" to Eden -- to say, "AI winter seems pretty unlikely even on these pretty conservative assumptions", but I should probably have flagged that more clearly. I agree that there are some scenarios where a 50% drawdown happens but there's no real winter worthy of the name.

Another way of putting this is, I thought I'd get pushback along the lines of "this is way too bullish on AI progress" (and I did get some of that, but not a lot), and instead got lots of pushback in the form of "this is way too bullish on AI winter". (Not talking about the EA forum here, but other places.)

I think a more likely outcome is a kind of "AI autumn": Investment keeps coming at a steady rate, and lots and lots of people are using AI for the things it's good at, but the number of advancements slows significantly, and certain problems prove intractable, and the hype dies down. I think we've already seen this process happen for Autonomous Vehicles. I think this scenario is very likely. 

Agree that this is a live possibility. (But I also don't think there's been a 50% drawdown in autonomous driving investment, so I don't think my operationalisation fails there.)

MaxRa @ 2023-03-27T22:16 (+4)

Nice post, found this pretty well written and convincing (though I already shared the bottom line, just less firmly). 

Random thoughts:

A severe extreme geopolitical tail event, such as a great power conflict between the US and China, may occur.

What type of great power conflict do you have in mind here? "Extreme tail event" makes it sound like you're thinking of a fairly large scale war, but great power conflict seems to refer to any military confrontation. E.g. I haven't at all wrapped my head around a military confrontation between China and the US over Taiwan yet, and Metaculus is at ~20% for 

Will armed conflicts between the Republic of China (Taiwan) and the People's Republic of China (PRC) lead to at least 100 deaths before 2026?

Also, I wonder if you have considered any potential craziness that happens conditional on development of TAI before 2030. E.g. say TAI is developed in 2027; maybe the plausible set of scenarios for 2028 and 2029 includes enough scenarios with a >50% decrease in AI funding that I might want to increase your bottom-line forecast?
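One way to make that adjustment explicit is a simple decomposition over the TAI-by-2030 condition. All the probabilities below are made up purely for illustration, not anyone's actual forecasts:

```python
# Toy decomposition of the point above. Every number here is an
# illustrative placeholder, not an actual forecast.

p_tai = 0.2                     # assumed P(TAI before 2030)
p_drawdown_given_tai = 0.3      # assumed P(>50% funding drawdown | TAI),
                                # e.g. via post-TAI crashes or regulation
p_drawdown_given_no_tai = 0.05  # assumed baseline drawdown probability

p_drawdown = (p_drawdown_given_tai * p_tai
              + p_drawdown_given_no_tai * (1 - p_tai))
print(f"P(>50% drawdown by 2030) = {p_drawdown:.0%}")  # 10% with these inputs
```

If the conditional term on the TAI branch is large enough, it can dominate the overall forecast even when the no-TAI baseline is low.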

constructive @ 2023-03-28T07:49 (+5)

(Uncertain) My guess would be that a global conflict would increase AI investment considerably, as (I think) R&D typically increases in wartime. And AI may turn out to be particularly strategically relevant.

NickLaing @ 2023-03-28T08:28 (+2)

Agreed. Looking historically as well, there's every reason to think that war is more likely to accelerate technology development. In this case, alignment focus is also likely to disappear completely if there is a serious war.

Dem drones will be unleashed with the most advanced AI software, safety be damned.

Erich_Grunewald @ 2023-03-28T18:16 (+4)

What type of great power conflict do you have in mind here? "Extreme tail event" makes it sound like you're thinking of a fairly large scale war, but great power conflict seems to refer to any military confrontation. E.g. I haven't at all wrapped my head around a military confrontation between China and the US over Taiwan yet, and Metaculus is at ~20% for 

Yeah, that's an interesting question. I guess what I had in mind here was the US and China basically destroying each other's fabs, or something along those lines (a compute shortage would make investment in AI labs less profitable, perhaps). But even that could increase investment as they strive to rebuild capacity? Maybe the extreme tail event that'd cause this is perpetual world peace happening!