If AI is in a bubble and the bubble bursts, what would you do?

By Remmelt @ 2024-08-19T10:56 (+28)

huw @ 2024-08-19T23:43 (+7)

To me, it would make sense to use the lull in tech lobbying + popular dissent to lock in the general regulatory agenda that EA has been pushing for (ex. controls on large models, pre-harm enforcement, international treaties)

Remmelt @ 2024-08-20T04:20 (+2)

What are you thinking about in terms of pre-harm enforcement? 

I’m thinking of advising premarket approval – a requirement to scope model designs around prespecified uses, with independent auditors vetting the safety tests and assessments.

Remmelt @ 2024-08-20T05:58 (+3)

To clarify for future reference: I think it’s likely (80%+) that at some point over the next 5 years there will be a large reduction in investment in AI and a corresponding crash in AI company stocks, and that both will persist for at least three months.

I.e. I think we are heading for an AI winter.
It is not sustainable for the industry to invest $600+ billion per year in infrastructure and teams in return for relatively little revenue and no resulting profit for the major AI labs.

At the same time, I think that within the next 20 years tech companies could both develop robotics that self-navigate across multiple domains and automate major sectors of physical work. That would put society on a path toward causing the total extinction of current life on Earth. We should do everything we can to prevent it.

Remmelt @ 2024-10-10T14:08 (+2)

To clarify for future reference: I think it’s likely (80%+) that at some point over the next 5 years there will be a large reduction in investment in AI and a corresponding crash in AI company stocks, and that both will persist for at least three months.

Update: I now think this is 90%+ likely to happen (counting from the original prediction date).

Remmelt @ 2024-08-23T00:35 (+2)

Igor Krawczuk, an AI PhD researcher, just shared more specific predictions:

“I agree with ed that the next months are critical, and that the biggest players need to deliver. I think it will need to be plausible progress towards reasoning, as in planning, as in the type of stuff Prolog, SAT/SMT solvers etc. do.

I'm 80% certain that this literally can't be done efficiently with current LLM/RL techniques (last I looked at neural comb-opt vs solvers, it was _bad_), the only hope being the kitchen sink of scale, foundation models, solvers _and_ RL

If OpenAI/Anthropic/DeepMind can't deliver on promises of reasoning and planning (Q*, Strawberry, AlphaCode/AlphaProof etc.) in the coming months, or if they try to polish more turds into gold (e.g., coming out with GPT-Reasoner, but only for specific business domains) over the next year, then I would be surprised to see the investments last to make it happen in this AI summer.”
https://x.com/TheGermanPole/status/1826179777452994657
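For readers unfamiliar with the kind of “reasoning” Krawczuk is pointing at: SAT solvers take a logical formula and search for a variable assignment that makes it true. A minimal brute-force sketch of that idea (the example formula is illustrative; real solvers like MiniSat or Z3 use far smarter search than exhaustive enumeration):

```python
from itertools import product

def brute_force_sat(clauses):
    """Return a satisfying assignment {var: bool}, or None if unsatisfiable.

    `clauses` is a CNF formula: a list of clauses, each a list of signed
    ints, where 3 means "variable 3 is True" and -3 means "variable 3 is
    False". This is exponential in the number of variables -- production
    solvers (DPLL/CDCL) prune the same search space aggressively.
    """
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        # A clause is satisfied if any of its literals is; the formula
        # is satisfied if every clause is.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
formula = [[1, -2], [2, 3], [-1, -3]]
print(brute_force_sat(formula))
```

The point of the contrast in the quote: this kind of search gives guaranteed, checkable answers, whereas current LLM/RL techniques approximate it without guarantees.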