[Linkpost] Situational Awareness - The Decade Ahead
By MathiasKB🔸 @ 2024-06-04T22:58 (+87)
This is a linkpost to https://situational-awareness.ai/
Leopold Aschenbrenner's newest series on Artificial Intelligence is really excellent. The series makes very strong claims, but I find them to be well-argued, with clear predictions. Curious to hear what people on this forum think.
The series' introduction:
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the willful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
Jan_Kulveit @ 2024-06-06T09:50 (+40)
(crossposted from twitter) Main thoughts:
1. Maps pull the territory
2. Beware what maps you summon
Leopold Aschenbrenner's series of essays is a fascinating read: there are a ton of locally valid observations and arguments. A lot of the content is the type of stuff mostly discussed in private. Many of the high-level observations are correct.
At the same time, my overall impression is that the set of maps sketched pulls toward existential catastrophe, and this is true not only for the 'this is how things can go wrong' part, but also for the 'this is how we solve things' part.
Leopold is likely aware of this angle of criticism, and deflects it with 'this is just realism' and 'I don't wish things were like this, but they most likely are'. I basically don't buy that claim.
Rebecca @ 2024-06-07T07:08 (+9)
Can you say more about how you think the solving things part pulls towards x-risk?
Linch @ 2024-06-07T08:53 (+31)
I'm not Jan, but I think (paraphrasing) "Superintelligence will give godlike power and might kill us all. Our solution is that the good guys should race to build the artificial god at breakneck speed first, and then hope to align it with duct tape and prayer" should not, frankly, be your first-resort strategy. If this becomes the US's or China's natsec community's first introduction to considerations around superintelligence or AGI or alignment etc., I think it will predictably increase x-risk by lodging the zero-sum (actually negative-sum) framing in people's heads before they stumble across other considerations.
David Mathers @ 2024-06-06T09:03 (+11)
Dan Hendrycks thinks Aschenbrenner is overestimating the rate of improvement in algorithmic efficiency (https://twitter.com/DanHendrycks/status/1798177460028346722) and that at the actual rate we shouldn't expect an intelligence "explosion" (https://twitter.com/DanHendrycks/status/1798390576922178004). (I think "explosion" is a bit vague here, and it would be more interesting to get a sketch from Hendrycks of the actual rate of improvement he predicts. But I guess that wouldn't fit in one tweet.)
Linch @ 2024-06-06T06:22 (+9)
An interesting and scary read. I really hope he's wrong!
I do agree with him on some points, like expecting and wanting governments to take over (see here and here).
OscarD @ 2024-06-06T09:17 (+8)
fyi for everyone interested in Leopold's report but intimidated by its length, I am currently writing a detailed summary and expect to post it to the Forum in the next day or two. I will update this comment once I have done so.
OscarD @ 2024-06-08T21:07 (+4)
I have now published my summary.
(tagging people who reacted 'helpful': @MvK @michel @David Mathers @SamiM @JWS @Ariel Simnegar @DC @Ben Snodin @Mo Putera @EffectiveAdvocate )