Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures

By Center for AI Safety @ 2023-05-30T09:06 (+427)

This is a linkpost to https://www.safe.ai/statement-on-ai-risk

Today, the AI Extinction Statement was released by the Center for AI Safety: a one-sentence statement jointly signed by a historic coalition of AI experts, professors, and tech leaders.

Geoffrey Hinton and Yoshua Bengio have signed, as have the CEOs of the major AGI labs (Sam Altman, Demis Hassabis, and Dario Amodei), as well as executives from Microsoft and Google (but notably not Meta).

The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

We hope this statement will bring AI x-risk further into the Overton window and open up discussion around AI's most severe risks. Given the growing number of experts and public figures who take risks from advanced AI seriously, we hope to improve epistemics by encouraging discussion and focusing public and international attention on this issue.


JulianHazell @ 2023-05-30T09:54 (+93)

Thank you to the CAIS team (and other colleagues) for putting this together. This is such a valuable contribution to the broader AI risk discourse.

Devin Kalish @ 2023-05-30T18:24 (+23)

I'm really heartened by this, especially by some of the names on here I independently admire who haven't been super vocal about the issue yet, like David Chalmers, Bill McKibben, and Audrey Tang. I also like certain aspects of this letter better than the FLI one. Since it focuses specifically on relevant public figures, rapid verification is easier and people are less overwhelmed by sheer numbers. And since it focuses on an extremely simple but extremely important statement, it's easier to get a broad coalition on board and for discourse about it to stay on topic. I liked the FLI one overall as well; I signed it myself and think it genuinely helped the discourse, but if nothing else this seems like a valuable supplement.

jackva @ 2023-05-30T15:02 (+18)

Very cool!

I am surprised you did not mention climate since this is the one major risk where we are doing a good job (i.e. if we were paying as much attention to AI as to future pandemics and nuclear risk, that wouldn't be very reassuring, as these are major risks that are not well addressed and are massively underresourced compared to their importance).

ShayBenMoshe @ 2023-05-30T17:13 (+51)

I, for one, think it is good that climate change was not mentioned. Not necessarily because there are no analogies and lessons to be drawn, but rather because it can more easily be misinterpreted. I think the kinds of actions and risks involved are much more similar to bio and nuclear, in that there are far fewer actors and, at least for now, AI is much less integrated into day-to-day life. Moreover, in many scenarios the risk itself is of a more abrupt and binary nature (though of course not completely so), rather than a very long and gradual process. I'd be worried that comparing AI safety to climate change would be easily misinterpreted or dismissed on irrelevant grounds.

Linch @ 2023-05-30T23:07 (+26)

At least in the US, I'd worry that comparisons to climate change will get you attacked by ideologues from both of the main political sides (vitriol from the left because they'll see it as evidence that you don't care enough about climate change, vitriol from the right because they'll see it as evidence that AI risk is as fake/political as climate change).

Neel Nanda @ 2023-05-31T22:20 (+7)

IMO it was tactically correct not to mention climate. The point of the letter is to get wide support, and I think many people would not be willing to put AI x-risk on par with climate.

jackva @ 2023-05-31T22:40 (+4)

Yeah, I can see that, though it is a strange world in which we treat nuclear and pandemics as second-order risks.

Gabriel Mukobi @ 2023-05-30T16:45 (+6)

climate since this is the one major risk where we are doing a good job

Perhaps (at least in the United States) we haven't been doing a very good job on the communication front for climate change: there are many social circles where climate change denial has been normalized, and the issue has become very politically polarized, with many politicians turning climate change from an empirical scientific question into a political "us vs. them" problem.

Linch @ 2023-05-31T05:03 (+5)

since this is the one major risk where we are doing a good job

What about ozone layer depletion?

jackva @ 2023-05-31T10:40 (+4)

It's not a current major risk, but it also turned out to be trivially easy to solve with minimal societal resources (technological substitution was already available when it was regulated, and regulation only needed to cover a couple of hundred factories in select countries), so it does not feel like it belongs in the class of major risks.

Linch @ 2023-05-31T23:59 (+4)

I disagree; I think major risks should be defined in terms of their potential impact sans intervention, rather than counting tractability against them.

Incidentally, there was some earlier speculation about what might have happened counterfactually if we had invented CFCs a century earlier, which you might find interesting.

jackva @ 2023-06-01T06:55 (+3)

I think we're talking past each other.

While I also disagree that we should ignore tractability for the purpose you indicate, the main point here is more that choosing the ozone layer as an analogy would suggest the problem is trivially easy, which doesn't really help with solving it, and it already seems extremely likely that AI risk is much trickier than ozone layer depletion.

Chris Leong @ 2023-05-30T12:30 (+15)

This is exciting!

Do you have any thoughts on how the community should be following up on this?

Eli Rose @ 2023-05-30T23:48 (+13)

This made the front page of Hacker News. Here are the comments.

The most common pushback (and the first two comments, as of now) is from people who think this is an attempt at regulatory capture by the AI labs, though there's a good deal of pushback to that view and (I thought) some surprisingly high-quality discussion.

Daniel_Eth @ 2023-06-01T23:27 (+15)

It seems relevant that most of the signatories are academics, where this criticism wouldn't make sense. @HaydnBelfield created a nice graphic here demonstrating this point.

Guy Raveh @ 2023-06-02T09:26 (+4)

I've also been making this point to people claiming financial interests. On the other hand, the tweet Haydn replied to actually makes another good point that does apply to professors: diverting attention from societal risks that they're contributing to but could help solve, toward x-risk where they can mostly sign such statements and then go "🤷🏼‍♂️", shields them from having to change anything in practice.

Jamie_Harris @ 2023-06-10T06:23 (+5)

In the vein of "another good point" made in public reactions to the statement, here's a passage from an article I read in The Telegraph:

"Big tech’s faux warnings should be taken with a pinch of salt, for incumbent players have a vested interest in barriers to entry. Oppressive levels of regulation make for some of the biggest. For large companies with dominant market positions, regulatory overkill is manageable; costly compliance comes with the territory. But for new entrants it can be a killer."

With hindsight this seems like an obvious factor at play, but I hadn't considered it before reading it here. This doesn't address Daniel's / Haydn's point though, of course.

https://www.telegraph.co.uk/business/2023/06/04/worry-climate-change-not-artificial-intelligence/

ClimateDoc @ 2023-05-31T17:29 (+2)

The most common pushback (and the first two comments, as of now) is from people who think this is an attempt at regulatory capture by the AI labs

This is also the case in the comments on this FT article (paywalled, I think), which I guess indicates how less techy people may tend to see it.

Lizka @ 2023-05-30T15:21 (+11)

Note that this was covered in the New York Times (paywalled) by Kevin Roose. I found it interesting to skim the comments. (Thanks for working on this, and sharing!) 

MaxRa @ 2023-05-30T13:03 (+7)

This is so awesome, thank you so much, I'm really glad this exists. The recent shift toward experts publicly worrying about AI x-risks has been a significant update for me in terms of hope that humanity avoids losing control to AI.

(but notably not Meta)

Wondering how much I should update from Meta and other big tech firms not being represented on the list. Did you reach out to the signing individuals via your networks, and maybe the network didn't reach some orgs as much? Maybe there are company policies in place that prevent employees at some firms from signing the statement? And is there something specific about Meta that I can read up on (besides Yann LeCun's intransigence on Twitter :P)?

Jörg Weiß @ 2023-05-30T23:09 (+4)

I'm not sure we can dismiss Yann LeCun's statements so easily, mostly because I do not understand how Meta works. How influential is he there? Does he set general policy around things like AI risk?

I feel there is this unhealthy dynamic where he acts as the leader of some kind of "anti-doomerism", and I'm under the impression that he and his Twitter crowd do not engage with the arguments of the debate at all. I'm pretty much looking at this from the outside, but LeCun's arguments seem to be far behind. If he drives Meta's AI safety policy, I'm honestly worried about that. Meta just isn't an insignificant player.

ElliotJDavies @ 2023-05-31T09:53 (+5)

Huge appreciation to the CAIS team for the work put in here.

Larks @ 2023-05-30T10:46 (+5)

Great work guys, thanks for organising this!

Max Görlitz @ 2023-06-01T11:51 (+3)

I'm mildly surprised that Elon Musk hasn't signed, given that he did sign the FLI 6-month pause open letter and has been vocal about being worried about AI x-risk for years.

Probably the simplest explanation for this is that the organizers of this statement haven't been able to reach him, or he just hasn't had time yet (although he should have heard about it by now?). 

Erich_Grunewald @ 2023-06-01T16:08 (+8)

I reckon there's a pretty good chance he didn't sign because he wasn't asked, because he's a controversial figure.

Max Görlitz @ 2023-06-02T08:31 (+1)

Yea, that could be the case, although I assume having Elon Musk sign could have generated 2x the publicity. Most news outlets seem to jump on everything he does. 

Not sure what the tradeoff between attention and controversy is for such a statement. 

MaxRa @ 2023-06-02T08:55 (+4)

Most news outlets seem to jump on everything he does.

That's where my thoughts went too: maybe he and/or CAIS thought the statement would have a higher impact if reporting focused on other signatories. That Musk thinks AI is an x-risk seems to be fairly public knowledge anyway, so there's no big gain here.

Peter S. Park @ 2023-06-01T02:04 (+3)

Truly brilliant coalition-building by CAIS and collaborators. It is likely that the world has become a much safer place as a result. Congratulations!