The Overton Window widens: Examples of AI risk in the media

By Akash @ 2023-03-23T17:10 (+112)

I sometimes talk to people who are nervous about expressing concerns that AI might overpower humanity. The worry goes: it’s a weird belief, it might look too strange to talk about publicly, and people might not take us seriously.

How weird is it, though? Some observations are collected in the Appendix below.

Takeaway: We live in a world where mainstream news outlets, famous people, and the people who are literally leading AI companies are talking openly about AI x-risk.

I’m not saying that things are in great shape, or that these journalists, famous people, and AI executives have things under control. Nor am I saying that all of this messaging has been high-quality or high-fidelity, or that talking about AI risk never carries reputational costs.

But next time you’re assessing how weird you might look when you openly communicate about AI x-risk, or how far outside the Overton Window it might be, remember that some of your weird beliefs have been profiled by major news outlets. And remember that some of your concerns have been echoed by people like Bill Gates, Stephen Hawking, and the people leading the companies that are literally trying to build AGI.

I’ll conclude with a somewhat more speculative interpretation: short-term and long-term risks from AI systems are becoming more mainstream. This is likely to keep happening, whether we want it to or not. The Overton Window is shifting (and in some ways, it’s already wider than it may seem). 

Appendix: Examples of AI risk arguments in the media/mainstream

I spent about an hour looking for examples of AI safety ideas in the media. I also asked my Facebook friends and some Slack channels. See below for examples. Feel free to add your own examples as comments.

Articles

This Changes Everything by Ezra Klein (NYT)

Why Uncontrollable AI Looks More Likely Than Ever (TIME Magazine)

The New AI-Powered Bing Is Threatening Users. That’s No Laughing Matter (TIME Magazine)

DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution (TIME Magazine)

AI ‘race to recklessness’ could have dire consequences, tech experts warn in new interview (NBC)

Elon Musk, who co-founded firm behind ChatGPT, warns A.I. is ‘one of the biggest risks’ to civilization (CNBC)

Elon Musk warns AI ‘one of biggest risks’ to civilization during ChatGPT’s rise (NY Post)

AI can be racist, sexist and creepy. What should we do about it? (CNN)

Are we racing toward AI catastrophe? By Kelsey Piper (Vox)

The case for slowing down AI By Sigal Samuel (Vox)

The Age of AI has begun by Bill Gates

Stephen Hawking warns artificial intelligence could end mankind (BBC). Note this one is from 2014; all the others are recent.

Quotes from executives at AI labs

Sam Altman, 2015

Sam Altman, 2023

Demis Hassabis, 2023

Anthropic, 2023

Miscellaneous

I am grateful to Zach Stein-Perlman for feedback, as well as several others for pointing out relevant examples. 

Recommended: Spreading messages to help with the most important century


Otto @ 2023-03-24T00:04 (+18)

Crossposting a comment: As co-author of one of the mentioned pieces, I'd say it's really great to see the AGI x-risk message going mainstream. It isn't going nearly fast enough, though. Some (Hawking, Bostrom, Musk) have already been speaking out about the topic for close to a decade. So far, that hasn't been enough to change common understanding. Those, such as myself, who hope that some form of coordination could save us should give all they have to make this go faster. Additionally, those who think regulation could work should develop robust regulation proposals, which are currently lacking. And those who can should work on international coordination, which is also currently lacking.

There is a lot of work to be done. But the good news is that the window of opportunity is opening, and a lot of people who currently aren't working on this could be. This could be a path to victory.

MaxRa @ 2023-03-24T12:05 (+9)

Relatedly, I was pretty surprised by the results of this Twitter poll by Lex Fridman from yesterday:

Greg_Colbourn @ 2023-03-24T14:02 (+4)

Wow. I know a lot of his audience are technophiles, but that is a pretty big sample size!

Greg_Colbourn @ 2023-03-24T14:06 (+3)

Would be good to see a breakdown of 1-10 into 1-5 and 5-10 years. And he should also do one on x-risk from AI (especially aimed at all those who answered 1-10 years).

DC @ 2023-03-24T15:51 (+2)

That's a huge temperature shift!

SebastianSchmidt @ 2023-04-05T00:11 (+1)

The timelines might be overly short due to the recent advancements and recency bias (it would be interesting to run the poll again in a few weeks), but that's a massive sample size!

Geoffrey Miller @ 2023-03-23T22:10 (+6)

Akash - thanks for the helpful compilation of recent articles and quotes. I think you're right that the Overton window is broadening a bit more to include serious discussions of AI X-risk. (BTW, for anybody who's familiar with contemporary Chinese culture, I'd love to know whether there are parallel developments in Chinese news media, social media, etc.)

The irony here is that the general public has for many decades seen depictions of AI X-risk in some of the most popular science fiction movies, TV shows, and novels ever made -- including huge global blockbusters, such as 2001: A Space Odyssey (1968), The Terminator (1984), and Avengers: Age of Ultron (2015). But I guess most people compartmentalized those cultural touchstones into 'just science fiction' rather than 'somewhat over-dramatized depictions of potential real-world dangers'? 

My suspicion is that lots of 'wordcel' mainstream journalists who didn't take science fiction seriously do tend to take pronouncements from tech billionaires and top scientists seriously. But, IMHO, that's quite unfortunate, and it reveals an important failure mode of modern media/intellectual culture -- which is to treat science fiction as if it's trivial entertainment, rather than one of our species' most powerful ways to explore the implications of emerging technologies. 

One takeaway: when EAs are discussing these issues with people, it might be helpful to get a sense of their views on science fiction -- e.g. whether they lean towards dismissing emerging technologies as 'just science fiction', or whether they lean towards taking them more seriously because science fiction has taken them seriously. For example, do they treat 'Ex Machina' (2014) as a reason for dismissing AI risks, or as a reason for understanding AI risks more deeply? 

In public relations and public outreach, it's important to 'know one's audience' and to 'target one's market'; I think this dimension of 'how seriously people take science fiction' is probably a key individual-differences trait that's worth considering when doing writing, interviews, videos, podcasts, etc.

levin @ 2023-03-23T18:02 (+6)

A couple more entries from the last couple of days: Kelsey Piper's great Ezra Klein podcast appearance and, less importantly but closer to home, The Crimson's positive coverage of HAIST.

Darren McKee @ 2023-03-24T11:07 (+4)

Thanks for the compilation! This might be helpful for the book I'm writing.  
One of my aspirations was to throw a brick through the Overton window regarding AI safety, but things have already changed, with more and more material like what you've listed coming out. 

JP Addison @ 2023-03-30T13:14 (+3)

I would encourage anyone interested in how the discussion is shifting to listen to Sam Altman on the Lex Fridman podcast. It is simply astonishing to me how half of their talking points are basically longtermist-style talking points. And yet!

Sam Bogerd @ 2023-03-25T19:48 (+1)

It might be helpful to have a list of these articles on a separate website somewhere; it could be a good resource to link to if it is unconnected to EA / LW.