The Overton Window widens: Examples of AI risk in the media
By Akash @ 2023-03-23T17:10 (+112)
Otto @ 2023-03-24T00:04 (+18)
Crossposting a comment: As co-author of one of the mentioned pieces, I'd say it's really great to see the AGI x-risk message mainstreaming. It isn't going nearly fast enough, though. Some (Hawking, Bostrom, Musk) have been speaking out about the topic for close to a decade. So far, that hasn't been enough to change common understanding. Those, such as myself, who hope that some form of coordination could save us should give all they have to make this go faster. Additionally, those who think regulation could work should work on robust regulation proposals, which are currently lacking. And those who can should work on international coordination, which is also currently lacking.
A lot of work to be done. But the good news is that the window of opportunity is opening, and a lot of people could work on this who currently aren't. This could be a path to victory.
MaxRa @ 2023-03-24T12:05 (+9)
Relatedly, I was pretty surprised by the results of this Twitter poll by Lex Fridman from yesterday:
Greg_Colbourn @ 2023-03-24T14:02 (+4)
Wow. I know a lot of his audience are technophiles, but that is a pretty big sample size!
Greg_Colbourn @ 2023-03-24T14:06 (+3)
Would be good to see a breakdown of 1-10 into 1-5 and 5-10 years. And he should also do one on x-risk from AI (especially aimed at all those who answered 1-10 years).
DC @ 2023-03-24T15:51 (+2)
That's a huge temperature shift!
SebastianSchmidt @ 2023-04-05T00:11 (+1)
The timelines might be overly short due to recent advancements and recency bias (e.g., it would be interesting to see this again in a few weeks), but that's a massive sample size!
Geoffrey Miller @ 2023-03-23T22:10 (+6)
Akash - thanks for the helpful compilation of recent articles and quotes. I think you're right that the Overton window is broadening a bit more to include serious discussions of AI X-risk. (BTW, for anybody who's familiar with contemporary Chinese culture, I'd love to know whether there are parallel developments in Chinese news media, social media, etc.)
The irony here is that the general public has for many decades seen depictions of AI X-risk in some of the most popular science fiction movies, TV shows, and novels ever made -- including huge global blockbusters, such as 2001: A Space Odyssey (1968), The Terminator (1984), and Avengers: Age of Ultron (2015). But I guess most people compartmentalized those cultural touchstones into 'just science fiction' rather than 'somewhat over-dramatized depictions of potential real-world dangers'?
My suspicion is that lots of 'wordcel' mainstream journalists who didn't take science fiction seriously do tend to take pronouncements from tech billionaires and top scientists seriously. But, IMHO, that's quite unfortunate, and it reveals an important failure mode of modern media/intellectual culture -- which is to treat science fiction as if it's trivial entertainment, rather than one of our species' most powerful ways to explore the implications of emerging technologies.
One takeaway might be that when EAs are discussing these issues with people, it could be helpful to get a sense of their views on science fiction -- e.g. whether they lean towards dismissing emerging technologies as 'just science fiction', or whether they lean towards taking them more seriously because science fiction has taken them seriously. For example, do they treat 'Ex Machina' (2014) as a reason for dismissing AI risks, or as a reason for understanding AI risks more deeply?
In public relations and public outreach, it's important to 'know one's audience' and to 'target one's market'; I think this dimension of 'how seriously people take science fiction' is probably a key individual-differences trait that's worth considering when doing writing, interviews, videos, podcasts, etc.
levin @ 2023-03-23T18:02 (+6)
A couple more entries from the last couple of days: Kelsey Piper's great appearance on the Ezra Klein podcast, and, less important but closer to home, The Crimson's positive coverage of HAIST.
Darren McKee @ 2023-03-24T11:07 (+4)
Thanks for the compilation! This might be helpful for the book I'm writing.
One of my aspirations was to throw a brick through the Overton window regarding AI safety, but things have already changed, with more and more stuff coming out like what you've listed.
JP Addison @ 2023-03-30T13:14 (+3)
I would encourage anyone interested in how the discussion is shifting to listen to Sam Altman on the Lex Fridman podcast. It is simply astonishing to me how half their talking points are basically longtermist-style talking points. And yet!
Sam Bogerd @ 2023-03-25T19:48 (+1)
It might be helpful to have a list of these articles on a separate website somewhere; it could be a good resource to link to if it is unconnected to EA / LW.