Why aren't we promoting social media awareness of x-risks?
By Max Niederman🔸 @ 2025-06-09T14:22 (+8)
Videos like this one by Kurzgesagt show that it is possible to present x-risks to a more general social media audience. It seems to me that content like this, paired with a concrete call to action like contacting your representative or donating to some org, could be very high impact.
There are also quite a few influencers with tech-savvy audiences who already dislike AI. Recruiting them to help with AI x-risk seems potentially very valuable.
Note that, although I've focused on AI x-risk, I think this is more generally applicable. If we want to convince people that x-risks are a problem, I think social media is probably the fastest and cheapest way to do that.
MichaelDickens @ 2025-06-09T18:26 (+12)
Some people are promoting social media awareness of x-risks, for example that Kurzgesagt video, which was funded by Open Philanthropy[1]. There's also Doom Debates, Robert Miles's YouTube channel, and some others. There are some media projects on Manifund too, for example this one.
If your question is "why aren't people doing more of that sort of thing?", then yeah, that's a good question. If I were the AI Safety Funding Czar, I would be allocating a bigger budget to media projects (both social media and traditional media).
There are two arguments against giving marginal funding to media projects that I actually believe:
- My guess is that public protests are more cost-effective right now, because (a) they're more neglected (b) they naturally generate media attention, and perhaps (c) they are more dramatic which leads people to take the AI x-risk problem more seriously.
- I also expect some kinds of policy work to be more cost-effective. There's already a lot of policy research happening but I think we need more (a) people talking honestly to policymakers about x-risk and (b) writing legislation targeted at reducing x-risk. Policy has the advantage that you don't need to change as many minds to have a large impact, but it has the disadvantage that those minds are particularly hard to change—a huge chunk of their job is listening to people saying "please pay attention to my issue", so you have a lot of competition.
There are other arguments that I don't believe, although I expect some people have arguments that have never even occurred to me. The main arguments I can think of that I don't find persuasive are:
- It's hopeless to try to make AI safer via public opinion / the people developing AI don't care about public opinion.
- We should mainly fund technical research instead, e.g. because the technical problems in AI safety are more tractable.
- Public-facing messages will inevitably be misunderstood and distorted and we will end up in a worse place than where we started.
- If media projects succeed, then we will get regulations that slow down AI development, but we need to go as fast as possible to usher in the glorious transhumanist future or to beat China or whatever.
I don't know for sure that that specific video was part of the Open Philanthropy grant, but I'm guessing it was based on its content. ↩︎
Kamil Hasenfeller (K-1000) @ 2025-06-12T17:33 (+1)
There's a difference between long-termism and OVERDOING long-termism. In-person protest works because it generates attention and builds people's capacity to organize.
You're never going to be prepared to protest if you don't protest.