Protesting Now for AI Regulation Might Be More Impactful than AI Safety Research

By Nicolae @ 2025-04-13T02:11 (+64)

After reading the comments on this EA Reddit post about the recent 80,000 Hours newsletter, and similar stories on the EA Forum about how difficult it is to secure a job in AI safety even with relevant credentials and experience, I remembered an AMA with Peter Singer. When asked what he'd do today if he were in his twenties and wanted to significantly help the world, Singer responded: “I'm not sure that I'd be a philosopher today. When I was in my twenties, practical ethics was virtually a new field, and there was a lot to be done. […] Now there are many very good people working in practical ethics, and it is harder to have an impact. Perhaps I would become a full-time campaigner, either for effective altruism in general, or more specifically, against factory farming”.

This got me thinking: isn't AI safety facing a similar situation? There are already many skilled and highly capable people working directly in AI safety research and policy, making it increasingly difficult for newcomers to have a significant impact. Hundreds of books and thousands of papers have already been written on the topic, and, having done a fair amount of reading on autonomous weapons myself, I can tell you that much of it is rehashed material, with the occasional novel idea here and there.

If you've spent months, or even years, unsuccessfully trying to land an AI safety role, consider for a moment that you're essentially competing with hundreds of other skilled AI researchers to contribute to papers or reports that might, at best, result in minor amendments to policies that are largely drafted but not implemented. In many ways, the bulk of the urgent research has probably already been done, but without implementation, it remains worthless.

AI policy research will likely accelerate over the next few years, not only because of the highly skilled and motivated people rushing in, but also because AI itself will increasingly assist with policymaking. On the other hand, AI won't take to the streets with banners, chanting and demanding its own regulation.

Oops! It looks like it already is :) https://www.stopkillerrobots.org/news/new-european-poll-shows-73-favour-banning-killer-robots

For all the AI safety laypeople, wouldn't it make more sense to focus on activism, which is currently almost nonexistent, and begin protesting Jody Williams style, the same way she and the International Campaign to Ban Landmines successfully campaigned in the 1990s, leading to the 1997 Ottawa Treaty banning anti-personnel mines and ultimately earning the Nobel Peace Prize?

[Image: Jody Williams demonstrating against war, Washington, DC, 2003. Courtesy Linda Panetta, Optical Realities Photography]

For all the AI safety researchers, why not take to the streets as well? Knowledgeable voices are urgently needed beyond academia, think tanks, and AI labs.


Peter @ 2025-04-13T23:53 (+14)

I'm not sure the policies have been mostly worked out but not implemented. Figuring out technical AI governance solutions seems like a big part of what is needed. 

bhrdwj🔸 @ 2025-04-17T08:24 (+3)

I think there's an intersection between the PauseAI kind of stuff and a great-powers reconciliation movement.

Most of my scenario-forecast likelihood mass for near-term mass-death scenarios sits at this intersection of great-power cold wars, proxy wars in the Global South, AI brinkmanship, and asymmetrical biowarfare.

Maybe combining PauseAI with a 🇺🇸/🇨🇳 reconciliation and collaboration movement would be a more credible orientation.