The International PauseAI Protest: Activism under uncertainty

By Joseph Miller, Holly Elmore ⏸️ 🔸, joepio @ 2023-10-12T17:36 (+129)

This post is an attempt to summarize the crucial considerations for an AI pause and for AI pause advocacy. It is also a promotion for the International PauseAI protest on 21 October, the biggest AI protest ever, held in 7 countries on the same day. The aim of this post is to present an unbiased view, though obviously I may not have succeeded.

You can check out the EA Forum event page for the protest here.

Seven Crucial Considerations for Pausing AI

Under the default scenario, is the risk from AI acceptable?

This is the only question where I'm confident enough to say the answer is clearly "no". The remaining questions concern whether a pause would decrease the risk.


How dangerous is hardware overhang in a pause?


How much does AI capability progress help alignment?


Can we pause algorithmic capabilities research while allowing alignment research?


Is there a good, plausible political mechanism through which to pause?


Does open-source AI make an effective pause impossible?


A meta consideration that depends on the answers to the previous questions:

Are the benefits of more time for research and regulation greater than the costs of a pause?

If alignment research just needs more time to progress and we're currently on the cusp of AGI, then it might be worth risking hardware overhang in order to gain that time. But if alignment research progress depends on having advanced AI, then a hardware overhang could mean we miss the most crucial period for alignment research.


Six Crucial Considerations for Pause Protests

While it is important to consider whether a pause would be good, this does not necessarily determine whether pause advocacy and protests are good. For example, a pause might be bad due to hardware overhang, yet calling for a pause might still be good because it would force companies to try harder on safety. Conversely, a pause might be good because it gives us time to do alignment research, yet pause advocacy might make companies hostile to x-risk concerns.


Will pause advocacy create polarization around the issue of AI safety?


Will pause advocacy harm other AI safety efforts?


How effective is advocacy generally?


Is pause advocacy more robust to safety-washing than other efforts?


Is pause advocacy worth the time of the AI safety community (assuming it is useful)?


Another meta consideration:

Is it bad to apply consequentialist reasoning to public advocacy?


Perhaps PauseAI protests are not really about advocating for a pause so much as communicating the severity of the risk from AI. In that case, it might be better for AI activists to focus on a different message. On the other hand, perhaps "Pause AI" is the most accurate way to convey our beliefs when we are trying to send a message to the entire world.

What you cannot do is not decide. You will either attend the protest or not.

The International PauseAI protest will be held on 21 October in 7 countries.

Prediction markets

Thanks to Gideon Futerman for feedback on this post.


SiebeRozendal @ 2023-10-16T13:57 (+4)

What do you mean in the first question by "the default scenario"?

Joseph Miller @ 2023-10-19T11:53 (+5)

I mean something like "the scenario where there is no pause and also no other development that currently seems very unlikely and changes the level of risk dramatically (e.g. a massive breakthrough in human brain emulation next year)."

SummaryBot @ 2023-10-12T21:03 (+2)

Executive summary: The PauseAI protest movement aims to raise awareness of AI safety issues, but the efficacy and impact of advocating for an AI pause remain highly uncertain.

Key points:

  1. There are many open questions around whether pausing AI progress would reduce existential risk, or exacerbate it through hardware overhang. The balance of considerations is unclear.
  2. Pause advocacy could potentially polarize the issue, harm existing AI safety efforts, and negatively impact the reputation of the field. But it may also expand the Overton window and provide social license for useful regulation. The impact is uncertain.
  3. The effectiveness of social movements and advocacy is debatable. AI safety issues may be fundamentally different from other causes.
  4. Participating in protests may be costly for individuals. The time spent may not be an efficient use of the community's resources.
  5. Applying consequentialist reasoning to advocacy has drawbacks. But choices must still be made about whether to participate.
  6. The core motivation seems to be communicating the severity of AI risk. "Pause AI" may or may not be the optimal message for this goal.



This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.