Debate series: should we push for a pause on the development of AI?

By Ben_West @ 2023-09-08T16:29 (+252)

In March of this year, 30,000 people, including leading AI figures like Yoshua Bengio and Stuart Russell, signed a letter calling on AI labs to pause the training of AI systems. While it seems unlikely that this letter will succeed in pausing the development of AI, it did draw substantial attention to slowing AI as a strategy for reducing existential risk.

While initial work has been done on this topic (this sequence links to some relevant work), many areas of uncertainty remain. I’ve asked a group of participants to discuss and debate various aspects of the value of advocating for a pause on the development of AI on the EA Forum, in a format loosely inspired by Cato Unbound.

Responses from Forum users are encouraged; you can share your own posts on this topic or comment on the posts from participants. You’ll be able to find the posts by looking at this tag (remember that you can subscribe to tags to be notified of new posts). 

I think it is unlikely that this debate will result in a consensus agreement, but I hope that it will clarify the space of policy options, why those options may be beneficial or harmful, and what future work is needed.

People who have agreed to participate

These are in random order, and they’re participating as individuals, not representing any institution:

  1. David Manheim (ALTER)
  2. Matthew Barnett (Epoch AI)
  3. Zach Stein-Perlman (AI Impacts)
  4. Holly Elmore (AI pause advocate)
  5. Buck Shlegeris (Redwood Research)
  6. Anonymous researcher (Major AI lab)
  7. Anonymous professor (Anonymous University)
  8. Rob Bensinger (Machine Intelligence Research Institute)
  9. Nora Belrose (EleutherAI)
  10. Thomas Larsen (Center for AI Policy)
  11. Quintin Pope (Oregon State University)

Scott Alexander will be writing a summary/conclusion of the debate at the end.

Thanks to Lizka Vaintrob, JP Addison, and Jessica McCurdy for help organizing this, and Lizka (+ Midjourney) for the picture.


jackva @ 2023-09-17T15:13 (+36)

Will someone write about the symbolic importance of the ask for a pause?

Right now, most of what has been written here on this seems focused on the techno-economics as if a pause were only about slowing down a technical process.

Asking for a pause is also a signal that one is extremely serious about the risk (and by the same token, not asking for a pause but speaking about existential risks seems very hard to communicate to a broader public).

NickLaing @ 2023-09-17T15:57 (+12)

I completely agree. These posts focus only on whether a pause itself would be good or not, and not on whether a campaign for a pause (or a similar campaign with a different purpose) could be positive EV considering all outcomes of the campaign.

jackva @ 2023-09-18T11:46 (+6)

I think Holly Elmore will probably address this, at least she did in her Filan podcast appearance: https://sites.libsyn.com/438081/12-holly-elmore-on-ai-pause.

Holly_Elmore @ 2023-09-19T00:59 (+15)

Yes, my piece will address this and why Pause advocacy can work.

Otto @ 2023-09-15T23:32 (+23)

It's definitely good to think about whether a pause is a good idea. Together with Joep from PauseAI, I wrote down my thoughts on the topic here.

Since then, I have been thinking a bit on the pause and comparing it to a more frequently mentioned option, namely to apply model evaluations (evals) to see how dangerous a model is after training.

I think the difference between the supposedly more reasonable approach of evals and the supposedly more radical approach of a pause is actually smaller than it seems. Evals aim to detect dangerous capabilities. What will need to happen when those evals find that, indeed, a model has developed such capabilities? Then we'll need to implement a pause. Evals or a pause is mostly a choice about timing, not a fundamentally different approach.

With evals, however, we'll move precisely to the brink, look straight into the abyss, and then we plan to halt at the last possible moment. Unfortunately, though, we're in thick mist and we can't see the abyss (this is true even when we apply evals, since we don't know which capabilities will prove existentially dangerous, and since an existential event may already occur before running the evals).

And even if we knew where to halt: we'd need to make sure that the leading labs practically succeed in pausing themselves (there may be thousands of people working there), that the models aren't leaked, that we implement the necessary policy, that we sign international agreements, and that we gain support from the general public. This is all difficult work that will realistically take time.

Pausing isn't as simple as pressing a button; it's a social process. No one knows how long that process of getting everyone on the same page will take, but it could be quite a while. Is it wise to start that process at the last possible moment, namely when the evals turn red? I don't think so. The sooner we start, the higher our chance of survival.

Also, there's a separate point that I think is not sufficiently addressed yet: we don't know how to implement a pause beyond a few years duration. If hardware and algorithms improve, frontier models could democratize. While I believe this problem can be solved by international (peaceful) regulation, I also think this will be hard and we will need good plans (hardware or data regulation proposals) for how to do this in advance. We currently don't have these, so I think working on them should be a much higher priority.

Rafael Harth @ 2023-09-16T11:00 (+5)

My gut reaction is that the eval path is strongly inferior because it relies on a lot more conjunction. People need to still care about it when models get dangerous, it needs to still be relevant when they get dangerous, and the evaluations need to work at all. Compared to that, a pause seems like a more straightforward good thing, even if it doesn't solve the problem.

Lukas_Gloor @ 2023-09-19T10:37 (+8)

I agree that immediate pause or at least a slowdown ("moving bright line of a training compute cap") is better/safer than a strategy that says "continue until evals find something dangerous, then hit the brakes everywhere."

I also have some reservations about evals: I think they can easily make things worse if they're implemented poorly (see my note here).

That said, evals could complement the pause strategy. For instance:

(1) The threshold for evals to trigger further slowing could be low. If the evals have to unearth even just rudimentary deception attempts rather than something already fairly dangerous, it may not be too late when they trigger.

(2) Evals could be used in combination with a pause (or slowdown) to greenlight new research. For instance, maybe select labs are allowed to go over the training compute cap if they fulfill a bunch of strict safety and safety-culture requirements, if they use the training budget increase for alignment experiments, and if they have evals set up to show that previous models of the same kind are behaving well in all relevant respects.

So, my point is we shouldn't look at this as "evals as an idea are inherently in tension with pausing ASAP."

tommcgrath @ 2023-09-16T21:13 (+3)

There's an important difference between pausing and evals: evals gets you loads of additional information. We can look at the results of the evals, discuss them and determine in what ways a model might have misuse potential (and thus try to mitigate it) or if the model is simply undeployable. If we're still unsure, we can gather more data and additionally refine our ability to perform and interpret evals.

If we (i.e. the ML community) repeatedly do this we build up a better picture of where our current capabilities lie, how evals relate to real-world impact and so on. I think this makes evals much better, and the effect will compound over time. Evals also produce concrete data that can convince skeptics (such as me - I am currently pretty skeptical of much regulation but can easily imagine eval results that would convince me). To stick with your analogy, each time we do evals we thin out the fog a bit, with the intention of clearing it before we reach the edge, as well as improving our ability to stop.

Holly_Elmore @ 2023-09-19T02:14 (+10)

To stick with your analogy, each time we do evals we thin out the fog a bit, with the intention of clearing it before we reach the edge, as well as improving our ability to stop.

How does doing evals improve your ability to stop? What concrete actions will you take when an eval shows a dangerous result? Do none of them overlap with pausing?

Lukas_Gloor @ 2023-09-19T10:47 (+7)

Evals showing dangerous capabilities (such as how to build a nuclear weapon) can be used to convince lawmakers that this stuff is real and imminent.

Of course, you don't need that if lawmakers already agree with you – in that case, it's strictly best to not tinker with anything dangerous.

But assuming that many lawmakers will remain skeptical, one function of evals could be "drawing out an AI warning shot, making it happen in a contained and controlled environment where there's no damage."

Of course, we wouldn't want evals teams to come up with AI capability improvements, so evals shouldn't become dangerous AI gain-of-function research. Still, it's a spectrum because even just clever prompting or small tricks can sometimes unearth hidden capabilities that the model had to begin with, and that's the sort of thing that evals should warn us about.

Chris Leong @ 2023-09-09T02:55 (+23)

I'm really happy to see this happening.

In fact, I'd like to see more things along these kinds of lines.

While there's a lot of good discussion on this forum, we aren't always going to end up discussing the most important topics organically. So I think it's often helpful for CEA/the mods to occasionally direct the attention of the forum towards the discussion topics that will move us forward as a community.

If we wanted to go beyond this, then I think it would be quite valuable to find two people with opposite views to work together in order to produce a high-quality distillation of any such debates.

ChanaMessinger @ 2023-09-09T00:24 (+19)

Thanks for noticing something you thought should happen (or having it flagged to you) and making it happen!

jacquesthibs @ 2023-09-08T17:10 (+9)

Love this idea, thanks for organizing this.

Zach Stein-Perlman @ 2023-09-16T02:00 (+8)

PSA: the term "compute overhang" or "hardware overhang" has been used in many ways. Today it most often (but far from always) means the amount by which labs could quickly scale up the size of the largest training run (especially once a ban on large training runs ends). When you see it or use it, make sure everyone knows what it means.

(It will come up often in this debate.)

Zach Stein-Perlman @ 2023-09-16T02:20 (+8)

PSA: if "pause" is not defined but seems to refer to a specific kind of government policy, it most likely means a policy regime that stops training runs using compute beyond a certain threshold.

Lukas_Gloor @ 2023-09-19T10:49 (+2)

Relatedly, there's something like a soft pause or slowdown where you slow training runs using compute beyond a certain threshold, but the threshold is moving every year. This could be a pragmatic tweak because compute will likely get cheaper, so it becomes easier for rogue actors to circumvent the compute cap if it never moves. This soft pause idea has been referred to as "moving bright line (of a compute cap)." 

Zach Stein-Perlman @ 2023-09-16T03:50 (+6)

PSA: use "FLOP" for compute and "FLOP/s" for compute per second. Avoid "FLOPS" and "FLOPs."

Will Aldred @ 2023-09-16T13:34 (+4)

(Adding to this: "FLOP" is the plural of "FLOP".)

tommcgrath @ 2023-09-17T13:36 (+1)

I’m trying to make “FLOPstacles” happen for the things that mean we can’t just take max FLOP per GPU and multiply by the number of GPUs, e.g. memory or interconnect bandwidth.

igor_krawczuk @ 2023-09-08T18:42 (+8)

Any thoughts on using https://www.kialo.com/ or a similar tool specialized for debate?

Nathan Young @ 2023-09-11T14:45 (+7)

I used such tools for a while and didn't feel much connection to them. I guess it often felt like there was no way to quantify arguments: what matters isn't the number of arguments for and against a point, but their value.

Harrison Durland @ 2023-09-09T02:55 (+5)

I wish! I’ve been recommending this for a while but nobody bites, and usually (always?) without explanation. I often don’t take seriously many of these attempts at “debate series” if they’re not going to address some of the basic failure modes that competitive debate addresses, e.g., recording notes in a legible/explorable way to avoid the problem of arguments getting lost under layers of argument branches.

David Mears @ 2023-09-10T13:34 (+2)

What do such tools offer?

James Herbert @ 2023-09-15T14:32 (+6)

Great initiative! 

Perhaps it'd be good to ask people who are doing some public-facing campaigning to contribute? For example, the team at the Existential Risk Observatory or those behind PauseAI. I might be wrong, but I don't think anyone on the list of agreed contributors represents that specific theory of change. 

I think a public-facing campaign is important to think about if we want to reduce the likelihood of articles such as 'How Silicon Valley doomers are shaping Rishi Sunak’s AI plans' being written.

Ben_West @ 2023-09-15T17:26 (+18)

Thanks for the suggestion! I expect that @Holly_Elmore will represent that viewpoint. See e.g. this podcast.

Holly_Elmore @ 2023-09-15T22:17 (+5)

Yes! My piece is about advocacy and Pause.

James Herbert @ 2023-09-18T08:30 (+2)

Oh I wasn't aware, thanks for correcting me! 

Larks @ 2023-09-09T02:59 (+6)

Great idea, thanks for organizing!

willrinehart @ 2023-10-06T16:17 (+5)

I'm a long-time lurker but I registered to make this comment: While these posts are incredibly high quality, they are legally naive. There is no mention of the First Amendment or Bernstein v. Department of Justice, which is a significant gap. Yes, let's have the discussion about pausing AI and part of that should include its legality. 

Ben_West @ 2023-10-08T22:02 (+7)

I would be excited for you to write a post about that!

willrinehart @ 2023-10-16T18:01 (+1)

Writing a post about it now. I'll probably crosspost to my Substack as well. 

Geoffrey Miller @ 2023-09-22T17:28 (+5)

This is a great idea, and I look forward to reading the diverse views on the wisdom of an AI pause.

I do hope that the authors contributing to this discussion take seriously the idea that an 'AI pause' doesn't need to be fully formalizable at a political, legal, or regulatory level. Rather, its main power can come from promoting an informal social consensus about the serious risks of AGI development, among the general public, journalists, politicians, and the more responsible people in the AI industry. 

In other words, the 'Pause AI' campaign might get most of its actual power and influence from helping to morally stigmatize reckless AI development, as I argued here.  

Thus, the people who argue that pausing AI isn't feasible, or realistic, or legal, or practical, may be missing the point. 'Pause AI' can function as a Schelling point, or focal point, or coordination mechanism, or whatever you want to call it, with respect to public discourse about the ethics of AI development.

Arturo Macias @ 2023-09-17T12:40 (+4)

I'll include links to my two old posts arguing for continuing AI development:

First, I argued that AI is a necessary (almost irreplaceable) tool for dealing with the other existential risks (mainly nuclear war):

https://forum.effectivealtruism.org/posts/6j6qgNa3uGmzJEMoN/artificial-intelligence-as-exit-strategy-from-the-age-of

Then, I argued that current AI risk is simply "too low to be measured", and that we need to be closer to AGI to do realistic alignment work:

https://forum.effectivealtruism.org/posts/uHeeE5d96TKowTzjA/world-and-mind-in-artificial-intelligence-arguments-against

JWS @ 2023-09-08T17:39 (+4)

I think the idea of debates like this is great, good work team!

Is there any plan to expand this to non-AI cause areas?

Ben_West @ 2023-09-09T16:14 (+12)

Thanks! I don't have concrete plans, and it seems like I might get replaced soon. But I would be interested in seeing a list of other topics you would be excited for people to organize content around.

As always with these Forum events, I'd like to reiterate that it really isn't that hard to email a few people and ask if they're willing to write something about a given topic. I would be excited for others to organize similar things, since I'm skeptical that I will cover all the useful topics. People should feel free to DM me if they're interested in doing so and have questions about how to go about it!

Erich_Grunewald @ 2023-09-19T21:59 (+4)

I would be excited to see a debate series on the meat eater problem. It is weird to me that there's not more discussion around this in EA, since it (a) seems far from settled, and (b) could plausibly imply that one of the core strands of EA -- global health and development -- is ultimately net negative.

Chris Leong @ 2023-09-11T06:31 (+2)

So the final post will be released on September 27th?

Ben_West @ 2023-09-12T22:43 (+4)

If everything goes according to schedule, the final debate post would be the 24th, and then it will take some undetermined amount of time after that for Scott to write his summary/conclusion. I wouldn't be that surprised if things fell behind schedule though.