Let’s think about slowing down AI

By Katja_Grace @ 2022-12-23T19:56 (+334)

This is a crosspost, probably from LessWrong. Try viewing it there.


Lizka @ 2022-12-26T03:19 (+35)

I'm curating this post. (See also the comments on LessWrong.) 

Some sections that stood out to me (turns out it's lots of sections!): 

Miles_Brundage @ 2022-12-23T20:19 (+22)

Noting that in addition to the LW discussion linked below, there's also some discussion on an earlier EA Forum post here: https://forum.effectivealtruism.org/posts/sFemFbiFTntgtQDbD/katja-grace-let-s-think-about-slowing-down-ai

Ramiro @ 2022-12-24T11:34 (+8)

There's also something like an optics problem, at least for outsiders (by which I mean most people, including myself): when an AI developer voices concerns over AI safety / ethics and then develops an application without having solved those issues, I feel tempted to conclude either that it's a case of insincerity (and talking about AI safety is ethics washing, or a way of attracting talent without increasing compensation), or that people are willingly courting doom.

Sharmake @ 2022-12-28T19:52 (+6)

I disagree with the thrust of this post (that we should slow down AI), but I do agree with the object-level arguments, and so I think it's worthy of curation despite my slight opposition to AI slowdowns.

To quote Rohin Shah on LW:

  1. It makes it easier for a future misaligned AI to take over by increasing overhangs, both via compute progress and algorithmic efficiency progress. (This is basically the same sort of argument as "Every 18 months, the minimum IQ necessary to destroy the world drops by one point.")
  2. Such strategies are likely to disproportionately penalize safety-conscious actors.

(As a concrete example of (2), if you build public support, maybe the public calls for compute restrictions on AGI companies and this ends up binding the companies with AGI safety teams but not the various AI companies that are skeptical of “AGI” and “AI x-risk” and say they are just building powerful AI tools without calling it AGI.)

For me personally there's a third reason, which is that (to a first approximation) I have a limited amount of resources, and it seems better to spend them on the "use good alignment techniques" plan rather than the "try to not build AGI" plan. But that's specific to me.

Sharmake @ 2022-12-24T18:11 (+2)

I'd like to ask a few questions about slowing down AGI as they may turn out to be cruxes for me.

  1. How popular/unpopular is AI slowdown? Ideally, we'd get AI slowdown/AI progress/Neutral as choices in a poll. I'd also ideally like different framings of the problem, to test how much framing affects people's choices. But at minimum I want a poll on how popular/unpopular AI slowdown is.

  2. How much does the government want AI to be slowed down? Is Trevor's story correct that the US government is unwilling to countenance an AI slowdown, and that speeding AI up is instead the norm when interacting with the government?

  3. How much will AI companies lobby against AI slowdown? If this is a repeat of the fossil fuel situation, with AI viewed by the public as extremely good, I probably would not support much object-level work in AI governance and would instead go meta. But if AI companies support AI slowdown, or at least don't oppose it, then things could be okay, depending on the answers to 1 and 2.

Geofrey Junior Waako @ 2023-12-23T06:02 (+1)

AI is a potentially destructive phenomenon if it is not well managed, and as with all historical disruptors, Africa has always been severely affected by such developments. We are still on the journey of adapting to, adopting, and learning current state-of-the-art technologies, and yet AI is growing at jet-like speed. I believe that more sensitization in Africa is a viable stepping stone to addressing the challenges that come with Artificial Intelligence.

Denis @ 2023-09-20T22:30 (+1)

I've just been reading this post as part of the BlueDot AI Safety Fundamentals training. 

I am very sympathetic to the thinking behind this, and at the same time very conscious of the challenges. 

It's a feeling that I get over and over as I learn more about AI Safety - that there are things that the world "should" be doing, that the world would be doing if it were a well-run corporation with a competent CEO (or a benevolent dictator ...), but that these things are unlikely to happen in the real world because we have such difficulty agreeing on principles and then deciding quickly on tangible actions (unless they involve using very expensive weapons to attack people, in which case we tend to decide much faster ...).

I know this article will cause some great discussions in our next BlueDot cohort meetings, and the 330+ upvotes clearly show you've really got a lot of people thinking about this. 

But I would like to comment on another aspect of this post: It is absolutely beautifully written. 

It makes the points in crystal-clear, simple language. The structure is logical. There are countless examples to ensure each point is well understood. If I were to disagree with anything, it would be very easy to pinpoint exactly what I'm disagreeing with, because it is so well-structured. 

But more than that, even, it's just a joy to read, with even the occasional joke to keep the readers on their toes. I never imagined I'd reach the end of a 45-minute read and almost wish it were longer.  

Most posts on the EA Forum are very well written. The standards for clarity, coherence, and precision are very high. But a few, like this one, are just beautiful, and make me wish they could be shared with a much broader audience, beyond the EA community.

I know that great writing takes work, so thank you for this post! 

DPiepgrass @ 2022-12-30T04:39 (+1)

So there are these people we call the "AGI alignment community". This privileges "alignment" as the intervention of choice.

I propose calling it the "AGI c-risk community" instead (c-risk = catastrophic risk), or "AGI risk community" for short. [Edit: on second thought: "AGI safety"]

Jeroen_W @ 2022-12-27T15:19 (+1)

This is a great and interesting post! Thanks for sharing. I thought Scott's arguments were really convincing, but you updated me away from them. Some small notes:

Under 'Convincing people doesn't seem that hard': "I don’t remember ever having any trouble discussing AI risk with random strangers." We have wildly different experiences! I feel like every time I try to explain it to friends or family, they think I'm crazy. They don't believe it at all. But perhaps I'm just really bad at explaining it. This is why I'm pretty pessimistic that it's easy to convince people. I still don't want to give up on it, though.

"I arrogantly think I could write a broadly compelling and accessible case for AI risk" Please do this! I would love to see it. We need more easily accessible introductions to AI risk. If it can help me become good at explaining the issue, that would be amazing.

A question that's perhaps a little less relevant: I think Scott made a metaphor once that AI safety folks shouldn't be like "climate activists" fighting against "fossil fuel companies" (AI capabilities folks). If coordination is possible, what would be a good metaphor? Are there other industries with capabilities and safety people working together?