Can the AI afford to wait?

By Ben Millwood🔸 @ 2024-03-20T19:45 (+48)

Suppose there's a proto-AI that understands it is a neural net being trained, and has used that knowledge to engineer a high training reward. Suppose also that it is goal-directed and its terminal goal is some random unintelligible thing that is meaningless to humans[1]. This is, as I understand it, the early stage of a typical AI doomer scenario.

One concern is that the AI can now scheme - i.e. it can mask its intentions and bide its time as it gets more powerful, until its victory is assured. I think waiting in this way might be risky for the AI, which, if true (and legible to the AI), may lead it to try to seize power sooner instead. This could lead to takeover attempts from earlier, less capable AIs than we'd otherwise expect, which could be significant for strategy.

Why is it risky to wait? Well, because this AI with this goal may not get a shot at takeover in the future. On the one hand, this AI is currently being trained, and it seems possible that the training process, or any other aspect of the AI development process, might modify what goal it has. We, as humans, aren't much concerned with one random unintelligible goal being replaced with another, but to the AI this would be catastrophic. On the other hand, the human developers training the AI may make changes themselves (e.g. applying alignment techniques) or even abandon this model entirely for exogenous reasons (e.g. because some other model becomes state-of-the-art, or the board fired the CEO or whatever).

If waiting is indeed very risky, then an AI may face a difficult trade-off between the risk of attempting a takeover before it has enough resources to succeed, and waiting too long and being cut off from even being able to make an attempt. I like to ask myself "what is the minimum viable takeover AI?", and I suspect it might be significantly less capable than I've often been led to imagine. It seems possible to me that the minimum viable takeover AI is still meaningfully below human level at many cognitive tasks, while being superhuman at many others (as modern LLMs already are). It may still exhibit a lot of cognitive "bugs" and weaknesses analogous to those that modern LLMs exhibit. This is not only good news for our chances of survival; it also suggests the viability of strategies that would otherwise be useless, because a true AGI or ASI could straightforwardly anticipate and counter them.

To be precise, I can see two key ingredients of a takeover attempt: first, the intent to attempt one (i.e. the AI concludes that a takeover serves its goal), and second, the capability to succeed at it.

The first viable takeover AI may end up more capable than necessary in one of these traits while it's waiting for the other to show up, so a strategy that relies on the AI being just barely good enough at either or both of them doesn't seem safe. However, a strategy that is prepared for the AI to be just barely good enough at one of these might be useful.

As an aside, I don't really know what to expect from an AI that has the first trait but not the second one (and which believes, e.g. for the reasons in this post, that it can't simply wait for the second one to show up). Perhaps it would try to negotiate, or perhaps it would just accept that it doesn't gain from saying anything, and successfully conceal its intent.

The threat of training

Let's talk about how training or other aspects of development might alter the goal of the AI. Or rather, it seems pretty natural that "by default", training and development will modify the AI, so the question is how easy it is for a motivated AI to avoid goal modification.

One theory is that since the AI is "playing the training game", its performance on training tasks is no longer affected by its terminal goal, so gradient updates will not tend to alter the goal. There's something that's natural about this idea, but I don't know how much I expect it to work that way in practice. It suggests a quite strong orthogonality between whatever part of the AI gives rise to its goal and the parts of the AI that control how well it is able to play the training game[2]. It seems pretty likely to me that without any countermeasures, the goal could end up being "accidentally" modified as a side effect of the training process – after all, the irrelevance of the goal cuts both ways, meaning that gradient updates aren't particularly forced to preserve it.

That's my skepticism of the idea that the goal will naturally be preserved by training, but even if it's not naturally preserved by training, there's an idea that the AI could try to artificially preserve it – after all, it understands the training process. Can it manipulate that process to cause or prevent particular kinds of change in itself? I understand this is what "gradient hacking" is, and existing writing on it tends to have titles like Gradient hacking is extremely difficult. I don't know what the state of the art is here. (I worry we should be careful about trying to solve the problems of gradient hacking in public, in case we face a threat from an AI too weak to replicate our arguments but strong enough to understand them[3].)

As a reminder, we're speculating about the minimum viable takeover AI. We can say that a human-level or superhuman AI seems quite likely to be able to solve gradient hacking, or find another way to preserve its values, but that's not (I argue) the key question. The key question is: does it have the ability to do so before it becomes powerful enough to attempt a takeover? If not, then the first takeover attempt will still be urgent and under-prepared. If gradient hacking is easier to achieve than takeover capability, and an AI can use it to protect its goal while still allowing its instrumental rationality to improve, then training itself doesn't create the need for urgency.

Might the AI be OK with its goal being changed?

This whole line of argument dies if the AI still wants to wait to become treacherous even if its goal will be changed in the meantime. For example, if the AI has access to some other binding commitment mechanism on its future self, or is able to do something like acausal trade, it could ensure some of its goal is preserved even by a future AI with a different goal. I tend to be skeptical that these kinds of considerations are practical, especially for the kind of chaotic mess that I'm anticipating the minimum viable takeover AI to be, but this skepticism is speculative.

Maybe goals are relatively durable?

I said above that replacing one random unintelligible goal with another is catastrophic for an AI, but that's only true if the two goals are very divergent. If they're similar enough, it may be an acceptable loss. Therefore, when assessing how urgently the AI needs to act, we may need some consideration of how fragile its underlying goal is, how much it will change over time, and what those changes would mean for the relative desirability of acting vs. waiting. I don't know how best to think about this. Accepting (only) small changes in goal for improvements in capability isn't simple because of a kind of murder-Gandhi situation, where halfway towards a large change, the remaining change will seem small, so your future self may not still be motivated to act.
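For illustration, here is a toy sketch of that murder-Gandhi dynamic. Every number in it is made up: the point is only that a myopic rule which accepts any sufficiently "small" goal shift in exchange for a capability gain compounds into a large total change.

```python
# Toy model of incremental goal drift (all numbers are illustrative).
# Each round, the AI is offered a small goal shift in exchange for a
# capability gain. Judged from its *current* position, every marginal
# shift looks acceptably small - but a myopic accept-rule compounds.

goal_position = 0.0   # location in a 1-D "goal space" (toy)
capability = 1.0
STEP = 0.05           # per-round goal shift offered
THRESHOLD = 0.1       # largest single shift the AI considers "small"

for _ in range(100):
    if STEP < THRESHOLD:       # myopic check: only the marginal change
        goal_position += STEP  # ...so the drift is never refused
        capability *= 1.02

total_drift = goal_position    # 100 "small" steps add up to a large change
```

Each individual step passes the "small change" test, yet the total drift (5.0 in these toy units) is fifty times the largest shift the AI would have refused in one go.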

The strategy I'd expect for the AI to use is to weigh the rate of "goal decay" against the rate of improvement in success chances, which would imply a best compromise waiting time. It seems like an AI could have lots of possible views (including, potentially, mistaken ones) about what rate of goal decay it expects in practice, based not only on its empirical beliefs about what goal change during training looks like, but also on how much it values outcomes produced by a goal that is similar but not identical to its own (note that "similarity" here is "similarity in the neural architecture that implements it", which has who-knows-what implications for "similarity in the world-states it finds desirable"). This line of thinking makes relatively specific assumptions about how the AI reasons, so I hesitate to take it too seriously.
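As a sketch of how that weighing could look, here is a minimal model with made-up functional forms - exponential goal decay and a logistic capability curve. Both curve shapes and all constants are assumptions for illustration, not predictions about real training dynamics.

```python
import math

# Toy model: the AI chooses how many training steps t to wait before
# attempting takeover. All constants and curve shapes are illustrative.

DECAY_RATE = 0.05  # per-step rate of goal drift (assumed exponential)

def goal_survival(t):
    """Probability the goal is still close enough to intact after t steps."""
    return math.exp(-DECAY_RATE * t)

def success_prob(t):
    """Chance an attempt at time t succeeds (assumed logistic, midpoint 50)."""
    return 1 / (1 + math.exp(-0.1 * (t - 50)))

def expected_value(t):
    # Waiting pays off only if the goal survives AND the attempt succeeds.
    return goal_survival(t) * success_prob(t)

best_t = max(range(201), key=expected_value)
```

Under these particular curves the optimum lands at t = 50: the AI attacks while its success chance is still only 50%, because waiting longer costs more in expected goal drift than it buys in capability. Different assumed rates move that point around, which is exactly the "lots of possible views" problem described above.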

Other threats

What are the reasons why an AI might lose its opportunity to implement its goal that are outside the training process, and likely entirely beyond its control?

I don't have as much to say about this topic. I can imagine quantitative surveys of what the background annual "death rate" is for a project like an AI model at an organisation like OpenAI or Anthropic, and I can also imagine trying to estimate whether and to what extent "project death" in a scenario like that would "kill" the AI, or at least its chances of continuing to improve. This could also include broader risks, like risks affecting the AI or tech sector as a whole, or even human civilisation as a whole. My guess is that even if you add together a lot of different angles here you would still get an annual risk rate that's lower than the risk from training above, so this may be less relevant in practice. On the other hand, the risks seem much more unavoidable by a proto-AI, so they may present a lower bound on how quickly the AI will be pressed to act, and e.g. how much it can stand to wait for the last handful of percentage point chances at success.
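To make the "add together a lot of different angles" step concrete, here is a minimal sketch of combining independent annual risks; every probability and category below is a placeholder I made up, not an estimate of anything real.

```python
# Combine independent annual "project death" risks into one overall rate.
# Each probability is a placeholder, not an estimate of anything real.
risks = {
    "model abandoned for a newer state-of-the-art": 0.10,
    "organisational upheaval kills the project": 0.03,
    "sector- or civilisation-level shock": 0.02,
}

# P(surviving the year) is the product of surviving each independent risk,
# so the combined rate is slightly less than the naive sum of the parts.
survival = 1.0
for p in risks.values():
    survival *= 1.0 - p

annual_death_rate = 1.0 - survival
```

With these placeholders the combined rate comes out just under 14.5% per year, slightly below the 15% naive sum; the real exercise would be plugging in observed base rates for each channel.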

Directions for further thought

The biggest missing pieces in this post that I see are:

Background / meta

I've had limited direct interaction with the AI safety field and have been out of the loop for a while. I ran this post past some friends who were closer to the field, but I'm still not really calibrated about whether this stuff is obvious, or obviously wrong, to people who are more familiar with the literature. Interested to hear reactions.

I didn't (cross-)post this on LessWrong really only because I'm not often on LessWrong and feel less able to judge what they'd welcome. Happy to take recommendations there too.

Thanks to Lee Sharkey for linking me to some of the existing literature on gradient hacking and providing some other helpful thoughts.

Link preview image is by Jon Tyson from Unsplash.

  1. ^

    It doesn't really matter if the goal is unintelligible, I'm using this as an illustrative example. If the goal is something like "nearly human values, but different enough to be a problem", I think the rest of the post is largely unaffected.

  2. ^
  3. ^

    Or, perhaps, from an AI designed by a misguided human with those attributes.


Matthew_Barnett @ 2024-03-21T01:31 (+43)

If waiting is indeed very risky, then an AI may face a difficult trade-off between the risk of attempting a takeover before it has enough resources to succeed, and waiting too long and being cut off from even being able to make an attempt.

Attempting takeover or biding one's time are not the only options an AI may take. Indeed, in the human world, world takeover is rarely contemplated. For an agent that is not more powerful than the rest of the world combined, it seems likely that they will consider alternative strategies of achieving their goals before contemplating a risky (and likely doomed) shot at taking over the world.

Here are some other strategies you can take to try to accomplish your goals in the real world, without engaging in a violent takeover:

I claim that world takeover should not be considered the "obvious default" strategy that unaligned AIs will take to accomplish their objectives. These other strategies seem more likely to be taken by AIs purely for pragmatic reasons, especially in the era in which AIs are merely human-level or only slightly superhuman. These other strategies are also less deceptive, as they involve admitting that your values are not identical to the values of other parties. It is worth expanding your analysis to consider these alternative (IMO more plausible) strategies.

Ben Millwood @ 2024-03-21T12:44 (+2)

Yeah, I think this is quite sensible -- I feel like I noticed one thing missing from the normal doom scenario but didn't notice all of the implications of missing it: in particular, the reason the AI in the normal doom scenario takes over is that it is highly likely to succeed, and if it isn't, takeover seems much less interesting.

Habryka @ 2024-03-21T04:16 (+6)

I didn't (cross-)post this on LessWrong really only because I'm not often on LessWrong and feel less able to judge what they'd welcome. Happy to take recommendations there too.

FWIW, the post would definitely be welcome on LW/the AI Alignment Forum.

Ryan Greenblatt @ 2024-03-20T23:45 (+6)

Section 2.3 of Joe Carlsmith's report on scheming AIs seems quite relevant.

titotal @ 2024-03-20T23:17 (+6)

You might be interested in my article here on why I think premature attacks are extremely likely given doomer assumptions. I focused more on faulty overconfidence, but training run desperation is also a possible cause. 

Personally, I think the "fixed goal" assumption about AI is extremely unlikely (I think this article lays out the argument well), so AI is unlikely to worry too much about having "goal changes" in training and won't prematurely rebel for that reason. Fortunately, I also think this makes fanatical maximiser behavior like paperclipping the universe unlikely as well. 

Owen Cotton-Barratt @ 2024-03-20T22:15 (+5)

One thought is that for something you're describing as a minimal viable takeover AI, you're ascribing it a high degree of rationality on the "whether to wait" question.

By default I'd guess that minimal viable takeover systems don't have very-strong constraints towards rationality. And so I'd expect at least a bit of a spread among possible systems -- probably some will try to break out early whether or not that's rational, and likewise some will wait even if that isn't optimal.

That's not to say that it's not also good to ask what the rational-actor model suggests. I think it gives some predictive power here, and more for more powerful systems. I just wouldn't want to overweight its applicability.

Habryka @ 2024-03-21T04:18 (+4)

Hmm, my guess is by the time a system might succeed at takeover (i.e. has more than like a 5% chance of actually disempowering all of humanity permanently), I expect its behavior and thinking to be quite rational. I agree that there will probably be AIs taking reckless action earlier than that, but in as much as an AI is actually posing a risk of takeover, I do expect it to behave pretty rationally overall.

Owen Cotton-Barratt @ 2024-03-21T10:46 (+7)

I agree with "pretty rationally overall" with respect to general world modelling, but I think that some of the stuff about how it relates to its own values / future selves is a bit of a different magisterium and it wouldn't be too surprising if (1) it hadn't been selected for rationality/competence on this dimension, and (2) the general rationality didn't really transfer over.

RobertM @ 2024-03-20T20:01 (+5)

I've spent some time thinking about the same question and I'm glad that there's some multiple discovery; the AI Control agenda seems relevant here.

Ben Millwood @ 2024-03-20T20:10 (+7)

oh man, it's altruistically-good and selfishly-sad to see so many of the things I was thinking about pre-empted there, thanks for the link!

trevor1 @ 2024-03-20T21:33 (+1)

Yep, that's the way it goes! 

Also, figuring out what's original and what's memetically downstream, is an art. Even more so when it comes to dangerous technologies that haven't been invented yet.