AGI Timelines in Governance: Different Strategies for Different Timeframes
By simeon_c @ 2022-12-19T21:31 (+110)
alexrjl @ 2022-12-20T06:52 (+22)
whether you have a 5-10 year timeline or a 15-20 year timeline
Something that I'd like this post to address that it doesn't is that to have "a timeline" rather than a distribution seems ~indefensible given the amount of uncertainty involved. People quote medians (or modes, and it's not clear to me that they reliably differentiate between these) ostensibly as a shorthand for their entire distribution, but then discussion proceeds based only on the point estimates.
I think a shift of 2 years in the median of your distribution looks like a shift of only a few % in your P(AGI by 20XX) numbers for all 20XX, and that means discussion of what people who "have different timelines" should do is usually better framed as "what strategies will turn out to have been helpful if AGI arrives in 2030".
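(For readers who want to check how this plays out for their own distribution, here is a minimal Python sketch. The lognormal shape and all parameters are illustrative assumptions rather than anything stated in the thread; how much P(AGI by 2030) moves when the median shifts by two years depends on how wide the distribution is.)

```python
from math import erf, log, sqrt

def p_agi_by(year, median_year, sigma=1.0, now=2023):
    """P(AGI by `year`) under a lognormal distribution over years-from-now.

    The distribution's median is placed at `median_year`; `sigma` (the
    log-space standard deviation) controls how spread out the forecast is.
    All parameters here are hypothetical.
    """
    t = year - now           # years until the evaluation date
    m = median_year - now    # years until the median arrival date
    z = log(t / m) / sigma   # standard-normal quantile of the lognormal CDF
    return 0.5 * (1 + erf(z / sqrt(2)))

# Two hypothetical forecasters whose medians differ by two years:
for median in (2030, 2032):
    print(f"median {median}: P(AGI by 2030) = {p_agi_by(2030, median):.2f}")
```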
While this doesn't make discussion like this post useless, I don't think this is a minor nitpick. I'm extremely worried by "plays for variance", some of which are briefly mentioned above (though far from the worst I've heard). I think these tend to look good only on worldviews which are extremely overconfident and treat timelines as point estimates/extremely sharp peaks. More balanced views, even those with a median much sooner than mine, should typically realise that the EV gained in the worlds where things move quickly is not worth the expected cost in worlds where they don't. This is in addition to the usual points about co-operative behaviour when uncertain about the state of the world, adverse selection, the unilateralist's curse, etc.
simeon_c @ 2022-12-20T10:01 (+5)
Thanks for your comment!
That's an important point that you're bringing up.
My sense is that at the movement level, the consideration you bring up is super important. Indeed, even though I have fairly short timelines, I would like funders to hedge for long timelines (e.g. fund stuff for China AI Safety). Thus I think that big actors should have in mind their full distribution to optimize their resource allocation.
That said, despite that, I have two disagreements:
- I feel like at the individual level (e.g. people working in governance, or even organizations), it's too expensive to optimize over a distribution, and thus you should probably optimize with a strategy of "I want to have solved my part of the problem by 20XX". And for that purpose, identifying the main characteristics of the strategic landscape at that point (which this post is trying to do) is useful.
- "the EV gained in the worlds where things move quickly is not worth the expected cost in worlds where they don't." I disagree with this statement, even at the movement level. For instance, I think that the trade-off of "should we fund this project which is not the ideal one but still quite good?" is one that funders often encounter, and I would expect funders to have more risk aversion than necessary, because when you're not highly time-constrained that's probably the best strategy (i.e. in every field except AI safety, it's probably a way better strategy to trade off a couple of years against better founders).
Finally, I agree that "the best strategies will have more variance" is not good advice for everyone. The reason I decided to write it rather than not is that I think the AI governance community tends to have too high a degree of risk aversion (which is a good feature in their daily job), which mechanically penalizes a decent number of actions that are way more useful under shorter timelines.
NickLaing @ 2022-12-20T17:58 (+1)
There's a balance here for communication purposes - concrete potential timelines are easier for some of us to understand than distributions. Perhaps both could be used?
HaydnBelfield @ 2022-12-20T12:56 (+12)
This is a really useful and interesting post that I'm glad you've written! I agree with a lot of it, but I'll mention one bit I'm less sure about.
I think we can have more nuance about governments "being in the race" or their "policy having strong effects". I agree that pre-2030, a large, centralised, government-run development programme like the Apollo Project is less likely (I assume this is the central thing you have in your mind). However there are other ways governments could be involved, including funding, regulating and 'directing' development and deployment.
I think cyber weapons and cyber defence is a useful comparison. Much of the development - and even deployment - is led by the private sector: defence contractors in the US, criminals in some other states. Nevertheless, much of it is funded, regulated and directed by states. People didn't think this would happen in the late 1990s and 2000s - they thought it would be private sector led. But nevertheless with cyber, we're now in a situation where the major states (e.g. those in the P5, with big economies, militaries and nuclear weapons) have the preponderance of cyber power - they have directed and are responsible for all the largest cyber attacks (Stuxnet, 2016 espionage, NotPetya, WannaCry etc). It's a public-private partnership, but states are in the driving seat.
Something similar might happen with AI this side of 2030, without the situation resembling the Apollo Project.
For much more on this, Jade Leung's thesis is great: Who will govern artificial intelligence? Learning from the history of strategic politics in emerging technologies
simeon_c @ 2022-12-20T14:46 (+6)
Thanks for your comment!
A couple of remarks:
- Regulations that cut X-risks are strong regulations: My sense is that regulations that really cut X-risks at least a bit are pretty "strong", i.e. in the reference class of "Constrain labs to airgap and box their SOTA models while they train them" or "Any model which is trained must be trained following these rules/applying these tests". So what matters in terms of regulation is "will governments take such actions?" and my best guess is no, at least not without public opinion caring a lot about it. Do you have in mind an example of regulation which is a) useful and b) softer than that?
- Additional state funding would be bad by default: I think that the default of "more funding goes towards AGI" is that it accelerates capabilities more (e.g. backing some labs so that they move faster than China, things like that). Which is why I'm not super excited about increasing the amount of funding governments put into AGI. But to the extent that there WILL be some funding, then it's nice to steer it towards safety research.
And finally, I like the example you gave on cyber. The point I was making was something like "Your theories of change for pre-2030 timelines shouldn't rely too much on national government policy" and my understanding of what you're saying is something like "that may be right, but national governments are still likely to have a lot of (bad by default) influence, so we should care about them".
I basically had in mind this kind of scenario where states don't do the research themselves but are backing some private labs to accelerate their own capabilities, and it makes me more worried about encouraging states to think about AGI. But I don't put that much weight on these scenarios yet.
How confident are you that governments will get involved in meaningful private-public collaboration around AGI by 2030? A way of operationalizing that could be "A national government spends more than $1 billion in a single year on a collaboration with a lab with the goal of accelerating research on AGI".
If you believe that it's >50%, that would definitely update me towards "we should still invest a significant share of our resources in national policy, at least in the UK and the US, so that they don't make really bad moves".
HaydnBelfield @ 2022-12-20T17:33 (+7)
I think my point is more like "if anyone gets anywhere near advanced AI, governments will have something to say about it - they will be a central player in shaping its development and deployment." It seems very unlikely to me that governments would not notice or do anything about such a potentially transformative technology. It seems very unlikely to me that a company could train and deploy an advanced AI system of the kind you're thinking about without governments regulating and directing it. On funding specifically, I would probably be >50% on governments getting involved in meaningful private-public collaboration if we get closer to substantial leaps in capabilities (though it seems unlikely to me that AI progress will get to that point by 2030).
On your regulation question, I'd note that the EU AI Act, likely to pass next year, already proposes the following requirements applying to companies providing (e.g. selling, licensing or selling access to) 'general purpose AI systems' (e.g. large foundation models):
- Risk Management System
- Data and data governance
- Technical documentation
- Record-keeping
- Transparency and provision of information to users
- Human oversight
- Accuracy, robustness and cybersecurity
So they'll already have to do (post-training) safety testing before deployment. Regulating the training of these models is different and harder, but even that seems plausible to me at some point, if the training runs become ever larger and potentially more consequential. Consider the analogy that we regulate biological experiments.
simeon_c @ 2022-12-20T20:08 (+5)
I think that our disagreement comes from what we mean by "regulating and directing it."
My rough model of what usually happens in national governments (and not the EU, which is a lot more independent from its citizens than the typical national government) is that there are two scenarios:
- Scenario 1, in which national governments regulate or act on something nobody cares about (in particular, not the media). That creates a lot of degrees of freedom and the possibility of doing fairly ambitious things (cf. Secret Congress).
- Scenario 2, in which national governments regulate things that many people care about and that attract a lot of attention, and then nothing gets done and most measures are fairly weak. In this scenario my rough model is that national governments do the smallest thing that satisfies their electorate + key stakeholders.
I feel like we're extremely likely to be in scenario 2 regarding AI, and thus that no significant measure will be taken, which is why I put the emphasis on "no strong [positive] effect" on AI safety. So basically I feel like the best you can probably do in national policy is something like "prevent them from doing bad things" (which is really good if it's a big risk) or "do mildly good things". But to me, it's quite unlikely that we go from a world where we die to a world where we don't die thanks to a theory of change which is focused on national policy.
The EU AI Act is a bit different in that, as I said above, the EU is much less tied to the daily worries of citizens and thus operates under fewer constraints. Thus I think that it's indeed plausible that the EU does something ambitious on GPAIS, but I think that unfortunately it's unlikely that the US will replicate something similar locally, and that the EU compliance mechanisms are not super likely to cut the worst risks from UK and US companies.
Regulating the training of these models is different and harder, but even that seems plausible to me at some point
I think that it's plausible but not likely, and given that it would be the intervention that would cut the most risks, I tend to prefer corporate governance which seems significantly more tractable and neglected to me.
Out of curiosity, could you refer to a specific event you'd expect to see "if we get closer to substantial leaps in capabilities"? I think that it's a useful exercise to disagree fruitfully on timelines and I'd be happy to bet on some events if we disagree on one.
pete @ 2022-12-19T21:52 (+11)
It could be that I love this because it's what I'm working on (raising safety awareness in corporate governance) but what a great post. Well structured, great summary at the end.
Misha_Yagudin @ 2022-12-20T20:13 (+9)
I am quite confused about what probabilities here mean, especially with prescriptive sentences like "Build the AI safety community in China" and "Beware of large-scale coordination efforts."
I also disagree with the "vibes" of probability assignment to a bunch of these, and the lack of clarity on what these probabilities entail makes it hard to verbalize these.
simeon_c @ 2022-12-20T20:24 (+1)
Hey Misha! Thanks for the comment!
I am quite confused about what probabilities here mean, especially with prescriptive sentences like "Build the AI safety community in China" and "Beware of large-scale coordination efforts."
As I wrote in note 2, I'm claiming here that the statement is more likely to be true under these timelines than under the other timelines. But how could I make that clearer without cluttering things too much? Maybe by putting note 2 under the table in italics?
I also disagree with the "vibes" of probability assignment to a bunch of these, and the lack of clarity on what these probabilities entail makes it hard to verbalize these.
I see. I hesitated over the trade-off between (1) "put no probabilities" and (2) "put vague probabilities", because I feel that the second gives a lot more signal on how confident I am in what I say and allows people to disagree more fruitfully, but at the same time it gives a "seriousness" signal which is not good when the predictions are not actual predictions.
Do you think that putting no probabilities would have been better?
By "I also disagree with the vibes of probability assignment to a bunch of these", do you mean that it seems over/underconfident in a bunch of ways when you try to do a similar exercise?
Misha_Yagudin @ 2022-12-20T22:21 (+7)
Well, yeah, I struggle with interpreting that:
- Prescriptive statements have no truth value — hence I have trouble understanding how they might be more likely to be true.
- Comparing "what's more likely to be true" is also confusing as, naively, you are comparing two probabilities (your best guesses) of X being true conditional on "T" and "not T"; and one is normally very confident in their arithmetic abilities.
- There are less naive ways of interpreting that would make sense, but they should be specified.
- Lastly, and probably most importantly, a "probability of being more likely under a condition" is not illuminating (in these cases, the interesting question is, e.g., how much larger the expected returns to community building actually are; one possible way to make this concrete is sketched below).
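A minimal sketch of one possible formalization of the "less naive" readings above, assuming a table entry is read as a claim about conditional helpfulness rather than truth; neither commenter proposes exactly this:

```latex
% Illustrative formalization only. Let T denote "AGI arrives within 5-10 years"
% and A a recommendation such as "build the AI safety community in China".
% Since A is prescriptive, P(A is true) is undefined; what can be compared is
% how helpful A turns out to be under each timeline hypothesis:
\[
  \Lambda(A) \;=\; \frac{P(A \text{ turns out helpful} \mid T)}
                        {P(A \text{ turns out helpful} \mid \neg T)}.
\]
% A table entry can then be read as "\(\Lambda(A) > 1\)" (or a rough guess at
% its size), which also speaks to how much larger the expected returns to,
% e.g., community building are under shorter timelines.
```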
Sorry for the lack of clarity: I meant that despite my inability to interpret probabilities, I could sense their vibes, and I hold different vibes. And disagreeing with vibes is kinda difficult because you are unsure if you are interpreting them correctly. Typical forecasting questions aim to specify the question and produce probabilities to make underlying vibes more tangible and concrete — maybe allowing to have a more productive discussion. I am generally very sympathetic to the use of these as appropriate.
NickLaing @ 2022-12-20T17:56 (+6)
Thanks so much for this! I'm a short-term global development kind of guy, but this post was so well written and made so much sense it got me interested in this AGI stuff. Mad respect for your ability to clearly communicate quite complicated topics - you have a future in writing.
simeon_c @ 2022-12-20T20:12 (+6)
Haha, you probably don't realize it, but "you" is actually four people: Amber Dawn for the first draft of the post, me (Simeon) for the ideas, the table and the structure of the post, and me, Nicole Nohemi & Felicity Riddel for the partial rewriting of the draft to make it clearer.
So the credits are highly distributed! And thanks a lot, it's great to hear that!
Karl von Wendt @ 2022-12-20T10:33 (+6)
I strongly disagree with "Avoid publicizing AGI risk among the general public" (disclaimer: I'm a science fiction novelist about to publish a novel about AGI risk, so I may be heavily biased). Putin said in 2017 that "the nation that leads in AI will be the ruler of the world". If anyone who could play any role at all in developing AGI (or uncontrollable AI as I prefer to call it) isn't trying to develop it by now, I doubt very much that any amount of public communication will change that.
On the other hand, I believe our best chance of preventing or at least slowing down the development of uncontrollable AI is a common, clear understanding of the dangers, especially among those who are at the forefront of development. To achieve that, a large amount of communication will be necessary, both within development and scientific communities and in the public.
I see various reasons for that. One is the availability heuristic: People don't believe there is an AI x-risk because they've never seen it happen outside of science fiction movies and nobody but a few weird people in the AI safety community is talking seriously about it (very similar to climate change a few decades ago). Another reason is social acceptance: As long as everyone thinks AI is great and the nation with the most AI capabilities wins, if you're working on AI capabilities, you're a hero. On the other hand, if most people think that strong AI poses a significant risk to their future and that of their kids, this might change how AI capabilities researchers are seen, and how they see themselves. I'm not suggesting disparaging people working at AI labs, but I think working in AI safety should be seen as "cool", while blindly throwing more and more data and compute at a problem and seeing what happens should be regarded as "uncool".
simeon_c @ 2022-12-20T11:08 (+16)
Thanks for your comment!
First, you have to keep in mind that when people talk about "AI" in industry and policymaking, they usually have mostly non-deep-learning or vision deep learning techniques in mind, simply because they mostly don't know the academic ML field but have heard that "AI" was becoming important in industry. So this sentence is little evidence that Russia (or any other country) is trying to build AGI, and I'm at ~60% that Putin wasn't thinking about AGI when he said that.
If anyone who could play any role at all in developing AGI (or uncontrollable AI as I prefer to call it) isn't trying to develop it by now, I doubt very much that any amount of public communication will change that.
I think that you're deeply wrong about this. Policymakers and people in industry, at least until ChatGPT, had no idea what was going on (e.g. at the AI World Summit two months ago, very few people even knew about GPT-3). SOTA large language models are not really properly deployed, so nobody cared about them or even knew about them (until ChatGPT at least). The level of investment right now in top training runs probably doesn't go beyond $200M. The GDP of the US is $20 trillion. Likewise for China. Even a country like France could unilaterally put $50 billion into AGI development and accelerate timelines quite a lot within a couple of years.
Even post-ChatGPT, people are very bad at projecting what it means for the coming years, and they still have a prior that human intelligence is very specific and can't be beaten, which prevents them from realizing the full power of this technology.
I really strongly encourage you to go talk to actual people from industry and policy to get a sense of their knowledge on the topic. And I would strongly recommend not publishing your book as long as you haven't done that. I also hope that a lot of people who have thought about these issues have proofread your book because it's the kind of thing that could really increase P(doom) substantially.
I think that to make your point, it would be easier to defend the line that "even if more governments got involved, that wouldn't change much". I don't think that's right because if you gave $10B more to some labs, it's likely they'd move way faster. But I think that it's less clear.
a common, clear understanding of the dangers
I agree that it would be something good to have. But the question is: is it even possible to have such a thing?
I think that within the scientific community, it's roughly possible (but then your book/outreach medium must be highly targeted towards that community). Within the general public, I think that it's ~impossible. Climate change, a problem which is much easier to understand and explain, is already way too complex for the general public to have a good idea of what the risks are and what the promising solutions to those risks are (e.g. many people's top priorities are to eat organic food, recycle and decrease plastic consumption).
I agree that communicating with the scientific community is good, which is why I said that you should avoid publicizing only among "the general public". If you really want to publish a book, I'd recommend targeting the scientific community, which is not at all the same public as the general public.
"On the other hand, if most people think that strong AI poses a significant risk to their future and that of their kids, this might change how AI capabilities researchers are seen, and how they see themselves"
I agree with this theory of change and I think that it points a lot more towards "communicate in the ML community" than "communicate towards the general public". Publishing great AI capabilities is mostly cool for other AI researchers and not that much for the general public. People in San Francisco (where most of the AGI labs are) also don't care much about the general public and whatever it thinks; the subculture there and what is considered to be "cool" is really different from what the general public thinks is cool. As a consequence, I think they mostly care about what their peers are thinking about them. So if you want to change the incentives, I'd recommend focusing your efforts on the scientific & the tech community.
HaydnBelfield @ 2022-12-20T13:03 (+14)
Strongly agree, upvoted.
Just a minor point on the Putin quote, as it comes up so often, he was talking to a bunch of schoolkids, encouraging them to do science and technology. He said similarly supportive things about a bunch of other technologies. I'm at >90% he wasn't referring to AGI. He's not even that committed to AI leadership: he's taken few actions indicating serious interest in 'leading in AI'. Indeed, his Ukraine invasion has cut off most of his chip supplies and led to a huge exodus of AI/CS talent. It was just an off-the-cuff rhetorical remark.
simeon_c @ 2022-12-20T14:18 (+1)
Oh that's fun, thanks for the context!
Karl von Wendt @ 2022-12-22T11:11 (+1)
Policymakers and people in industry, at least until ChatGPT, had no idea what was going on (e.g. at the AI World Summit two months ago, very few people even knew about GPT-3). SOTA large language models are not really properly deployed, so nobody cared about them or even knew about them (until ChatGPT at least).
As you point out yourself, what makes people interested in developing AGI is progress in AI, not the public discussion of potential dangers. "Nobody cared about" LLMs is certainly not true - I'm pretty sure the relevant people watched them closely. That many people aren't concerned about AGI or doubt its feasibility by now only means that THOSE people will not pursue it, and any public discussion will probably not change their minds. There are others who think very differently, like the people at OpenAI, Deepmind, Google, and (I suspect) a lot of others who communicate less openly about what they do.
I agree that [a common understanding of the dangers] would be something good to have. But the question is: is it even possible to have such a thing?
I think that within the scientific community, it's roughly possible (but then your book/outreach medium must be highly targeted towards that community). Within the general public, I think that it's ~impossible.
I don't think you can easily separate the scientific community from the general public. Even scientific papers are read by journalists, who often publish about them in a simplified or distorted way. Already there are many alarming posts and articles out there, as well as books like Stuart Russell's "Human Compatible" (which I think is very good and helpful), so keeping the lid on the possibility of AGI and its profound impacts is way too late (it was probably too late already when Arthur C. Clarke wrote "2001: A Space Odyssey"). Not talking about the dangers of uncontrollable AI for fear that this may lead to certain actors investing even more heavily in the field is both naive and counterproductive in my view.
And I would strongly recommend not publishing your book as long as you haven't done that.
I will definitely publish it, but I doubt very much that it will have a large impact. There are many other writers out there with a much larger audience who write similar books.
I also hope that a lot of people who have thought about these issues have proofread your book because it's the kind of thing that could really increase P(doom) substantially.
I'm currently in the process of translating it to English so I can do just that. I'll send you a link as soon as I'm finished. I'll also invite everyone else in the AI safety community (I'm probably going to post an invite on LessWrong).
Concerning the Putin quote, I don't think that Russia is at the forefront of development, but China certainly is. Xi has said similar things in public, and I doubt very much that we know how much they currently spend on training their AIs. The quotes are not relevant, though; I just mentioned them to make the point that there is already a lot of discussion about the enormous impact AI will have on our future. I really can't see how discussing the risks should be damaging, while discussing the great potential of AGI for humanity should not.
simeon_c @ 2022-12-27T23:53 (+3)
"Nobody cared about" LLMs is certainly not true - I'm pretty sure the relevant people watched them closely.
What do you mean by "the relevant people"? I would love for us to talk about specifics here and operationalize what we mean. I'm pretty sure E. Macron hasn't thought deeply about AGI (i.e. has never thought for more than 1h about timelines), and I'm at 50% that if he had any deep understanding of the changes it will bring, he would already be racing. Likewise for Israel, a country with a strong track record of becoming a leader in technologies that are crucial for defense.
That many people aren't concerned about AGI or doubt its feasibility by now only means that THOSE people will not pursue it, and any public discussion will probably not change their minds.
I think here you wrongly assume that people have even understood the implications of AGI and that they can't update at all once the first systems start being deployed. The situation where what you say could be true is if most of your arguments hold because of ChatGPT. I think it's quite plausible that since ChatGPT, and probably even more in 2023, there will be deployments that make mostly everyone that matters aware of AGI. I don't have a good sense yet of how policymakers have updated.
Already there are many alarming posts and articles out there, as well as books like Stuart Russell's "Human Compatible" (which I think is very good and helpful), so keeping the lid on the possibility of AGI and its profound impacts is way too late
Yeah, I realize thanks to this part that a lot of the debate should happen on specifics rather than at a high level, as we're doing here. Thus, chatting about your book in particular will be helpful for that.
I'm currently in the process of translating it to English so I can do just that. I'll send you a link as soon as I'm finished. I'll also invite everyone else in the AI safety community (I'm probably going to post an invite on LessWrong).
Great! Thanks for doing that!
while discussing the great potential of AGI for humanity should not.
FYI, I don't think that's true.
Regarding all our discussion, I realized I didn't mention a fairly important argument: a major failure mode specific to publicizing risks is the following reaction from ~any country: "Omg, China is developing bad AGIs, so let's develop safe AGIs first!".
This can happen in two ways:
- Misuse as the mainline scenario that people are envisioning. Basically, if you're mostly concerned about misuse, racing to be the first to have the AGI makes sense. And because misuse is way easier to understand than accidental risk, I expect this to be ~the default.
- Overestimating one's competence. Even if you believed in AGI accidental X-risks, you could still race, thinking that you're better than the others, and that could increase the chances of X-risk.
Thanks a lot for engaging with my arguments. I still think that you're substantially overconfident about the positive aspects of communicating AGI X-risks to the general public, but I appreciate the fact that you took the time to consider and respond to my arguments.