Great Power Conflict

By Zach Stein-Perlman @ 2021-09-15T15:00 (+11)

No longer endorsed.

Imagine it's 2030 or 2040 and there's a catastrophic great power conflict. What caused it? Probably AI and emerging technology, directly or indirectly. But how?

I've found almost nothing written on this. In particular, the 80K and EA Forum pages on this topic don't seem to link to relevant work. If you know of work on how AI might cause great power conflict, please let me know. For now, I'll start brainstorming. Specifically:

  1. How could great power conflict affect the long-term future? (I am very uncertain.)
  2. What could cause great power conflict? (I list some possible scenarios.[1])
  3. What factors increase the risk of those scenarios? (I list some plausible factors.)

Epistemic status: brainstorm; not sure about framing or details.

 

I. Effects

Alternative formulations are encouraged; thinking about risks from different perspectives can help highlight different aspects of those risks. But here's how I think of this risk:

Emerging technology enables one or more powerful actors (presumably states) to produce civilization-devastating harms, and they do so (either because they are incentivized to or because their decisionmaking processes fail to respond to their incentives).[2]

Significant (in expectation) effects of great power conflict on the long-term future include:

Human extinction would be bad. Civilizational collapse would be prima facie bad, but its long-term consequences are very unclear. Effects on relative power are difficult to evaluate in advance. Overall, the long-term consequences of great power conflict are difficult to evaluate because it is unclear what technological progress and AI safety look like in a post-collapse world or in a post-conflict, no-collapse world.

Current military capabilities don't seem to pose a direct existential risk. More concerning for the long-term future are future military technologies and side effects of conflict, such as on AI development.

 

II. Causes

How could AI and the technology it enables lead to great power conflict? Here are the scenarios that I imagine, for great powers called "Albania" and "Botswana":

 

III. Risk factors

Great power conflict is generally bad, and we can list high-level scenarios to avoid, such as those in the previous section. But what can we do more specifically to prevent great power conflict?

Off the top of my head, risk factors for the above scenarios include:

It also matters what and how regular people and political elites think about AI and emerging technology. Spreading better memes may be generally more tractable than reducing the risk factors above, because it's pulling the rope sideways, although the benefits of better memes are limited.

 

Finally, the same forces from emerging technology, international relations, and beliefs and modes of thinking about AI that affect great power conflict will also affect:

Interventions affecting the probability and nature of great power conflict will also have implications for these variables.

 

Please comment on what should be added or changed, and please alert me to any relevant sources you've found useful. Thanks!


  1. My analysis is abstract. Consideration of more specific factors, such as what conflict might look like between specific states or involving specific technologies, is also valuable but is not my goal here. ↩︎

  2. Adapted from Nick Bostrom's Vulnerable World Hypothesis, section "Type-2a." My definition includes scenarios in which a single actor chooses to devastate civilization; while this may not technically be great power conflict, I believe it is sufficiently similar that its inclusion is analytically prudent. ↩︎

  3. Eliezer Yudkowsky's Cognitive Biases Potentially Affecting Judgment of Global Risks. ↩︎

  4. Future weapons will likely be on hair trigger for the same reasons that nukes have been: swifter second-strike capabilities could help states counterattack, and thus defend themselves better, in some circumstances; a hair-trigger posture makes others less likely to attack, since the posture is somewhat transparent; and there is emotional/psychological/political pressure to take attackers down with us. ↩︎

  5. Currently the world doesn't include large, powerful groups, coordinated at the state level, that totally despise and want to destroy each other. If it ever does, devastation occurs by default. ↩︎

  6. Another potential desideratum is differential technological progress. Avoiding military development is infeasible to do unilaterally, but perhaps we can avoid some particularly dangerous capabilities or do multilateral arms control. Unfortunately, this is unlikely: avoiding certain technologies is costly because you don't know what you'll find, and effective multilateral arms control is really hard. ↩︎


lukeprog @ 2021-09-17T20:01 (+15)

If you know of work on how AI might cause great power conflict, please let me know

Phrases to look for include "accidental escalation" or "inadvertent escalation" or "strategic stability," along with "AI" or "machine learning." Michael Horowitz and Paul Scharre have both written a fair bit on this, e.g. here.

Zach Stein-Perlman @ 2021-09-17T20:03 (+1)

Thank you!

Peter Wildeford @ 2021-09-16T23:27 (+4)

Even without new technological development, why couldn't there be a great power war over a classic flashpoint, like what caused past wars? It seems like a war over disputed territories in the seas near China, or disputed territories between India and Pakistan, could plausibly become a great power war.

Zach Stein-Perlman @ 2021-09-17T00:00 (+1)

It's certainly possible, and I think such analysis is valuable. It's just not my comparative advantage and not so neglected (I think). Also, I think we don't lose much analytically by separating foreseeable causes of great power conflict into two distinct categories:

  1. Conflict due to specific factors that we recognize as important today (e.g., US-China tension and India-Pakistan tension and their underlying causes)
  2. Conflict due to more general forces and phenomena (and due to my empirical beliefs, I think emerging-technology-related forces are relatively likely to cause conflict)

This post aims to start a conversation on 2 — or get people to direct me to previous work on 2.

Also, to explain my focus: I would be surprised by major conflict for normal reasons by 2040, but not surprised by major conflict because the world is going crazy by 2040. But I didn't justify this. I should have mentioned my exclusion of major conflict for normal reasons in my post; thanks for your comment.

MaxRa @ 2021-09-16T06:18 (+3)

Thucydides's Trap by Graham Allison features a scenario of escalating conflict between the US and China in the South China Sea that I found very chilling. IIRC the scenario is just as you mentioned: each side making moves that are legitimate from its own perspective, protecting dearly held interests and drawing lines in the sand, and the outcome is escalation to war. The underlying theme is the dynamics of conflict when a reigning power is challenged by a rising power. You probably saw the book mentioned; I found it very worth reading.

And you didn't mention cyber warfare, which is what pops into my mind immediately. I haven't looked into this, but I imagine that the potential damage is very high, while proper international peace-supporting and de-escalating norms lag much further behind than they do for physical conflicts.

Zach Stein-Perlman @ 2021-09-17T00:30 (+4)

Thanks for your comment. US-China tension currently seems most likely to me to cause great power conflict, and cyber capabilities were mostly what I had in mind for "offense outpaces defense" scenarios. I think this post is more valuable if it's more general, though, and I don't know enough about US-China, cyber capabilities, or warfare to say much more specifically.

I think understanding possible futures of cyber capabilities would be quite valuable. I would not be surprised to look back in 2030 or 2040 and say:

Civilization was just devastated by cyberattacks. In retrospect, it should have been obvious — or rather, the rest of us should have listened to those who were sounding the alarm. Since the 2000s, it's been clear that offense is easy and defense is hard. Since the 2010s, great powers have had the capability to devastate one another's cities with cyberattacks. In the last few years, offensive capabilities strengthened and proliferated. Then dozens of agents had the capability to cause countless explosions, destroy infrastructure, and take down electric grids almost everywhere, and it was only a matter of time until unilateral or multilateral forces led one to do it.

But again, such work is not my comparative advantage (and, as a disclaimer for the above paragraph, I don't know what I'm talking about).

Denkenberger @ 2021-09-22T06:44 (+2)

From the same reference: in 12 out of 16 cases in which the most militarily powerful country in the world has been overtaken, there has been war (though one should not take that literally for the current situation). China will likely become the most powerful (economically at least) in the next few decades, unless the US allows a lot more immigration.