Intermediate goals for reducing risks from nuclear weapons: A shallow review (part 1/4)

By MichaelA🔸, Will Aldred @ 2023-05-01T15:04 (+35)

This is a linkpost to https://docs.google.com/document/d/1hwFCLtEbkCmi3p9nuHYIe7YgRcOazLes0X7ZFr3B990/edit#

This is a blog post, not a research report, meaning it was produced relatively quickly and is not up to Rethink Priorities' typical standards of substantiveness and careful checking for accuracy.

Summary

What is this post? 

This post is the first part of what was intended to be a shallow review of potential “intermediate goals”[1] one could pursue in order to reduce nuclear risk (focusing especially on the contribution of nuclear weapons to existential risk). The full review would’ve broken intermediate goals down into:

  1. goals aimed at reducing the odds of nuclear conflict or other non-test nuclear detonations
  2. goals aimed at changing how nuclear conflict plays out if it does occur (in a way that reduces its harms)
  3. goals aimed at improving resilience to or ability to recover from the harms of nuclear conflict
  4. goals that are cross-cutting, focused on field-building, or otherwise have indirect effects 

This first part of the shallow review focuses just on the first of those categories: goals aimed at reducing the odds of nuclear conflict or other non-test nuclear detonations. We tentatively think that, on the margin, this is the least promising of those four categories of goals[2], but that there are still some promising interventions in this category. 

Within this category, we review multiple potential goals. For most of those goals, we briefly discuss what effect progress on the goal would have on nuclear risk, how easy it would be to make progress, what resources are most needed for that progress, and what key effects the goal might have on things other than nuclear risk.

Note that, due to time constraints, this post is much less comprehensive and thoroughly researched and reviewed than we’d like. 

The intermediate goals we considered, and our tentative bottom line beliefs on them

This post and the table below break down high-level goals into increasingly granular goals, and share our current best guesses on the relatively granular ones. Many goals could be pursued for multiple reasons and could hence appear in multiple places in this table, but we generally show each goal only in the first relevant place. This means that, in some cases, much of the benefit of a given goal may be for higher-level goals it isn't shown as nested under.

We unfortunately did even less research on the goals listed from 1.1.2.6 onwards than on the earlier ones, and the bottom-line views for that later set are mostly Will's especially tentative personal views.

| | Potential intermediate goal | What effect would progress on this goal have on nuclear risk? | How easy would it be to make progress on this goal? | What resources are most needed for progress on this goal? | Key effects this goal might have on things other than nuclear risk? |
|---|---|---|---|---|---|
| 1.1 | Reduce the odds of nuclear conflict that's preceded by non-nuclear armed conflict | | | | |
| 1.1.1 | Reduce the odds of (initially non-nuclear) armed conflicts involving at least one nuclear-armed state | | | | |
| 1.1.1.1 | Reduce the odds of armed conflict in general | Moderate reduction in risk | Hard | Unsure. | Fewer near-term harms from non-nuclear conflict. Less development & deployment of dangerous non-nuclear technologies? Global governance would then be easier? See also Tse (2018) and Koehler (2020). |
| 1.1.1.2 | Reduce the odds of (initially non-nuclear) armed conflict, with a focus on those involving at least one nuclear-armed state | Major reduction in risk | Hard | Unsure. | Similar to "1.1.1.1: Reduce the odds of armed conflict in general". |
| 1.1.1.3 | Reduce proliferation | Moderate reduction in risk | Hard | Advocacy from people with strong political connections. Labor from people suited to being diplomats or similar. | More non-nuclear conflict? Or maybe less? |
| 1.1.1.4 | Promote complete nuclear disarmament | Major reduction in risk | Almost impossible to fully achieve this unless the world changes radically (e.g., a world government or transformative artificial intelligence is created, or a great power war occurs) | Advocacy from people with strong political connections? Labor from people suited to being diplomats or similar? | More non-nuclear conflict. More development & deployment of dangerous non-nuclear technologies? (Since those could fill a similar role to the one nuclear weapons currently fill, and since such development and deployment may currently be deterrable via nuclear threats.) |
| 1.1.2 | Reduce the odds of escalation from a non-nuclear armed conflict to a nuclear conflict | | | | |
| 1.1.2.1 | Promote no first use (NFU) policies/pledges | Small reduction in risk | Moderately hard | No clear front-runner (i.e., each resource type is approximately equally needed). | Increase the chance that allies of states adopting NFU policies suffer conventional attacks or have to make concessions to avoid that? |
| 1.1.2.2 | Promote other policies/pledges limiting the scenarios under which nuclear weapons would be used | Small reduction in risk | Not very hard | No clear front-runner (i.e., each resource type is approximately equally needed). | |
| 1.1.2.3 | Remove sole nuclear launch authority (or prevent states from adopting it) | Small reduction in risk | Moderately hard | No clear front-runner (i.e., each resource type is approximately equally needed). | |
| 1.1.2.4 | Reduce nuclear entanglement | Small reduction in risk | Moderately hard | Unsure. | |
| 1.1.2.5 | Reduce the odds that, given a non-nuclear conflict, there'd be one or more attacks perceived as attacks against a state's nuclear forces/capabilities | Moderate reduction in risk | Moderately hard | Unsure. | |
| 1.1.2.6 | Increase decision time | | | | |
| 1.1.2.6.i | Promote de-alerting and/or prevent increases in alert levels | Moderate reduction in risk | Moderately hard | Advocacy from people with strong political connections. | |
| 1.1.2.6.ii | Eliminate short-range nuclear weapons near tense borders | Moderate reduction in risk | Moderately hard | Advocacy from people with strong political connections. Labor from people suited to being diplomats or similar. | |
| 1.1.2.6.iii | Improve early warning systems | Moderate reduction in risk | Somewhat hard | Unsure. | |
| 1.1.2.7 | Move away from escalate-to-deescalate or similar | Major reduction in risk | Very hard | Labor from people suited to being diplomats or similar? | |
| 1.1.2.8 | Eliminate or reduce (or avoid increasing) numbers of silo-based ICBMs, and possibly SLBMs | Major reduction in risk | Very hard | Advocacy from people with strong political connections. Labor from people suited to being diplomats or similar. | |
| 1.1.2.9 | Decrease (or reduce the chance of an increase in) the number of warheads per missile, probably just for ICBMs | Major reduction in risk | Moderately hard | Advocacy from people with strong political connections? Labor from people suited to being diplomats or similar? | |
| 1.1.2.10 | Reduce the number, prominence, or likelihood of usage of tactical nuclear weapons | Moderate reduction in risk | Moderately hard | Advocacy from people with strong political connections. Labor from people suited to being diplomats or similar. | More non-nuclear conflict? |
| 1.1.2.11 | Store strategic and nonstrategic (i.e., tactical) warheads separately | [We ran out of time] | [We ran out of time] | Advocacy from people with strong political connections. Labor from people suited to being diplomats or similar. | |
| 1.1.2.12 | Prevent the deployment of the long range standoff (LRSO) air launched cruise missile | Moderate reduction in risk | Moderately hard | Advocacy from people with strong political connections. | |
| 1.1.2.13 | Prevent reckless/unilateral development/deployment of missile defense systems | Small reduction in risk? | [We ran out of time] | Advocacy from people with strong political connections? Labor from people suited to being diplomats or similar? | More non-nuclear conflict? Or maybe less? |
| 1.1.2.14 | Increase or decrease the accuracy of nuclear weapons | Moderate reduction in risk | Moderately hard | No clear front-runner (i.e., each resource type is approximately equally needed). | |
| 1.1.2.15 | Reduce heads of states' use of inflammatory rhetoric | Small reduction in risk? | Very hard? | Advocacy from people with strong connections to heads of state? | Less non-nuclear conflict? Or maybe more? |
| 1.1.2.16 | Reduce nuclear accident odds | Major reduction in risk | Moderately hard | Advocacy from people with strong political connections? | |
| 1.1.2.17 | Reduce odds of inadvertent nuclear strikes | Moderate reduction in risk | Moderately hard | Advocacy from people with strong political connections. Labor from people suited to being diplomats or similar. | |
| 1.1.2.18 | Reduce odds of nuclear (or non-nuclear) terrorism | Small reduction in risk | Somewhat hard | Advocacy from people with strong political connections? | |
| 1.1.2.19 | Reduce odds that nuclear (or non-nuclear) accidents or terrorism leads to nuclear attacks by states | Small reduction in risk | Moderately hard | Labor from people suited to being diplomats or similar. | |
| 1.1.2.20 | Establish hotlines between all nuclear powers | Moderate reduction in risk | Moderately hard | Labor from people suited to being diplomats or similar. | Less non-nuclear conflict? |

1.2 Reduce the odds of nuclear conflict that's not preceded by non-nuclear armed conflict

The specific goals that could help with 1.2 are almost entirely the same as those under "1.1: Reduce the odds of nuclear conflict that's preceded by non-nuclear armed conflict", with no additions. This means most of the goals mentioned under 1.1 could also have benefits for reducing the odds of nuclear conflict that's not preceded by non-nuclear armed conflict.

Some of these goals could also help with 1.3 (below). 

 
| | Potential intermediate goal | What effect would progress on this goal have on nuclear risk? | How easy would it be to make progress on this goal? | What resources are most needed for progress on this goal? | Key effects this goal might have on things other than nuclear risk? |
|---|---|---|---|---|---|
| 1.3 | Reduce the odds of a non-test nuclear detonation that's not intentional or not by a state | | | | |
| 1.3.1 | Promote partial nuclear disarmament | Moderate reduction in risk | Hard | Advocacy from people with strong political connections. Labor from people suited to being diplomats or similar. | Slightly more non-nuclear conflict? |
| 1.3.2 | Reduce the odds or size of increases in numbers of nuclear weapons in already-nuclear-armed states | Moderate or major reduction in risk? | Moderately hard? (This goal seems to me more neglected than partial disarmament, complete disarmament, and non-proliferation, which may mean there are more low-hanging fruit here.) | Advocacy from people with strong political connections? Labor from people suited to being diplomats or similar? | Slightly more non-nuclear conflict? |

Some high-level takeaways and observations

0. Introduction & why this post may be useful   

Nine countries possess a total of roughly 13,000 nuclear weapons (FAS, 2021). The chance of nuclear conflict by 2050 appears to be around 25%.[9] If nuclear conflict does occur, it might involve the targeting of cities, the use of hundreds or thousands of warheads, immediate fatalities in the tens of millions or higher, and/or severe climate effects resulting in billions of deaths due to famine.[10] It seems unlikely that even such severe consequences would cause an existential catastrophe, but the odds of such a catastrophe still seem uncomfortably high, and seem sufficient to make nuclear war one of the largest sources of existential risk.[11]

But what can be done about this? What “intermediate goals” could a person support in order to reduce nuclear risk? By what mechanisms and to what extent would progress on each goal reduce - or accidentally increase - nuclear risk? What interventions are available for supporting each intermediate goal, what types of resources do they most need, and how much progress should be expected if a given level of those resources is devoted to this goal?

This post is intended to help people think about those questions, which seems crucial when a person is considering (1) whether to focus on reducing nuclear risk or instead on other issues or (2) what to focus on within the broad area of reducing nuclear risk. Here are some ways readers could use this post:

Before proceeding, some caveats are in order:     

Finally, a few points that may be useful to bear in mind:

Epistemic status

In 2021, Michael did some initial research for this post and wrote an outline and rough notes. But he pivoted away from nuclear risk research before having time to properly research and draft this. We (Michael and Will) finished a rough version of this post in 2022, since that seemed better than it never being published at all, but then didn’t get around to publishing until 2023. As such, this is just a very incomplete starting point, may contain errors, and may be outdated (e.g., it was mostly written before the Russian invasion of Ukraine).

This shallow review gets decreasingly detailed and carefully researched as it proceeds, because Michael worked on it largely from top to bottom and then ran into time constraints. This doesn't indicate that the goals that come earlier in the review are more promising or have more possible subgoals.

How to engage with this post

The full post can be found here. It contains many extensive quotes without added commentary from us, some of which are relevant to multiple sections and hence repeated. There’s a section for each potential goal/subgoal, and each section should make sense by itself. So readers should feel free to just skim, only look at the sections that are of interest to them, and skip repeated quotes.

Acknowledgements

Michael’s work on this post was supported by Rethink Priorities, though he ended up pivoting to other topics before having time to get this up to RP's usual standards. Will helped with the research and editing in a personal capacity. We’re grateful to Eva Siegmann, Matthew Gentzel, and Peter Wildeford for helpful feedback on earlier drafts. Mistakes are our/Michael’s own.

If you are interested in RP's work, please visit our research database and subscribe to our newsletter.

Click here to see the full post.

  1. ^

    By an intermediate goal, we mean a goal that (1) is more specific and directly actionable than a goal like “reduce nuclear risk”, (2) is of interest because advancing it might be one way to advance a higher-level goal like that, but (3) is less specific and directly actionable than a particular intervention (e.g., “advocate for the US and Russia to renew the INF Treaty”).

    I adopted the term "intermediate goal" from Muehlhauser (2020, 2021), who doesn't provide a definition but does give examples that illustrate the concept. The definition proposed here is my own.

    Karnofsky (2022) also discusses a similar concept:

    How much should one value “transformative AI is first developed in country A” vs. “transformative AI is first developed in country B”, or “transformative AI is first developed by company A vs. company B”, or “transformative AI is developed 5 years sooner/later than it would have been otherwise?”

    If we were ready to make a bet on any particular intermediate outcome in this category being significantly net positive for the expected value of the long-run future, this could unlock a major push toward making that outcome more likely. I’d guess that many of these sorts of “intermediate outcomes” are such that one could spend billions of dollars productively toward increasing the odds of achieving them, but first one would want to feel that doing so was at least a somewhat robustly good bet.

  2. ^

    See also Philanthropy to the Right of Boom. Unfortunately we ran out of time to properly write up thoughts on the other categories of goals. Our very rough and incomplete notes on them can be found here.

  3. ^

    The area of nuclear risk seems more prone than many other areas to harmful and non-obvious results from well-intentioned and reasonable-sounding goals or interventions. This is due in large part to the important role deterrence plays. For example, limiting the likely scale and harms of nuclear war could weaken deterrence and might therefore make nuclear war more likely.

  4. ^

    In reality, “progress” comes in degrees rather than being a binary variable, which makes our statements about the effects and difficulty of “making progress” less meaningful and more fuzzy. A smaller issue is that the usefulness, harmfulness, resources required, and tractability of making progress on a goal could differ depending on how much progress has already been made (e.g., it could be easy and highly impactful to make initial progress, but hard and/or net-negative to get close to “fully achieving” the goal). 

  5. ^

    For this post, we divide “resources” and “support” into the following types: 

    - Funding

    - Labor from people with particular skills (e.g., people suited to being diplomats)

    - Advocacy from people with large followings (e.g., celebrities encouraging the public to contact politicians about some policy)

    - Advocacy from people with strong political connections (e.g., people who would be listened to by senior US or Russian officials)

    Of course, in reality, one could also subdivide those types or consider other types of resources. 

  6. ^

    By this we mean how easily additional resources could lead to additional progress, relative to what would be the case otherwise. So, in terms of the ITN framework, this accounts for both tractability and neglectedness.

  7. ^

    Really, all of this would depend a lot on the specific version of the goal one pursues (e.g., which states are focused on), the specific interventions one supports, the precise resources one can devote to the goal, etc.

  8. ^

    Unfortunately this is based mostly on reasoning that we haven’t written up, rather than what we captured in this post. But for some relevant reasoning, you could see Philanthropy to the Right of Boom and/or our very rough draft of parts 2-4/4 of this shallow review.

  9. ^

    This is because the annual chance of nuclear conflict appears to be in the ballpark of 1%, and we would estimate the same annual chance for each year of the coming decades (because we're very unsure whether it'll rise, fall, or remain constant). Given that, the chance that there will be at least one nuclear conflict by 2050 would be 1 - 0.99^28 ≈ 24.5%.

    These estimates are informed by or align with the lines of evidence collected by Rodriguez (2019) and forecasts on the following set of Metaculus questions: 1, 2, 3, 4, 5. Caveats include that many of those pieces of evidence are about outcomes more specific than "nuclear conflict" (suggesting they would be underestimates), that the different pieces of evidence suggest somewhat different chances, and that each piece of evidence has notable limitations. But they collectively suggest that the annual chance is quite likely to be somewhere between 0.3% and 3%; the sketch below illustrates how those annual figures translate into chances by 2050.

    See also Michael’s draft Database of nuclear risk estimates [draft]

  10. ^
  11. ^

    See Michael’s draft Database of nuclear risk estimates and (to compare with other risks) Michael’s earlier Database of existential risk estimates (or similar).

    Additional relevant commentary can be found in Aird (2020), Beckstead (2015), Ladish (2020), Rodriguez (2019), and Rodriguez (2020).

  12. ^

    We think pieces that discuss only a prioritized set of goals, are more comprehensive at the level of interventions, or go into depth on some particular goals or interventions are also useful and complementary to this post. Pieces of those types which we appreciated - and which we've drawn on for this post - include Open Philanthropy (2015) and Nuclear Threat Initiative (2020).