Research project idea: How bad would the worst plausible nuclear conflict scenarios be?

By MichaelA🔸 @ 2023-04-15T14:50 (+16)

This post is part of a series of rough posts on nuclear risk research ideas. I strongly recommend that, before you read this post, you read the series’ summary & introduction post for context, caveats, and to see the list of other ideas. One caveat that’s especially worth flagging here is that I drafted this in late 2021 and haven’t updated it much since. I’m grateful to Will Aldred for help with this series.

One reason I'm publishing this now is to serve as one menu of research project ideas for upcoming summer research fellowships.

Some tentative bottom-line views about this project idea

Importance: Medium/Low
Tractability: Medium
Neglectedness: Medium/Low
Outsourceability: Low

What is this idea? Why might this research be useful? How could it be tackled?

This project would essentially involve working out what the worst “plausible”[1] scenarios would be and how much those would increase existential risk if they occurred. It could focus either on scenarios that could happen this year (e.g., the use of 1000 warheads on urban areas), or scenarios that could only happen if various developments occur in future (e.g., the use of more warheads, higher total yields, or more concerning types of weapons than are currently possessed; see Aird & Aldred, 2023), or both.

One rationale for this project idea is that perhaps it would suggest that even the worst plausible scenarios would increase existential risk so little in expectation that we could roughly “rule out” nuclear risk as a longtermist priority.[2] Conversely, if the project turns out to suggest that the worst plausible scenarios would increase existential risk a decent amount in expectation, or simply that we should be quite uncertain about that question, this could update some longtermists in favour of nuclear risk being a longtermist priority (if those longtermists are currently overly confident that the worst plausible scenarios would hardly increase existential risk).

Tackling this project could involve:

  1. Reading previous work and talking to people who’ve thought a lot about nuclear risk, especially work and people that are unusually pessimistic or concerned, to get a sense of:
    • what scenarios they’re more concerned about
    • what variables seem particularly relevant to how concerning those scenarios would be (e.g., the number and yield of warheads used)
    • what estimates for those variables these works or people see as best guesses or as plausible worst cases
    • what these works or people see as the precise pathways by which these scenarios would increase existential risk
  2. Doing some independent thinking and investigation on those points
  3. Getting a sense of how much those scenarios would increase existential risk if they occurred, such as through:
    • constructing Fermi estimates
    • constructing more careful quantitative models (a rough sketch of what a first-pass model might look like is shown just after this list)
    • interviewing or surveying a broader range of experts (not just the especially pessimistic/concerned ones) to see how they react when presented with clear, thorough descriptions of the scenarios and the purported pathways by which they could increase existential risk
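
To make the Fermi estimate and quantitative modelling bullets above more concrete, here is a minimal sketch of the kind of Monte Carlo model one might start from. Every distribution and parameter below is a placeholder I've made up purely for illustration, not an estimate I'd endorse; a real version would replace them with values elicited from the literature and from experts.

```python
"""Toy Monte Carlo sketch: how much might one 'worst plausible' nuclear scenario
increase existential risk? All inputs are made-up placeholders for illustration."""
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo samples

# Placeholder inputs describing the scenario:
warheads_on_cities = rng.integers(800, 2000, size=N)            # warheads detonated over urban areas
soot_per_warhead_tg = rng.lognormal(np.log(0.04), 0.5, size=N)  # Tg of soot lofted per warhead
soot_tg = warheads_on_cities * soot_per_warhead_tg

# Placeholder guesses for the pathway from climate effects to existential catastrophe:
p_collapse = np.clip(soot_tg / 300, 0, 1)   # crude guess: chance of civilisational collapse
p_no_recovery = rng.beta(1, 20, size=N)     # chance that collapse is never recovered from

# Increase in existential risk attributable to the scenario, per sample:
delta_xrisk = p_collapse * p_no_recovery

print(f"Mean increase in existential risk: {delta_xrisk.mean():.4f}")
print(f"5th-95th percentile: {np.quantile(delta_xrisk, [0.05, 0.95])}")
```

A more careful version would also report how sensitive the bottom line is to each input, since that indicates which variables are most worth pinning down through further research or expert elicitation.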

One could couple this with considering how likely these scenarios, estimates, and pathways seem. Or, to save time, one could ignore that and focus only on whether or not each thing is “plausible”.

One worry I have about this project idea is that the project could cause some readers to be too unconcerned about nuclear existential risk, if (a) the project ultimately suggests that the risk is low according to the models developed and (b) readers ignore or later forget how uncertain those models are. Conversely, I also worry that the project could cause some readers to be too concerned, because (a) this analysis wouldn’t be paired with an analysis of the best or least bad plausible scenarios, and (b) readers may overlook that many individually plausible estimates may be extremely unlikely to coincide (just as it’s extremely unlikely that 10 out of 10 die rolls would all land on a one). I think both of those potential problems could be mitigated by clearly flagging various uncertainties, caveats, etc., but I’d still expect the problems to occur to some extent.
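
To make point (b) concrete, the arithmetic below (with made-up numbers) shows how quickly the joint probability shrinks if several independently “plausible” worst-case parameter values all have to hold at once, alongside the die-roll comparison from the paragraph above.

```python
# Made-up illustration of how individually plausible worst cases combine.
# Suppose "plausible" means roughly a 10% chance, and five roughly independent
# parameters (e.g. warheads used, yields, targeting, soot lofted, climate response)
# each need to be at their worst plausible value for the scenario to obtain:
p_each = 0.10
n_params = 5
print(p_each ** n_params)   # ~1e-05, i.e. roughly a 0.001% chance of all five coinciding

# The die-roll comparison: ten consecutive ones on a fair six-sided die.
print((1 / 6) ** 10)        # roughly 1.7e-08
```

In reality these parameters would be correlated (e.g., via which conflict scenario is unfolding), so the true joint probability could be considerably higher than the independence calculation suggests; the point is just that stacking several worst cases deserves an explicit probability discount.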

What sort of person might be a good fit for this?

I expect any good generalist researcher could provide a useful analysis of these questions. I expect someone to be a stronger fit the more they already know about various things relevant to nuclear risk (since this project would ideally address a diverse array of variables and pathways) and the more experience they have with modelling, forecasting, literature reviews, and expert elicitation.

Some relevant previous work

Other people to consider talking to

  1. ^

     To conduct this project and communicate its results, it would be important to decide what lower bound of plausibility one is focusing on. For example, at least a 5% chance, 1% chance, or 0.1% chance? And is that before or after conditioning on any nuclear conflict having occurred?
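
     As a small illustration of why the conditioning question matters (using a made-up figure for the chance of any nuclear conflict):

     ```python
     # Made-up numbers, purely to show how the plausibility bound shifts with conditioning.
     p_any_nuclear_conflict = 0.10   # hypothetical chance of any nuclear conflict in the relevant period
     plausibility_bound = 0.01       # "plausible" = at least a 1% chance, say

     # If that bound is meant unconditionally, the corresponding bound conditional on
     # some nuclear conflict occurring is higher:
     print(f"{plausibility_bound / p_any_nuclear_conflict:.2f}")   # 0.10, i.e. a 10% conditional chance
     ```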

  2. ^

     That said, it seems very unlikely to me that a project of this sort, requiring less than a year, would update us towards thinking nuclear risk shouldn’t be in at least the top 25 broad priority areas for longtermists. This is partly because, even if we did end up with a strong “inside-view” case that the worst plausible scenarios would hardly increase existential risk in expectation, I think substantial model uncertainty and “outside-view” cause for concern would remain.

     See also Beckstead (2015). The project idea on “Risks of nuclear war triggering other catastrophes, scarring of humanity's values, or similar” is also relevant.