8 possible high-level goals for work on nuclear risk

By MichaelA🔸 @ 2022-03-29T06:30 (+46)

Summary

For people aiming to do the most good they can, what are the possible high-level goals for working on risks posed by nuclear weapons? Answers to this question could inform how much to invest in the nuclear risk space, what cruxes[1] we should investigate to determine how much and in what ways to work in this space, and what specific work we should do in this space.

I see eight main candidate high-level goals, in three categories:

  1. Longtermist & nuclear-focused: Reducing nuclear risk’s contribution to long-term future harms
    1. Direct: Reducing relatively direct, foreseeable paths from nuclear risk to long-term harms
    2. Indirect: Reducing more indirect/vague/hard-to-foresee paths from nuclear risk to long-term harms
  2. Longtermist & not nuclear-focused: Gaining indirect benefits for other EA/longtermist goals
    1. Career capital: Individuals building their career capital (knowledge, skills, credibility, and connections) to help them later work on other topics
    2. Movement strengthening: Building movement-level knowledge, credibility, connections, etc. that pay off for work on other topics
    3. Translatable knowledge: Developing research outputs and knowledge that are directly useful for other topics
    4. Movement growth: Improving the EA movement’s recruitment and retention (either narrowly - i.e. among expert communities - or broadly) by being seen to care about nuclear risks and/or by not being seen as dismissive of nuclear risks
    5. Epistemic hygiene: Improving EAs’[2] “epistemic hygiene” by correcting/supplanting flawed EA work/views
  3. Neartermist & nuclear-focused: Reducing neartermist harms from nuclear weapons

I expect we should put nontrivial weight on each of those high-level goals. But my current, pretty unstable view is that I’d prioritize them in the following rank order: 

  1. longtermist & nuclear-focused, both direct and indirect
  2. career capital and movement strengthening
  3. translatable knowledge and movement growth (especially the “narrow version” of the movement growth goal)
  4. neartermist & nuclear-focused
  5. epistemic hygiene

(Note: Each of the sections below should make sense by itself, so feel free to read only those that are of interest.)

Why did I write this?

With respect to the nuclear risk space, I think people in the effective altruism community are currently unsure about even what very high-level goals / theories of change / rationales we should focus on (let alone what intermediate goals, strategies, and policies to pursue). More specifically, I think that: 

I felt that it would be useful to collect, distinguish between, and flesh out some possible high-level goals, as a step toward gaining more clarity on:[3]

  1. How much to invest in the nuclear risk space
    • E.g., if we’re mostly focused on using nuclear risk as a “training ground” for governance of other risky technologies, perhaps we should also or instead focus on cybersecurity or other international relations topics?
  2. What cruxes we should investigate to determine how much and in what ways to work in this space
    • E.g., is the crux how likely it is that nuclear winter could cause existential catastrophe and how best to prevent such extreme scenarios, or whether and how we can substantially and visibly reduce the chance of nuclear war in general?
  3. What specific work we should do in this space

Epistemic status

I drafted this post in ~3 hours in late 2021. In early 2022, Will Aldred (a collaborator) and I spent a few hours editing it.[4] I intend this as basically just a starting point; perhaps other goals could be added, and certainly more could be said about the implications of, and the arguments for and against, focusing on each of these goals. 

I expect most of what this post says will be relatively obvious to some readers, but not all of it will be obvious to all readers, and having it actually written down seems useful. 

Please let me know if you’re aware of existing writings on roughly this topic!

1. Reducing nuclear risk’s contribution to long-term future harms

(Meaning both existential catastrophes and other negative trajectory changes.)

1a. Reducing relatively direct, foreseeable paths from nuclear risk to long-term harms

An unusual subtype of this goal: Reducing the risk of nuclear weapons detonations/threats being used as a tool that helps enable AI takeover

1b. Reducing more indirect/vague/hard-to-foresee paths from nuclear risk to long-term harms

2. Gaining indirect benefits for other EA/longtermist goals

General thoughts on this category

2a. Individuals building their career capital to help them later work on other topics 

2b. Building movement-level knowledge, credibility, connections, etc., that pay off for work on other topics

2c. Developing research outputs and knowledge that are directly useful for other topics 

2d. Improving the EA movement’s recruitment and retention

General thoughts on this goal

Narrow version

Broad version

2e. Improving EAs’ “epistemic hygiene” by correcting/supplanting flawed EA work/views

3. Reducing neartermist harms from nuclear weapons

Conclusion

I try to keep my bottom lines up front, so please just see the Summary and “Why did I write this?”!

Acknowledgements 

My work on this post was supported by Rethink Priorities. However, I pivoted away from nuclear risk research before properly finishing the various posts I was writing, so I ended up publishing this in a personal capacity and without having time to ensure it reached Rethink Priorities’ usual quality standards. 

I’m very grateful to Will Aldred for a heroic bout of editing work to ensure this and other rough drafts finally made it to publication. I’m also grateful to Avital Balwit, Damon Binder, Fin Moorhouse, Lukas Finnveden, and Spencer Becker-Kahn for feedback on an earlier draft. Mistakes are my own.

  1. ^

     I.e., crucial questions or key points of disagreement between people. See also Double-Crux.

  2. ^

    In this post, I use “EAs” as a shorthand for “members of the EA community”, though I acknowledge that some such people wouldn’t use that label for themselves.

  3. ^

    I see this as mostly just a specific case of the general claim that people will typically achieve their goals better if they have more clarity on what their goals are and what those goals imply, and if they develop theories of change and strategies with that explicitly in mind. 

  4. ^

    We didn’t try to think about whether the 2022 Russian invasion of Ukraine should cause me to shift any of the views I expressed in this post, except that I added the following point in one place: “E.g., maybe the concern is mostly about shrinking or disrupting the EA movement and its work, since that in turn presumably raises existential risk and other issues? If so, perhaps strikes against cities with large numbers of EA orgs or people would be especially problematic and hence especially important to prevent?”

    We also didn’t try to think about whether the New Nuclear Security Grantmaking Programme at Longview Philanthropy should cause me to shift any views expressed in this post, but I’d guess it wouldn’t.

  5. ^

    Here are my paraphrased notes on what one of these people said: 

    “Also, in [org’s] experience, nuclear war seems to be a topic that presents compelling engagement opportunities. And those opportunities have a value that goes beyond just nuclear war.

    • This area is quite amenable to [org] getting high-level policymaker attention
      • Nuclear war has always been something that gets top level policymaker attention
      • In contrast, for climate change, it’s too crowded [or something like that - I missed this bit]
      • [...]
      • It’s relatively easy to get to the forefront of the field for nuclear risk work
    • And then you can also leverage those connections for other purposes
    • And then you have a good space to talk about the global catastrophic risk framing in general
    • Also, having skill at understanding how nuclear security works is a useful intellectual background which is also applicable to other risk areas
    • [...] [This person] might even recommend that people who want to work on AI and international security start off by talking about the AI and nuclear intersection
      • That intersection is currently perceived as more credible”

    See also these thoughts from Seth Baum.

  6. ^

    I say “credibility” rather than “credentials” because I don’t just mean things like university degrees, but also work experience, a writing portfolio, good references, the ability to speak fluently on a given topic, etc.

  7. ^

    See Baum for a somewhat similar claim about “Slaughterbots” specifically. 

  8. ^

    It seems possible that Open Phil has relevant data from their Open Phil EA/LT Survey 2020, and/or that data on this could be gathered using approaches somewhat similar to that survey.

  9. ^

    This connects to the topic of the Value of movement growth.


MichaelA @ 2022-03-29T06:35 (+5)

Some additional additional rough notes:

MichaelA @ 2022-03-29T07:34 (+4)

If you found this post interesting, there's a good chance you should do one or more of the following things:

  1. Apply to the Cambridge Existential Risks Initiative (CERI) summer research fellowship nuclear risk cause area stream. You can apply here (should take ~2 hours) and can read more here.
  2. Apply to Longview's Nuclear Security Programme Co-Lead position. "Deadline to apply: Interested candidates should apply immediately. We will review and process applications as they come in and will respond to your application within 10 working days of receiving the fully completed first stage of your application. We will close this hiring round as soon as we successfully hire a candidate (that is, there is no fixed deadline)."
    1. See also New Nuclear Security Grantmaking Programme at Longview Philanthropy
  3. Browse 80k's job board with the nuclear security filter
MichaelA @ 2022-03-29T06:35 (+4)

Some additional rough notes that didn’t make it into the post

Denkenberger @ 2022-03-30T06:06 (+3)

The more weight we place on this goal, probably the less we’d focus on very unlikely but very extreme scenarios (since badness scales roughly linearly in fatality numbers for neartermists, whereas for longtermists I think there’s a larger gap in badness between smaller- and medium-scale and extremely-large-scale nuclear scenarios).

This seems right. Here are my attempts at neartermist analysis for nuclear risks (global and US focused).