Nuclear risk research ideas: Summary & introduction

By MichaelA🔸, Will Aldred @ 2022-04-08T11:17 (+103)

Note: I haven’t put as much time into this series of posts as I’d like, but thought it’s better to post them in their current form than never post at all. Also, I mostly wrote these posts in late 2021, and I haven’t attempted to update them in light of (a) the 2022 Russian invasion of Ukraine or (b) the New Nuclear Security Grantmaking Programme at Longview Philanthropy. I’d now encourage readers to seriously consider applying to Longview’s roles (if they’re still open when you read this) and to have Longview in mind as a key potential user/hirer/ally for nuclear risk work readers might do.[1]

Summary & the list of ideas 

This series of posts outlines possible research projects that I think would be tractable and could substantially help us work out (a) how much to prioritize nuclear risk reduction relative to other important problems and (b) what interventions to prioritize within the area of nuclear risk reduction. There are also various forms of support that may be available to someone interested in pursuing these projects, which I overview later in this introductory post.

Each project idea should make sense by itself, so you should feel free to read only this post and then the post(s) for any particular idea(s) you’re interested in.[2] For most of the project ideas, I briefly discuss how the project could be tackled, why it might be useful, what sort of person might be a good fit for it, and whether people in the effective altruism community (“EAs”[3]) should try to fund/convince people outside the EA community (“non-EAs”) to do this project.[4] I also list previous work that could be worth reading and people that could be worth talking to.[5]

What follows is a table listing the main project ideas in this series along with my rough, subjective, low-confidence bottom-line views about how important, tractable, neglected, and “outsourceable” each idea is. The tractability scores take into account the abandoned drafts or notes that I know researchers (including me) have already written on some of these topics and that could be used as a starting point by a new researcher. The neglectedness scores take into account work that’s already been done, is in progress, or seems likely to be done by default. The outsourceability scores are intended to capture how well I imagine it would work to fund/convince non-EAs to do a given project.

But this table doesn’t capture the person-specific considerations of personal fit, testing fit, or building career capital; as such, you should think for yourself about those considerations rather than blindly following this table’s ratings of each idea. I also haven’t tried to capture how long each project would take, partly because each project could take many different forms and levels of extensiveness.   

The table links to posts on each of the individual ideas.

| Project idea | Importance | Tractability | Neglectedness | Outsourceability |
|---|---|---|---|---|
| Intermediate goals for nuclear risk reduction | High | Medium | Medium/High | Medium |
| Technological developments that could increase risks from nuclear weapons | Medium/High | Medium | Medium | Medium |
| Climate, agricultural, and famine effects of nuclear conflict | High | Medium/Low | Medium | Medium/High |
| How should EAs react to funders pulling out of the nuclear risk space? | Medium | Medium/High | Medium/Low | Medium |
| Impact assessment of nuclear-risk-related orgs, programmes, movements, etc. | Medium | Medium | Medium | Medium |
| Overview of nuclear-risk-related projects and stakeholders | Medium/Low | High | Medium/Low | Medium |
| Nuclear EMPs | Medium | Medium | Medium | Medium |
| Polling or message testing related to nuclear risk reduction and relevant goals/interventions | Medium | Medium | Medium | Medium/High |
| Neartermist cost-effectiveness analysis of nuclear risk reduction | Medium/Low | Medium/High | Medium/Low | Low |
| Direct and indirect effects of nuclear fallout | Medium/Low | Medium | Medium/Low | Medium |
| How bad would the worst plausible nuclear conflict scenarios be? | Medium/Low | Medium | Medium/Low | Low |


There’s also an Appendix containing additional ideas that didn’t make it into the post, either because I ran out of time to look into or explain them properly or because I tentatively believe they’re lower priority.

Background for this set of ideas

I’m very unsure how many people and how much funding the effective altruism community should be allocating to nuclear risk reduction or related research, and I think it’s plausible we should be spending either substantially more or substantially less labor and funding on this cause than we currently are (see also Aird & Aldred, 2022a).[6] And I have a similar level of uncertainty about what “intermediate goals”[7] and interventions to prioritize - or actively avoid - within the area of nuclear risk reduction (see Aird & Aldred, 2022b). This is despite me having spent approximately half my time from late 2020 to late 2021 on research intended to answer these questions, which is - unfortunately! - enough to make me probably among the 5-20 members of the EA community with the best-informed views on those questions.

And while doing that research, I’ve collected or generated many ideas for further research projects that I think could help us make substantial progress towards better understanding how much to prioritize nuclear risk reduction and/or what to prioritize within this area. These are projects I’d have been keen to do if I had the time or the relevant skills[8] and that I’m keen for someone else to do.

Support that people who want to work on these projects may be able to get

(See also my Notes on EA-related research, writing, testing fit, learning, and the Forum. Also note that the following options are neither exhaustive nor mutually exclusive.)

Caveats

Some general points about theories of change and methods 

These project ideas vary in their theories of change and the best methods for tackling them. However, I think the following breakdown of broad, abstract, not-mutually-exclusive theories of change would often be useful to have in mind:[10]

(Of course, other breakdowns of the possible theories of change for these projects would also be possible and could be more useful.) 

Relatedly, it also seems worth bearing in mind that many of these projects could be tackled “simply” by finding, aggregating/summarizing, analyzing, and/or drawing inferences from existing work or experts’ views,[11] such as via:

Other potentially “low-effort” methods that could be used for these projects include:

This variety of possible theories of change and methods is part of why it’s hard to say how long a given project is likely to take; it depends on what approach a given person aims to take to the project.

Acknowledgements 

My work on this series of posts was supported by Rethink Priorities. However, I pivoted away from nuclear risk research before properly finishing the posts I was writing, so I ended up publishing this in a personal capacity and without having time to ensure it reached Rethink Priorities’ usual quality standards.

I’m very grateful to Will Aldred for a heroic bout of editing work to ensure my rough drafts finally made it to publication. I’m also grateful to David Denkenberger, Dewi Erwan, Jeffrey Ladish, Linch Zhang, Luisa Rodriguez, and Peter Wildeford for helpful discussion or feedback on drafts of this post or on my earlier lists of nuclear risk research project ideas. Mistakes are my own.

  1. ^

    Disclaimer: Rethink Priorities, where I work, has received some funding from Longview Philanthropy. However, I’m confident I would in any case have believed it makes sense to encourage readers of this post to consider applying to Longview’s roles and to consider Longview as a potential user/hirer/ally. This is because (1) to my knowledge, Longview is currently the effective-altruism-aligned funder with the greatest degree of focus on nuclear risk issues, and (2) one of the main conclusions I formed from my 2021 nuclear risk research was that an EA funder such as Longview should hire a grantmaker focused on nuclear risk issues, and I formed that view before learning Longview intended to do this.

  2. ^

    In reality, many of these “project ideas” are really more like broad areas, directions, or umbrellas that could contain many possible projects or specific directions within them. 

  3. ^

    I use the terms “EAs” and “non-EAs” as shorthands for convenience. I don’t mean to imply that all people in those groups would identify with those labels or that there’s a sharp distinction between those groups.
     

  4. ^
  5. ^

    Specifically, I list work that could be worth reading other than just the work already cited in the section on the project in question. Likewise, I list people that could be worth talking to other than just me or authors of the works cited in the section.

  6. ^

    By “allocated to nuclear risk reduction or related research”, I mean allocated to direct or indirect nuclear risk reduction work, work to figure out how much to prioritize this area, work to figure out what to prioritize within this area, or work aimed at supporting any of those other types of work. I’m not including work that’s superficially about nuclear risk but is primarily intended to achieve other goals, such as working on nuclear risk primarily to build skills for later doing AI governance work (on that topic, see Aird, 2022).

    For what it’s worth, my 90% subjective confidence intervals are that the EA community, at its current size, should probably be allocating somewhere between 2 and 1000 community members and somewhere between $200,000 and $100 million per year to nuclear risk reduction or related research. But those are of course very wide ranges!

    These ranges are my 90% subjective confidence intervals for what my best guess would be after (1) a total of a year of full-time research was done on some of the topics I list in this post by people who are a good fit for them, and then (2) I carefully read and think about the outputs from that. These subjective confidence intervals aren’t completely pulled out of nowhere - I spent about 90 minutes trying to work out reasonable intervals, and have thought about similar questions for the past year - but they still feel fairly made-up and unstable. 
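
    For illustration, one crude way to turn the funding range above into a full distribution is to fit a lognormal to the 90% interval. This is just a minimal sketch, assuming my uncertainty is roughly symmetric on a log scale (which is only approximately true):

    ```python
    import numpy as np
    from scipy import stats

    # Treat the $200k-$100M/year figure as a 90% subjective confidence interval
    # and (crudely) assume the underlying uncertainty is lognormal.
    lo, hi = 200_000, 100_000_000
    z = stats.norm.ppf(0.95)  # ~1.645, the z-score at the 95th percentile

    mu = (np.log(lo) + np.log(hi)) / 2           # mean of log(dollars per year)
    sigma = (np.log(hi) - np.log(lo)) / (2 * z)  # spread of log(dollars per year)

    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    print(f"median: ${dist.median():,.0f}")  # geometric midpoint, roughly $4.5M/year
    print(f"mean:   ${dist.mean():,.0f}")    # pulled well above the median by the right tail
    samples = dist.rvs(size=10_000, random_state=0)  # e.g. for use in a larger model
    ```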

    For information on the current size of the effective altruism community, see Todd (2021).

    Here I’ve focused only on the time and money of EA community members, since those two resources seem especially important and relatively easy to think about. But one could also consider other resources, such as political capital.

  7. ^

    By an intermediate goal, I mean a goal that (1) is more specific and directly actionable than a goal like “reduce nuclear risk”, (2) is of interest because advancing it might be one way to advance a higher-level goal like that, but (3) is less specific and directly actionable than a particular intervention (e.g., “advocate for the US and Russia to renew the INF Treaty”).

  8. ^

    The single biggest reason why I won’t end up doing these projects myself is that Rethink Priorities (where I work) was offered a large grant to build a large AI governance team, and my manager and I agreed that it made sense for me to pivot to helping manage that team. 

    Additional reasons why I won’t end up doing these nuclear risk projects myself include that I’m just one person and that I lack some relevant bodies of knowledge, skills, and connections (e.g., in national security, international relations, or climate modeling).

  9. ^

    For example, some projects may have already been sufficiently addressed in existing work that I’m not aware of, or their answers may be sufficiently “obvious” to relevant decision-makers but just not to me. 

  10. ^

    See also Aird (2021).

  11. ^
  12. ^

abukeki @ 2022-04-13T02:23 (+14)

Hey Michael, sorry I didn't get around to commenting on this before you published haha. Long thought dump below:

I'm not sure if they count as "technological developments", but 2 of the largest things I see contributing to nuclear risk are development of ballistic missile defence (BMD) and proliferation of tactical nuclear weapons (TNWs).

The dangers from BMD are manifold. One is being the cause of a conventional conflict. E.g., as the US continues to develop its maritime ICBM intercept capability, it'll pose a major threat to foreign arsenals. If a significant number of USN ships surround China, it may feel the need to sink them preemptively to ensure its ICBMs can get through. Russia has threatened similar against the land-based counterpart.[1] Some even want to revive Brilliant Pebbles over the New Hysterical Threat in the east. Given that it (and SDI in general) were the most destabilizing things in history (you think the USSR would've just sat there and watched as it lost its strategic deterrence?), that's not great.
Another risk is making a first strike by either party likelier. As said, a nation will behave unpredictably if it sees its nuclear capability slipping away (now or never?). And something often pointed out is even if the BMD system doesn't work, as long as leaders believe it might, they may be emboldened to strike (hoping their BMD can mop up the rest).
Lastly, it'll spur nations to develop ever more destabilizing offensive systems to maintain their deterrence.[2] Crazy nuclear-powered cruise missiles like Skyfall are just the start; I have some (very infohazardous) ideas for better nuclear delivery that others may have thought of too. Also, one way to counter BMD is simply to increase one's arsenal, as China's doing, since it's generally cheaper to make delivery systems than interception systems. Risks from larger stockpiles are obvious.

On TNWs: I think the odds of nuclear war would be far lower if they didn't exist. The fact that they're always present as an option during any conventional conflict makes the odds of crossing the nuclear threshold so much higher than if only strategic weapons existed, it's hard to overstate. For instance, the US is developing (additional) undersea TNW options clearly intended to be used against China over Taiwan.[3] Biden even called it a "bad idea" while campaigning but is now forging ahead with it. In fact one of my likeliest concrete NW scenarios is one of US first use after its in-theatre bases and fleet suffer decimation at Chinese hands in such a war. Weapons on the Chinese side exacerbate this, like the DF-26 IRBM, but those are purely in response to US TNWs.[4] You can't just expect them to have no response: indeed, lack of a symmetrical response ability is likely more destabilizing. Under China's previous city-buster-only force composition, the US would be both likelier to employ TNWs without credible threat of in-kind retaliation, and if China did retaliate it'd escalate straight to countervalue.

Agree with the goal of reducing silos: they are highly destabilizing, as I've written before. Stationary targets are vulnerable, even if you try to mitigate this with Launch on Warning (LoW). Stealthy cruise missiles are one such threat. Another is an obscure idea called "x-ray pindown" that could suppress them from firing by continuously detonating warheads overhead; it should be possible to combine it with other counterforce weapons[5] and destroy the silos, thus defeating LoW. There's an exceptional 11-pg analysis from SciAm in 1984 diving deep into the problems with LoW including pindown (I have the full copy for anyone interested). If you could reduce or eliminate silos and have the nuclear powers' force compositions switched entirely to mobile & sub, that would greatly improve strategic stability & lower nuclear risk, as there would be almost no targets vulnerable to a first strike[6]. Downside is if one still broke out it may go countervalue more easily due to there being fewer rungs on the escalation ladder, from the paucity of meaningful counterforce targets.[7] But this consideration probably isn't large due to being outweighed by the elimination of the overwhelming majority of accidental nuclear risk (no more LoW), plus they can still always target military bases, there'll never be a shortage of those. Also, you mentioned you're "unsure where Chinese mobile warheads are", well the answer is they're in PLARF bases in the middle of nowhere, which is great from the smoke-producing perspective.[8]

On the point about reducing entanglement: Much has been made about intermingling of conventional/nuclear forces, like China's "hot-swappable" DF-26. But I think it's not as huge a problem as claimed. These are not intercontinental-range forces, they would be flying in-theatre, not headed towards the enemy homeland, so e.g. the US could afford to wait to see whether it was conventional before retaliating since it wouldn't be threatening their own nuclear forces, in contrast to an inbound ICBM where you'd have minutes to decide whether to launch on warning or risk losing your own. Similarly, even if the US targeted China's IRBMs (unlikely anyway because they're mobile), China probably wouldn't feel that its strategic deterrent were being degraded (and thus much use-it-or-lose-it pressure).

On hypersonics: I think this excellent chart is the most concise thing I've seen to dispel the hype around them. They don't really contribute to nuclear risk imo[9], in fact they do the opposite by preserving deterrence. The associated report is a great read.

ASATs are indeed dangerous. So much so that if we saw mass satellite warfare, I'd expect a high chance of a nuclear exchange following shortly. Once your space based early warning is blinded, you'd be extremely vulnerable & face considerable pressure to use (indeed countries may interpret it as a prelude to first strike & launch). This is only compounded as HGVs continue to proliferate & make up a larger portion of deployed arsenals because of their lower flight path, enemy land-based forces will be sitting ducks without early warning satellites.

This paper is a good resource on your question of whether Poseidon is salted. Remember that the neutrons used to activate cobalt would've otherwise been hitting a fissile/fissionable uranium jacket, so they'd be incurring a massive yield handicap.

A lot of nuclear risk reduction just strikes me as highly intractable though. It just seems that if a nuclear war does happen, it'll be due to forces and military pressures much too strong and preordained for us to influence one way or another. E.g., these are similarly implausible:
-Making the big 3 give up their silos. Lots have tried and failed in the US. Even unlikelier for China: it's an attractive basing option as cheap install capacity for a country looking to expand arsenal + great for soaking up enemy warheads.
-For the US (the main developer of BMD) to give up its obsession with neutralizing the nuclear threat its enemies pose to it.[10]
-Eliminating TNWs (that have been around for 70 years): Want the US to? Good luck with that when Russia has thousands of them (and relies on them due to conventional military weakness).
-Prevent countries from developing ASAT: countries are even willing to incur costs like worldwide condemnation & endangering their allies to strengthen this key capability. No great power will stop developing in the space domain.
Even something as simple as an NFU pledge has proven impossible, and it's even more of a political nonstarter after recent events. Overall, I think it's a great decision for EAs to focus more on AI. There's already a huge community of capable folks working on risk reduction from every angle.

Some pathways to x-risk from nukes also seem quite unrealistic. E.g. Alexey Turchin voiced to me concerns about gigaton-scale salted bombs. There's no way a single salted bomb large enough to be x-risky would be deliverable, so the country would have to detonate it on their own territory ("backyard delivery"). But it's very unlikely (and unprecedented) for countries to devise strategies explicitly harming, let alone sterilizing, their own nations. The optimal cost-effective strategy that maximizes deterrence while minimizing self-harm, which the great powers have stabilized on, is simply many warheads that maximize blast radius (i.e. the typical diversified MIRV'd ICBM forces you see today), which is why you haven't seen any doomsday devices created.

Seeing as the main pathway left is nuclear winter, I guess that should be the main focus of research despite the (imo) low probability. One potential research project I'd like to see is assessing the independent components of the hypothesis, i.e. doing a complete analysis and incorporating all available evidence into each one (e.g. generating an updated probability for the lofting assumption in light of the observations of Gulf oil well plumes, wildfires, etc.) to get a probability for each independent piece, then multiplying them together to get an overall conjunctive likelihood (rough sketch below). ALLFED's Denkenberger said he got a 20% chance of agricultural collapse using a Monte Carlo model; I haven't gotten to read the paper yet, but I'm significantly lower, maybe ~5%?

Another thing to potentially investigate longer-term is detection of SSBNs (as that's probably the largest conceivable strategic stability killer, other than an infallible BMD system), but again, massive public & private (MIC) R&D will always be going into improving & ensuring the survivability of their undersea deterrents, so I don't see how we'd help.
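
Here's a toy version of the conjunctive calculation I have in mind; the components and intervals below are made-up placeholders to illustrate the method, not my actual estimates:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Made-up placeholder 90% intervals for the probability of each (assumed
# independent) component of the nuclear winter hypothesis (NOT real estimates).
components = {
    "enough cities burn in a large-scale exchange":      (0.3, 0.9),
    "firestorms produce enough soot":                     (0.2, 0.8),
    "soot is lofted into the stratosphere and persists":  (0.3, 0.9),
    "cooling is severe enough for agricultural collapse": (0.2, 0.7),
}

# Crudely model uncertainty in each component's probability as uniform over its
# interval, then multiply the (independent) components within each draw.
draws = np.ones(n)
for low, high in components.values():
    draws *= rng.uniform(low, high, size=n)

print(f"mean conjunctive probability: {draws.mean():.3f}")
print(f"90% interval: {np.percentile(draws, [5, 95]).round(3)}")
```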

Finally, I've been thinking about your 3rd point in the 9 mistakes doc. Maybe there's a tendency for wars between nuclear powers to draw in the rest: after suffering enough damage, nations may decide it's best to bring others down with them. E.g., perhaps China, after being devastated in an exchange with the US in 2028 where each expended 1000 warheads, decides to glass regional enemy India with its remaining 500 to ensure it doesn't emerge unscathed as a massive future threat, calculating that the cost of the additional Indian retaliation isn't meaningful in light of the destruction already suffered.[11] I can easily see the US doing this to China after a full exchange with Russia (that China wasn't involved in), to ensure that even if it's gonna lose its hegemony, the hated ChiComs won't take its place. So who knows, scenarios depicting an exchange confined to only 2 powers may not be so realistic after all.[12][13] The only scenario in which this "cascading nuclear war" fear of mine doesn't seem at all plausible is if a large nuclear power is attacked by a small one: e.g. if the US is only hit by a couple dozen NK nukes, I can't see them knowingly choosing to nuke China too and forfeit the rest of their country.


  1. "Moscow has threatened to attack Aegis Ashore installations preemptively in a crisis or conflict" ↩︎

  2. To expect to ever actually win the BMD arms race for good is a faint hope due to the offence-defence imbalance: it's inherently easier to make something than to make something that stops that thing. ↩︎

  3. "This is no fantasy: the U.S. military is already developing nuclear-tipped, submarine-launched cruise missiles that could be used for such purposes." Note they already have W76-2 low-yield warheads which can also be used in this role; technically they could even use higher-yield strategic weapons on a Chinese invasion force if they wanted. ↩︎

  4. "By late 2018, PRC concerns began to emerge that the United States would use low-yield weapons against a Taiwan invasion fleet, with related commentary in official media calling for proportionate response capabilities" (which have since entered service). And it's not like those fears are remotely unreasonable. ↩︎

  5. cruise missiles, SLBMs fired from another angle or slipping in between the high-altitude detonations, etc. ↩︎

  6. Excepting air bases, but you can mitigate this with 24/7 air patrols, as the US did for quite a long time initially in the cold war ↩︎

  7. Kind of like regressing to the original countervalue-only doctrine in the early cold war when counterforce wasn't yet an option due to technical limitations (missile accuracy etc.) ↩︎

  8. Those TELs (Transporter Erector Launchers) would be sent out and dispersed throughout the hinterlands in times of high tension to protect against first strikes. ↩︎

  9. at least not on their own, see next paragraph ↩︎

  10. Understandable of course; making peace with the fact that hated enemies like China can annihilate you at any time and that the only way to prevent it is deterrence is a tough psychological ask for any nation, much less one as exceptionalist as the US; they'll always struggle against this and have a natural desire to pursue better self-protections. ↩︎

  11. After which India does the same to Pakistan by the same logic, and so on ↩︎

  12. A similar idea is a US nuclear attack on either Russia or China causing the other to also launch on warning (because e.g. they don't know the BMs aren't aimed at them), but this comment is long enough already. ↩︎

  13. Further evidence: "...one of the more egregious features of nuclear target planning, can’t remember pre-SIOP or in early editions, was that targets in China would be attacked regardless of whether Beijing was complicit in whatever Moscow was doing that triggered the attack." I imagine other countries' planning is similar. ↩︎

MichaelA @ 2022-04-08T11:28 (+14)

I'd be interested in hearing whether people think it'd be worth posting each individual research doc - the ones linked to from the table - as its own top-level post, vs just relying on this one post linking to them and leaving the docs as docs. 

So I'd be interested in people's views on that. (I guess you could upvote this comment to express that you're in favour of each research idea doc being made into a top-level post, or you could DM me.)

Vasco Grilo @ 2022-11-17T13:55 (+3)

Hi Michael,

Thanks for taking the time to write this!

To what extent do you think preparing for nuclear winter could increase the risk of nuclear war (in likelihood or severity), and therefore be an information hazard? I would be happy if you could point me to any research that looks into this. As of now:

Michael_Wiebe @ 2022-06-06T23:59 (+3)

I’m very unsure how many people and how much funding the effective altruism community should be allocating to nuclear risk reduction or related research, and I think it’s plausible we should be spending either substantially more or substantially less labor and funding on this cause than we currently are (see also Aird & Aldred, 2022a).[6] And I have a similar level of uncertainty about what “intermediate goals”[7] and interventions to prioritize - or actively avoid - within the area of nuclear risk reduction (see Aird & Aldred, 2022b). This is despite me having spent approximately half my time from late 2020 to late 2021 on research intended to answer these questions, which is - unfortunately! - enough to make me probably among the 5-20 members of the EA community with the best-informed views on those questions. [bold added]

This is pretty surprising to me. Do you have a sense of how much uncertainty you could have resolved if you spent another half-year working on this?