Research project idea: Technological developments that could increase risks from nuclear weapons
By MichaelA🔸 @ 2023-04-15T14:28
This post is part of a series of rough posts on nuclear risk research ideas. I strongly recommend that, before you read this post, you read the series’ summary & introduction post for context, caveats, and to see the list of other ideas. One caveat that’s especially worth flagging here is that I drafted this in late 2021 and haven’t updated it much since. I’m grateful to Will Aldred for help with this series.
One reason I'm publishing this now is to serve as one menu of research project ideas for upcoming summer research fellowships. I'm publishing a few of the posts to the frontpage to raise awareness that this series exists, but I'll keep the rest as personal blogposts to avoid spamming the frontpage.
Some tentative bottom-line views about this project idea
| Importance | Tractability | Neglectedness | Outsourceability |
| --- | --- | --- | --- |
| Medium/High | Medium | Medium | Medium |
What is this idea? How could it be tackled?
Will Aldred and I wrote a shallow exploration of some technological developments that might occur and might increase risks from nuclear weapons - especially existential risk or other risks to the long-term future. For each potential development, we provided some very quick, rough guesses about how much and in what ways the development would affect the odds and consequences of nuclear conflict (“Importance”), the likelihood of the development in the coming decade or decades (“Likelihood/Closeness”), and how much and in what ways thoughtful altruistic actors could influence whether and how the technology is developed and used (“Steerability”).
However, I pivoted away from nuclear risk research before we had time to properly research and draft the post. What we finished was just a very incomplete starting point that may contain errors.
So one version of this project would be to take our not-properly-finished report and produce a "finished", new and improved version of it. See the Summary and Introduction to get a clearer sense of what that might look like and why it might matter. If you're interested in doing that version of this project, please reach out to me and we can discuss whether and how to proceed.
A more ambitious version of this project could go beyond what we were drafting in various ways, including:
1. Providing info on additional potential technological developments
2. Finding better or complementary ways to organize the potential developments
3. Providing more information about what some of the developments are, what positive and negative impacts they might have, how important they are, how likely they are to occur at various points in time, and how steerable they are
   - In many cases, it might be best to break this down more granularly than our post did. For example, considering those variables separately for each of several specific ways "detection of nuclear warhead platforms, launchers, and/or delivery vehicles" could improve, or considering the effects and likelihood separately for different potential levels of each of those developments.
4. Providing much more information about what could and should be done to influence whether and how these potential developments occur and are used, and what other implications these potential developments might have for what risks and interventions to prioritize in the nuclear risk space
   - This seems very important, but our post hardly discusses it since we ran out of time to look into it properly.
   - For more general discussion of possible goals and interventions related to nuclear risk, see 8 possible high-level goals for work on nuclear risk and Shallow review of approaches to reducing risks from nuclear weapons.
5. Providing better (as opposed to "more") information and "bottom-line beliefs" on those matters than my draft does (i.e., info/beliefs that are more accurate, more focused on the most important points, and less misleading)
   - For example, someone could do or organize red-teaming of the post as a whole or its more important claims.
6. Investigating ways in which the potential technological developments might also decrease nuclear risks, whether their net effect might be to decrease nuclear risk (even if they could also have important risk-increasing effects), or other technological developments that could decrease nuclear risks
   - This would change the scope of the project, rather than just filling in blanks or doing a better job within the existing scope. But this altered scope could still naturally be pursued alongside the work discussed elsewhere in this post.
7. Investigating what might happen with respect to proliferation of existing technologies or changes in how those technologies are deployed, what effects that might have, and what can and should be done to steer that
   - As with the above item, this would change the scope of the project.
8. Changing the post or writing one or more new posts in such a way that it's easier and more likely for decision-makers to (correctly) use the post when making relevant decisions (e.g., making it easier for a decision-maker to find the info that's most relevant to them, or making other versions of the post that are tailored to particular target audiences)
   - For example, cutting out potential developments that seem less worth paying attention to (e.g., because they would have small impacts or they're very unlikely to occur in the next few decades), or providing more or less detail on each potential development.
9. Disseminating insights from the post to relevant decision-makers to increase the chance they act on them
10. Discovering and disseminating what various relevant actors/groups believe about these goals, to create common knowledge and aid in coordination[1]
A given project could do anywhere from just one to all ten of those things (though it would probably be best to limit one's ambitions to just, say, 1-5 of them, at least for an initial project).
I tentatively expect items 1, 4, and 6 would be the most valuable. But I also think the ideal project might include at least some work on many of these items, since they might be best pursued in tandem.
And each of those items could be done with a focus on just one potential technological development, a handful of them, or a large number of them. For example, a researcher could do a deep dive into one of the developments to gather much more info, correct errors or misleading implications, write up their findings in a way tailored to whichever decision-makers are most relevant to the goal (e.g., EA community members making career decisions vs EA funders vs non-EA nuclear risk advocates vs US policymakers), and reach out to those decision-makers to discuss their findings. Or a researcher could spend 2-10 hours per development expanding and improving the info on most or all of the potential developments.
Specific actions that could be taken to tackle this project include:
- Reading more existing research, discussions, or opinions about the potential developments
  - On many of these topics, a lot has already been written, in some cases stretching back many decades, and Will and I barely scratched the surface
- Quantitatively estimating the likelihood or likely consequences of progress towards one or more potential developments
  - This could be done via making or soliciting forecasts, Fermi estimates, or more careful models (a rough sketch of one such estimate appears after this list)
- Expert elicitation on the above points, via interviews, surveys, or convening workshops
  - This could include very open-ended questions like "What potential technological developments do you think people should be thinking about, preparing for, or steering?", somewhat open-ended questions like "Do you have any thoughts on the likelihood or likely consequences of [specific potential technological development] or which interventions would be best for steering the development and deployment of that?", and/or rating scale questions
  - I've been involved in designing a similar survey focused on a different cause area and would be happy to provide advice, templates, etc.
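To illustrate the "Fermi estimates, or more careful models" option above, here is a minimal sketch of a Monte Carlo-style Fermi estimate in Python. Every number, range, and variable name in it is a placeholder assumption invented purely for illustration; it is not an estimate from our draft or any other source, and a real version would replace these ranges with carefully elicited distributions (e.g., from forecasters or subject-matter experts).

```python
import random

# Purely illustrative placeholder assumptions -- NOT real estimates.
# Each trial samples:
#   p_dev:  probability the hypothetical development is deployed this decade
#   p_base: baseline probability of large-scale nuclear conflict this decade
#   mult:   multiplier on that baseline probability if the development is deployed

def one_trial() -> float:
    """Return the sampled absolute increase in conflict probability this decade."""
    p_dev = random.uniform(0.05, 0.40)    # placeholder: chance of deployment
    p_base = random.uniform(0.005, 0.05)  # placeholder: baseline conflict risk
    mult = random.uniform(1.0, 3.0)       # placeholder: risk multiplier if deployed
    return p_dev * p_base * (mult - 1.0)  # expected added risk from the development

def fermi_estimate(n_trials: int = 100_000, seed: int = 0) -> None:
    """Run many trials and summarize the resulting distribution."""
    random.seed(seed)
    samples = sorted(one_trial() for _ in range(n_trials))
    mean = sum(samples) / n_trials
    p05, p50, p95 = (samples[int(q * n_trials)] for q in (0.05, 0.50, 0.95))
    print(f"Added decadal conflict risk: mean={mean:.4f}, "
          f"5th={p05:.4f}, median={p50:.4f}, 95th={p95:.4f}")

if __name__ == "__main__":
    fermi_estimate()
```

Even a toy model like this can make disagreements concrete: reviewers can argue about which placeholder range is wrong, rather than about a single point estimate.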
A note on information hazards
This project would likely bump up against some information hazards (especially attention hazards), as is the case with much other discussion of technological developments that might occur and might increase risks. So I would encourage people to pursue this project only if, before publishing or widely sharing their outputs, they would explicitly ask multiple non-junior members of the existential risk research community:
- whether those people think these outputs should be published/shared, and
- what edits (if any) should be made in light of information hazards.
That said, I’d guess that at least some substantial version of the outputs someone generates by doing this project could indeed ultimately be published/shared.
Feel free to reach out to me if you want me to review things, suggest people to review things, or discuss principles for handling information hazards.
Why might this research be useful?
This is one of many questions relevant to how much to prioritize nuclear risk relative to other issues, what risks and interventions to prioritize within the nuclear risk area, and how that should change in future.
I also think it’s an especially important question because I think (1) with current technologies and arsenals, even very large-scale nuclear conflict seems very unlikely to fairly directly cause existential catastrophe, but (2) various technologies could plausibly change that. As such, I expect that a significant fraction of the contribution of nuclear weapons to total existential risk comes from the chance of major risk-increasing technological developments, and I expect that many of the highest priority interventions for reducing nuclear risk may focus on steering particular technological developments. (That argument skips some steps and hence isn’t watertight; I can attempt to explain my more detailed views if people are interested. See also Shulman (2020).)
What sort of person might be a good fit for this?
This project idea is very broad and could be taken in many directions, so I think many people could work out and execute some version of it that's well-aligned with their skills and interests. The project could also range from very deep and extensive research to taking relatively "simple" and "obvious" actions to improve the post I already wrote, so I expect that for any of a wide range of skill and seniority levels there'd be some version of this project that would fit well.
I think one important skill or trait will be a willingness and ability to be mindful of information hazards and the unilateralist’s curse.
It might also be helpful to have an engineering background or otherwise be a technically minded person.
Some relevant previous work
Should we try to convince/fund people outside the EA community to do this work?
I think deeper research or distillation of research on many of the specific technological developments would suit the skills and interests of many people outside the EA community (henceforth "non-EAs"), and is in fact similar to what many non-EAs are already doing. It might be worth trying to convince/fund non-EAs to do that work with a focus on the potential developments that seem most promising or where uncertainty is largest.
But I think it would be important to have an EA community member vet and extend the outputs of such work, such as by considering additional possible downside risks or considering in more detail whether and how the potential technological development may reduce the odds or severity of especially worrisome nuclear conflict scenarios. And I think it would be important to have an EA synthesize these various outputs into bottom-line views on what this suggests about how much to prioritize nuclear risk reduction and what to prioritize within that area. This could all happen after and separately from the non-EA research, or via an EA being part of the research team working on these outputs, or via EAs reviewing and giving feedback on the work.
It also seems very feasible to contract non-EAs to handle various tasks related to convening a workshop or designing, administering, and analyzing results from a survey, either (a) after an EA provides some of the content for these things and a clear explanation of the intended outcomes or (b) with the EA staying involved throughout this process.
[1] Information about what various actors believe can prevent issues like some actors charging ahead due to not being aware that other actors see a given goal as net-negative, or conversely holding back from pursuing a given goal due to unfounded worries that other actors might see that goal as net-negative.