“Governability-By-Design”: Ponderings on Why We Haven’t Died From Nuclear Catastrophe (And What We Can Learn From This)
By C.K. @ 2025-08-19T18:20 (+4)
This is a linkpost to https://proteinstoparadigms.substack.com/p/governability-by-design-ponderings
Quick(-ish) Takes #1
A Happy, Nuclear Accident?
Naturally occurring uranium consists primarily of three isotopes: uranium-238 (~99.3%), uranium-235 (~0.7%), and uranium-234 (~0.005%). Of these, uranium-235 is the only isotope that readily undergoes fission with thermal (slow) neutrons under typical reactor conditions. To use uranium as nuclear fuel, therefore, it must first be enriched, and different applications demand different enrichment levels. Low-enriched uranium, containing about 3–5% U-235, is typically used in commercial power reactors. Some research reactors use enrichments approaching 20% U-235; uranium enriched to 20% or more is classified as highly enriched uranium (HEU). Weapons-grade uranium usually refers to enrichment levels above ~90% U-235, the level typically used in nuclear weapons.
Enrichment is an arduous and expensive process. The most common method uses gas centrifuges, in which uranium hexafluoride gas is spun at very high speeds to separate the slightly lighter U-235 molecules from the heavier U-238. Thousands of centrifuges must be linked in cascades to achieve useful enrichment levels. Yet even though enrichment remains taxing, U-235 is still a preferred fissile material for nuclear weapons, for several reasons. Plutonium-239, the main alternative, is far harder to handle because of the accompanying plutonium-240, whose spontaneous-fission neutrons risk starting the chain reaction prematurely (a “fizzle”). Other candidates, such as U-233 bred from thorium-232, are fissile but are always accompanied by U-232, whose strong gamma emissions make weaponisation hazardous and conspicuous. Exotic options like neptunium or americium isotopes are theoretically fissile but prohibitively difficult to produce in quantity. U-235 remains naturally accessible, reliably fissile, and usable in comparatively simple weapon designs.
A seemingly inadvertent benefit of U-235, however, is that the clear separation between the enrichment levels needed for civilian use (3–5%), research (up to 20%), and weapons (above ~90%) is also the foundation of nuclear governance and verification. The International Atomic Energy Agency (IAEA), for instance, designates 20% enrichment as the threshold at which uranium becomes a direct-use material, making it subject to the highest safeguards. The 2015 Joint Comprehensive Plan of Action with Iran was built on this principle, restricting enrichment to 3.67% and limiting stockpiles to extend the time required to reach weapons-grade levels. Nuclear verification itself involves looking for the industrial capability to create weapons-grade uranium, or for stockpiles of highly enriched uranium. This matters especially because uranium also has beneficial uses, above all in nuclear energy.
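The significance of the 20% safeguards threshold can be illustrated with the standard separative work unit (SWU) formula. A rough sketch (the feed and tails assays of 0.711% and 0.3% are my own illustrative choices, though typical of real plants):

```python
import math

def value(x):
    # Value function of the standard separative work unit (SWU) formula
    return (2 * x - 1) * math.log(x / (1 - x))

def swu(product_kg, xp, xf=0.00711, xw=0.003):
    """Separative work (kg SWU) to enrich feed at assay xf to product
    at assay xp, leaving tails at assay xw (mass balance on U-235)."""
    feed = product_kg * (xp - xw) / (xf - xw)
    waste = feed - product_kg
    return product_kg * value(xp) + waste * value(xw) - feed * value(xf)

# Work to produce 1 kg of weapons-grade (90%) uranium from natural feed
total = swu(1.0, 0.90)

# Mass of 20%-enriched material needed as feed for that 1 kg of 90% product
feed_20 = 1.0 * (0.90 - 0.003) / (0.20 - 0.003)

# Separative work already expended once that 20% stock exists
to_20 = swu(feed_20, 0.20)

print(f"natural -> 90%: {total:.0f} SWU")               # ~193 SWU
print(f"natural -> 20% step: {to_20:.0f} SWU")          # ~174 SWU
print(f"share of work done at 20%: {to_20 / total:.0%}")  # ~90%
```

The calculation shows why 20% is treated as a red line: roughly 90% of the separative work needed for weapons-grade material has already been done by the time uranium reaches 20% enrichment, so stockpiles at that level are monitored as stringently as direct-use material.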
Insofar as uranium is a dual-use material, this enrichment requirement gives the two uses very obvious markers. Given that U-235 was an obvious choice for weapons-grade material, it is unlikely that those on the Manhattan Project foresaw the benefits this property of uranium would have for the global governance of nuclear weapons. Yet it is a quintessential example of what I’d call “governability-by-design”: in the development of nuclear weapons, the differentiability of the uranium required for energy and for weapons was a central property that facilitated governance.
Steering Technology Towards Governability
For me, a fascinating question is whether we could have steered the Manhattan Project to still rely on U-235 if we instead lived in a world where U-235 was the most abundant isotope of uranium, or where reactor designs favoured plutonium-239 or thorium-232. In such counterfactuals, the boundary between civilian and military applications might have been far blurrier. Verification would likely have been harder, or at the very least would have required much more detailed criteria and intrusive inspections. Governance regimes could not have used differential enrichment requirements as a core organising principle, and the very possibility of monitoring fissile material would have been much weaker. The proliferation of weapons-grade nuclear fuel would certainly have lowered the costs for states becoming nuclear powers, and the spread of nuclear powers may have increased the probability of nuclear war by heightening Cold War tensions or simply raising the already frighteningly high likelihood of a nuclear accident.
It seems like a happy accident that we ended up in a world where the primary fuel for nuclear weapons happens to be well-suited to the governance of dual-use technologies because of its differentiability. I have no idea whether this significantly prevented nuclear war, whether these considerations were in fact foreseen, or whether there is very little contingency here at all. It’s plausible that much of the effect is due to the efforts of the nuclear governance regime, which might have figured out a solution for governing nuclear weapons regardless of the properties of the fuel in question. However, the ability to shape the development of nuclear weapons towards safety is precisely the type of lever that defines much work in the governance of AI, biotechnologies, and other emerging technologies today.
I’ve been thinking about differential technological development, risk-sensitive innovation, directed technological development, and other related concepts that share the same goal: steering technological development towards safer forms. Often this looks like accelerating defensive technologies, such as bolstering early-detection capabilities to deter and mitigate risks from dual-use biotechnologies that could be used to produce bioweapons. It can also look like “safety-by-design”, such as keeping LLM reasoning steps transparent or using RLHF to finetune models towards safety.
However, I think governability-by-design might be an especially neglected modality. In some sense, we see the same lucky patterns in the development of AI. Scaling laws mean that frontier models, at least so far, are few in number and centralised among a few tech giants, which makes the diffusion of governance norms much quicker. Natural language, as the modality through which explosive progress in AI has happened, has enabled much more transparent governance, e.g., by allowing us to observe the reasoning steps of LLMs. On the other hand, I think much of why biotechnologies have been so hard to govern comes down to key properties of how biotechnological development happens. To the extent that future developments and new emerging technologies could significantly shape how easy they are to govern, I think the critical question is not merely whether technologies can be steered towards safety, but whether technological development can be steered towards governability.
Many examples resemble the role of U-235 in nuclear governance. A genuine example of governability-by-design is public-key cryptography: its one-way (trapdoor) functions make signatures easy to verify with a public key but practically impossible to forge without the private one, enabling the widespread use of digital certificates. In telecommunications, the fact that radio waves occupy finite frequency bands creates scarcity and interference risks, which force licensing regimes and international coordination. It seems plausible that variance in governance outcomes for AI and biotechnologies also comes down to governability. Nucleic acid assemblers have been subject to export-control regimes, unlike some traditional oligonucleotide synthesis platforms, in part because it is easier to verify and regulate integrated, singular artefacts than platforms that involve manual steps and may consist of several multipurpose tools.
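The asymmetry that makes public-key cryptography governable can be seen in a toy RSA signature, a minimal sketch using textbook-sized parameters (far too small to be secure; the specific numbers are illustrative, not from any real system):

```python
# Toy RSA signature with textbook-sized numbers (insecure; illustration only).
p, q = 61, 53                       # private: two secret primes
n = p * q                           # public modulus (3233)
e = 17                              # public verification exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private signing exponent (needs p and q)

message = 42
signature = pow(message, d, n)      # signing requires the private key d

# Verifying is easy: one modular exponentiation with the public pair (n, e)
assert pow(signature, e, n) == message

# Forging a signature for a new message would require recovering d, which
# means factoring n -- trivial here, but infeasible for 2048-bit moduli.
```

The verify-easy/forge-hard asymmetry is what lets third parties (browsers, certificate authorities, regulators) check claims without holding any secrets, which is exactly the property a governance regime wants.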
Limits and Leverage
One obvious problem with the idea of “governability-by-design” is that, prima facie, the capability to steer technological development towards governability implies the capability to steer technological development towards safety. If we had the power to convince the Manhattan Project to use U-235 in a thorium-232 world, why wouldn’t we just convince them to stop? Or build weaker nuclear weapons? Or accelerate nuclear defense capabilities first?
However, it’s not obvious to me that steering technologies towards safety is as easy as steering them towards governability. For one, governability has the benefit that the developers of emerging technologies are then poised to become part of the governance regime. Yes, this risks regulatory capture. But it also mitigates risks from misaligned incentives that lead to the usual coordination problems (e.g., the unilateralist’s curse, race dynamics, etc.). Governability-by-design also seems, on average, more leveraged: an intervention that focuses on safety-by-design or differential defensive acceleration alone yields a marginal safety gain, whereas ensuring that a technology develops in a governable form opens the floodgates for several regulatory, verification-centred, or safety-by-design interventions.
A bigger problem, however, is that it might be really hard. It is tough to forecast how a technology’s properties will shape its governance, for several reasons. There are all the usual difficulties of forecasting, e.g., a lack of reference classes from which to develop good base rates. There is also a tricky conceptual problem in ascertaining which technological properties significantly shape governance. The nuclear case is clear ex post, once we have observed how governance emerged; identifying the most sensible chokepoint ex ante is non-trivial, especially since much of what is meant by governability necessarily turns on sociopolitical factors. For example, enrichment facilities are particularly well-suited to geospatial intelligence (GEOINT) because of their need for thousands of centrifuges and their distinctively high energy demands. This makes them far better suited to satellite-based verification than facilities manufacturing warhead components, which are much less differentiable from ordinary manufacturing sites. But how could the Manhattan Project have predicted the launch of the first satellite in 1957? Or that the Cold War would lead to an explosion of reconnaissance satellites? Can we predict which adjacent technological developments will make particular modalities of AI development especially well-suited to governance down the line?
However, the fact that it’s hard does not make it an unworthy endeavour. Much of the challenge lies in treating governance regimes, rather than technological capabilities alone, as a mediating variable for technological outcomes. This injects all the usual mess of predicting, classifying, and steering social behaviour. Yet the history of technological development suggests this could be a powerful lever for governing emerging technologies moving forward. If the role of U-235 in facilitating nuclear governance was indeed a happy accident, then in another world, the Manhattan Project might have resulted in doom. In this one, it left us not only with the bomb, but with a template for how emerging technologies might be governed.
SummaryBot @ 2025-08-20T14:51 (+2)
Executive summary: This exploratory post argues that a “happy accident” in the physics of uranium-235 made nuclear weapons unusually governable, and suggests we should consider how to design emerging technologies—like AI and biotech—for similar “governability-by-design,” even though forecasting which features enable governance is difficult and uncertain.
Key points:
- Uranium-235’s enrichment requirements created a natural distinction between civilian and military use, enabling global nuclear governance through safeguards and verification—an outcome likely unforeseen during the Manhattan Project.
- Counterfactuals (e.g., if plutonium or thorium were the main fuel) show how much harder nuclear governance could have been, implying proliferation and accident risks might have been far higher.
- “Governability-by-design” differs from “safety-by-design”: instead of directly reducing risks, it makes oversight and regulation easier, often yielding more leverage by enabling multiple safety interventions downstream.
- Current analogies include: AI scaling laws concentrating power in a few firms (easier to regulate), natural language models allowing transparency, and biotech platforms differing in how easily they can be monitored or restricted.
- Other historical examples include public-key cryptography (easy verification) and radio frequency licensing (scarcity forcing coordination).
- Forecasting governability features ex ante is very challenging due to technological and sociopolitical uncertainties, but the nuclear case shows that such features can profoundly shape whether humanity survives emerging risks.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Kestrel🔸 @ 2025-08-20T06:11 (+2)
Thanks for the post!
There are ongoing conversations in the nuclear forensics community as to the role of commercial lithium enrichment in nuclear fusion development versus the proliferation of material for building hydrogen bombs. So many of the things you talk about aren't relics of history - they're live issues, today.