25 Years Later: Why We Still Don’t Adequately Govern the Misuse of Synthetic Biology
By C.K. @ 2025-05-16T14:14 (+11)
This is a linkpost to https://proteinstoparadigms.substack.com/p/25-years-later-why-we-still-dont
Prologue to the Blog Series “Can We Globally Govern Synthetic Biology Before Disaster?”
“We want to avoid things going very, very wrong before we take the governance of synthetic biology seriously. At the same time, the nature of synthetic biology means the bar for any proof-of-principle grounding sufficient concern about biotechnology seems very high. This is the current paradigm we are in, and it’s doomed to deliver results only in the wake of disaster.”
25 Years On: The Synthetic Biology Revolution
In 2000, research groups at Boston University and Princeton University created the first synthetic gene regulatory circuits, ushering in the birth of synthetic biology as we know it today. While synthetic biology took hold in the 2000s, however, the field traces its roots to the discovery of the first genetic regulatory mechanism in 1961—a circuit in E. coli that determines the uptake of lactose or glucose depending on which is more prevalent. This meant that, long before 2000, there was already plenty of speculation about the risks from synthetic biology. That speculation, however, was dominated by exposition of technological possibilities juxtaposed with uncertainties about how readily weaponisable synthetic biology could be. The pivotal example was the 1975 Asilomar Conference on Recombinant DNA, where concern centred primarily on the technical possibilities opened up by recombinant DNA. As Paul Berg—one of the organisers of the conference—writes, “the committee was particularly concerned that introduced genes could change normally innocuous microbes into cancer-causing agents or into human pathogens, resistant to antibiotics or able to produce dangerous toxins”.
A 1983 paper by Susan Wright and Robert Sinsheimer is notably prescient about the risks, anticipating the dual-use concern that “the distinction between ‘peaceful’ research and ‘biological weapons’ could quietly disappear”. Their concerns included agents immune to therapeutics, immune to solar radiation, and capable of transmission via unexplored vectors. A well-cited 1999 paper by Edgar J. DaSilva likewise highlights the potential of genetic modification to create countermeasure-resistant pathogens, pose risks from uncontrolled release, enable ethnic targeting, and incentivise proliferation given the availability of research funding. At the domestic level, by 1999 the U.S. did not have a dedicated institution for assessing biotechnology risks. This was not for lack of concern, however. At least into the 1980s, the U.S. continued to conduct extensive research into defences against biological weapons, spending $421m in 1985 on chemical and biological research where “many of these new military research dollars support work that uses new biotechnologies, such as recombinant DNA and hybridoma technology”. Susan Wright points to several statements, including by then Secretary of Defense Caspar Weinberger, that this research was purely defensive given the scale of harm from bioweapons yet their unreliability as offensive weapons.
Additionally, 2001 marked a critical juncture due to the 9/11 attacks and the ‘Amerithrax’ attacks, in which anthrax letters were sent through the U.S. postal service. The anthrax attacks and 9/11 directly led to extraordinary measures such as the drafting of the Model State Emergency Health Powers Act, which proposed extensive emergency powers for state public health authorities, and nearly $1bn in funding for state and local health departments to enhance their terrorism preparedness. On the international level, in 2002 the Australia Group substantially expanded its export control regime, adopting guidelines for the licensing of chemical and biological items and adding more rigorous controls to the export of fermenters, among other measures. In 2004, the United Nations Security Council adopted Resolution 1540, obliging states to prevent non-state actors from acquiring weapons of mass destruction, in response to these threats and potential proliferation.
At least by 2002, the threat of bioterrorism in particular was highly salient. However, much of this worry concerned the weaponisation of select agents like anthrax, given recent incidents. The threat from synthetic biology, by contrast, was laden with uncertainties. Those uncertainties would not be resolved even as proofs of concept began to emerge.
In 2002, Eckard Wimmer and colleagues at Stony Brook University artificially synthesised poliovirus. The central method involved manually assembling mail-ordered DNA fragments into a synthetic poliovirus genome over several months, underpinned by two developments in transcription and de novo synthesis that Wimmer helped pioneer in the 1980s and 1990s. As Wimmer notes, similar methods were in principle applicable to all viruses, but the genetic sequence of poliovirus was known, poliovirus is among the smallest well-studied viruses, and it has comparatively simple molecular machinery. By 2002, vaccines against poliovirus also meant it was likely not a major concern for industrialised countries. In other words, in 2002 a well-resourced, highly capable team of molecular biologists could synthesise short, simple viruses within months, though this did not pose a significant public health threat in the countries where these labs were disproportionately based. However, it would only be a matter of time before this capability became cheaper, more accessible, and applicable to more dangerous pathogens.
Unsurprisingly, there wasn’t a significant international response. For at least the next five years or so, there was little mention of the synthesis of poliovirus at the Biological Weapons Convention; no discernible change in international governance directly responding to the experiment; and several examples of submissions at the Biological Weapons Convention discussing synthetic biology while excluding this development. In the U.S., though, it was one of the key developments that motivated a highly influential report by the National Research Council that would come to be known as the Fink Report.
The Fink Report was clear that “the Wimmer approach offers no technical advantage to a terrorist”. However, it also noted that “states, groups, and individuals are pursuing a biological weapons capability—and the means for them to do so are widely available”. While much of the world did not respond to an advance that likely was not yet a threat, the striking conclusion of the Fink Report was that “the ability to synthesize a poliovirus genome and recover infectious virus was regarded as a foregone conclusion”, and it highlighted seven types of experiments that “represent experiments that are feasible with existing knowledge and technologies or with advances that the Committee could anticipate occurring in the near future”. The Fink Report would lead to the establishment of the National Science Advisory Board for Biosecurity in the U.S., among other measures, out of principled concern about the distribution of concerning capabilities. However, the synthesis of poliovirus otherwise did little to resolve the uncertainties that had existed at least since the 1970s.
Why We Still Don’t Adequately Govern the Misuse of Synthetic Biology
I think it’s important to tell this story because in 2025, 25 years on from the very birth of synthetic biology, I don’t think a great deal has substantially changed. Several key developments since then appear to represent a shift in the distribution of capabilities for the misuse of synthetic biology: the 2005 reconstruction of the 1918 influenza pandemic virus; the 2011 Fouchier and Kawaoka influenza gain-of-function experiments; and the 2017 synthesis of horsepox (published in 2018), to name a few. In every case, however, the international response was relatively limited. Technical uncertainties were not resolved, heated debates emerged on both sides of the regulatory divide, and relatively little formal international governance was implemented. There have indeed been several commendable steps forward over the last two decades, but the polarised debate on the governance of synthetic biology means they have (nearly) always been met by stakeholders with rival interests, constraining consensus.
This isn’t to say that there isn’t an extensive regime complex concerned with the governance of synthetic biology. The Wassenaar Arrangement, the Global Partnership Against the Spread of Weapons and Materials of Mass Destruction, the Global Health Security Initiative, the International Experts Group of Biosafety and Biosecurity Regulators (IEGBBR), and of course the Biological Weapons Convention are just some of the ~20 or more institutions that, at least theoretically, have a mandate touching on the governance of synthetic biology. However, most governance is voluntary, global biodefence is lacking, and there are several gaps in the global governance of synthetic biology, such as the lack of consensus restrictions on particular types of dual-use research of concern. The absence of verification at the Biological Weapons Convention, compared to the verification regimes of the Chemical Weapons Convention or the International Atomic Energy Agency, is the quintessential example of the difficulties of governing bioweapons more generally.
In some sense, this is not unusual. Uncertainty about the actual threat level is clearly a key reason behind the comparatively limited governance of synthetic biology globally, and pretty much all international governance has been preceded by things going wrong. The governance of nuclear weapons and the emergent taboo largely developed after the U.S. strikes on Hiroshima and Nagasaki; the governance of chemical weapons followed their extensive use throughout history; and the Montreal Protocol was a reaction to the discovery of the hole in the ozone layer in 1985. However, there is also a clear consensus about how dangerous bioweapons could be were dangerous capabilities sufficiently widely distributed. Much of the uncertainty boils down to when rather than if. We have also seen the use of biological weapons throughout history, limited cases of biological terrorism, and several proof-of-principle studies using synthetic biology to create dangerous pathogens.
In other words, we want to avoid things going very, very wrong before we take the governance of synthetic biology seriously. At the same time, the nature of synthetic biology means the bar for any proof-of-principle grounding sufficient concern about biotechnology seems very high. This is the current paradigm we are in, and it’s doomed to deliver results only in the wake of disaster. The best we can hope for is that the early warnings are not too bad. Nor is this limited to synthetic biology: it defines the challenge posed by a host of emerging technologies, from AI to nanotechnology, shaped by convergent incentives towards increasingly powerful, general-purpose, and digitised capabilities. I think a key failure mode in much of the current governance of synthetic biology is not taking this problem seriously enough.
I am sympathetic to the argument that high tacit-knowledge and practical barriers imply the threat from bioweapons may not be so dire or imminent. However, I think not enough academic attention has been paid to what we should do about this given the inherent indeterminacy in ascertaining the distribution of concerning capabilities. We are seeing model evaluations, uplift studies, and forecasting as attempts to ground sufficient concern ex ante, but also an insensitivity to when proofs of principle actually land, and too little effort to lower the bar for what counts as an adequate proof of principle in the first place. I think the rise of aggressive, national-security-oriented framings for governing emerging technologies, such as the importance of US primacy over China, is in part a mistaken product of these incentives. I think the current paradigm means we are downplaying the importance of measures that impose less of a trade-off in this respect, such as strategies to shape the development of synthetic biology itself (e.g. differential technology development). Finally, I suspect our heuristics for allocating resources towards governing synthetic biology are awaiting high-fidelity signals that I think we are unlikely to get until it is too late.
In other words, under a pure continuation of the status quo, I believe we will not adequately govern synthetic biology before disaster—certainly at the international scale, and many of the lessons here are also relevant for domestic governance. How bad such a shock would be remains an open question. However, given the potential scale of harm from infectious disease, we should err on the side of caution.
Can We Globally Govern Synthetic Biology Before Disaster?
For these reasons and more, I hope to do three things with a blog series I’ll be publishing over the next few months:
- Make a case for the need to globally govern synthetic biology grounded not in irresolvable forecasts about the distribution of capabilities, but in principled concerns about what capabilities could be conferred by synthetic biology, the importance of scope-sensitive disaster prevention, and the urgency of the work required even on conservative timelines for the distribution of concerning capabilities.
- Flesh out some of the challenges over the last 25 years in governing synthetic biology in the first place, such as the limitations of the dual-use dilemma as a framework; the problem of indeterminate affordances in ascertaining the distribution of capabilities; and gaps in the global governance of synthetic biology—especially beyond just the Biological Weapons Convention.
- Highlight a bunch of potential interventions and strategies I am genuinely excited about in light of all the above, such as strategies to shape technological development itself and interventions that are best-placed to be scaled globally.
Spoiler: I don’t actually know whether we can adequately govern synthetic biology before disaster. This hugely depends on timelines that may be greatly confounded by developments in artificial intelligence, as well as on the slowness and vicissitudes of international governance more generally. However, I certainly think it’s worth trying. I also think the need for genuinely anticipatory governance holds across a whole class of technologies, which makes these ideas extremely important and worth widening the conversation around. The very qualities that imbue emerging technologies with unprecedented potential for benefit may simultaneously render them ungovernable until their capacity for catastrophe is irrevocably proven, and I think this genuinely represents a new paradigm in (particularly global) governance. In turn, I think there is plenty of scope for new ideas, interventions, and projects that get us one step closer to enduring security.
Upcoming posts for “Can We Globally Govern Synthetic Biology Before Disaster?” will first be released on my Substack every ~1-2 weeks.
SummaryBot @ 2025-05-16T14:50 (+1)
Executive summary: Despite 25 years of synthetic biology progress and recurring warnings, the world still lacks adequate international governance to prevent its misuse—primarily because high uncertainty, political disagreement, and a reactive paradigm have hindered proactive regulation; this exploratory blog series argues for anticipatory governance based on principle, not just proof-of-disaster.
Key points:
- Historical governance has been reactive, not preventive: From Asilomar in 1975 to the anthrax attacks in 2001, most major governance shifts occurred after crises, with synthetic biology largely escaping meaningful regulation despite growing capabilities and several proof-of-concept demonstrations.
- Synthetic biology’s threat remains ambiguous but plausible: Although technical barriers and tacit knowledge requirements persist, experiments like synthesizing poliovirus (2002), the 1918 flu (2005), and horsepox (2017) show it is possible to recreate or modify pathogens—yet such developments have prompted little international response.
- Existing institutions are fragmented and weakly enforced: Around 20 organizations theoretically govern synthetic biology (e.g. the Biological Weapons Convention, Wassenaar Arrangement), but most lack enforcement mechanisms, consensus on dual-use research, or verification protocols.
- The current paradigm depends on waiting for disaster: The bar for actionable proof remains too high, leaving decision-makers reluctant to impose controls without a dramatic event; this logic is flawed but persistent across other high-risk technologies like AI and nanotech.
- New governance strategies should focus on shaping development: The author urges a shift toward differential technology development and proactive, low-tradeoff interventions that don’t require high certainty about misuse timelines to be justified.
- This series aims to deepen the conversation: Future posts will explore governance challenges, critique existing frameworks (like the dual-use dilemma), and propose concrete ideas to globally govern synthetic biology before disaster strikes—though the author admits it’s uncertain whether this can be achieved in time.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.