Exaggerating the risks (Part 13: Ord on Biorisk)

By Vasco Grilo🔸 @ 2023-12-31T08:45 (+57)

This is a linkpost to https://ineffectivealtruismblog.com/2023/12/29/exaggerating-the-risks-part-13-ord-on-biorisk/

This is a crosspost for Exaggerating the risks (Part 13: Ord on Biorisk), as published by David Thorstad on 29 December 2023.

This massive democratization of technology in biological sciences … is at some level fantastic. People are very excited about it. But this has this dark side, which is that the pool of people that could include someone who has … omnicidal tendencies grows many, many times larger, thousands or millions of times larger as this technology is democratized, and you have more chance that you get one of these people with this very rare set of motivations where they’re so misanthropic as to try to cause … worldwide catastrophe.

Toby Ord, 80,000 Hours Interview

Listen to this post [there is an option for this in the original post]

1. Introduction

This is Part 13 of my series Exaggerating the risks. In this series, I look at some places where leading estimates of existential risk look to have been exaggerated.

Part 1 introduced the series. Parts 2-5 (sub-series: “Climate risk”) looked at climate risk. Parts 6-8 (sub-series: “AI risk”) looked at the Carlsmith report on power-seeking AI.

Parts 9, 10 and 11 began a new sub-series on biorisk. In Part 9, we saw that many leading effective altruists give estimates between 1.0% and 3.3% for the risk of existential catastrophe from biological causes by 2100. I think these estimates are a bit too high.

Because I have had a hard time getting effective altruists to tell me directly what the threat is supposed to be, my approach was to first survey the reasons why many biosecurity experts, public health experts, and policymakers are skeptical of high levels of near-term existential biorisk. Parts 9, 10 and 11 gave a dozen preliminary reasons for doubt, surveyed at the end of Part 11.

The second half of my approach is to show that initial arguments by effective altruists do not overcome the case for skepticism. Part 12 examined a series of risk estimates by Piers Millett and Andrew Snyder-Beattie. We saw, first, that many of these estimates are orders of magnitude lower than those returned by leading effective altruists and second, that Millett and Snyder-Beattie provide little in the way of credible support for even these estimates.

Today’s post looks at Toby Ord’s arguments in The Precipice for high levels of existential risk. Ord estimates the risk of irreversible existential catastrophe by 2100 from naturally occurring pandemics at 1/10,000, and the risk from engineered pandemics at a whopping 1/30. That is a very high number. In this post, I argue that Ord does not provide sufficient support for either of his estimates.

2. Natural pandemics

Ord begins with a discussion of natural pandemics. I don’t want to spend too much time on this issue, since Ord takes the risk of natural pandemics to be much lower than that of engineered pandemics. At the same time, it is worth asking how Ord arrives at a risk of 1/10,000.

Effective altruists effectively stress that humans have trouble understanding how large certain future-related quantities can be. For example, there might be 10^20, 10^50 or even 10^100 future humans. However, effective altruists do not equally stress how small future-related probabilities can be. Risk probabilities can be on the order of 10^-2 or even 10^-5, but they can also be a great deal lower than that: for example, 10^-10, 10^-20, or 10^-50 [for example, a terrorist attack causing human extinction is astronomically unlikely on priors].

Most events pose existential risks of this magnitude or lower, so if Ord wants us to accept that natural pandemics have a 1/10,000 chance of leading to irreversible existential catastrophe by 2100, Ord owes us a solid argument for this conclusion. It is certainly far from obvious: for example, devastating as the COVID-19 pandemic was, I don’t think anyone believes that 10,000 random re-rolls of the COVID-19 pandemic would lead to at least one existential catastrophe. The COVID-19 pandemic just was not the sort of thing to pose a meaningful threat of existential catastrophe, so if natural pandemics are meant to go beyond the threat posed by the recent COVID-19 pandemic, Ord really should tell us how they do so.

Ord begins by surveying four historical pandemics: the Plague of Justinian, Black Death, Columbian Exchange, and Spanish Flu. Ord notes that while each of these events led to substantial loss of life, most were met with surprising resilience.

Even events like these fall short of being a threat to humanity’s longterm potential. In the great bubonic plagues we saw civilization in the affected areas falter, but recover. The regional 25 to 50 percent death rate was not enough to precipitate a continent-wide collapse of civilization. It changed the relative fortunes of empires, and may have altered the course of history substantially, but if anything, it gives us reason to believe that human civilization is likely to make it through future events with similar death rates, even if they were global in scale.

I drew a similar lesson from the study of historical pandemics in Part 9 of this series.

Next, Ord notes that the fossil record suggests the historical risk of existential catastrophe from naturally occurring pandemics was low:

The strongest case against existential risk from natural pandemics is the fossil record argument from Chapter 3. Extinction risk from natural causes above 0.1 percent per century is incompatible with the evidence of how long humanity and similar species have lasted.

This accords with what we found in Part 9 of this series: the fossil record reveals only a single confirmed mammalian extinction due to disease, and that was the extinction of a species of rat in a very small and remote location (Christmas Island).

Of course, Ord notes, levels of risk from natural pandemics have changed both for the better and for the worse in recent history. On the one hand, we are more vulnerable because there are more of us, and we live in a denser and more interconnected society. On the other hand, we have excellent medicine, technology, and public health to protect us. For example, we saw in Part 10 of this series that simple non-pharmaceutical interventions in Wuhan and Hubei may have reduced cases by a factor of 67 by the end of February 2020, and that for the first time a global pandemic was ended in real-time by the development of an effective vaccine.

So far, we have seen the following: Historical pandemics suggest, if anything, surprising resilience of human civilization to highly destructive events. The fossil record suggests that disease rarely leads to mammalian extinction, and while human society has since changed in some ways that make us more vulnerable than our ancestors were, we have also changed in some ways that make us less vulnerable than our ancestors were. So far, we have been given no meaningful argument for a 1/10,000 chance of irreversible existential catastrophe from natural pandemics by 2100. Does Ord have anything in the way of a positive argument to offer?

Here is the entire remainder of Ord’s analysis of natural pandemics:

It is hard to know whether these combined effects have increased or decreased the existential risk from pandemics. This uncertainty is ultimately bad news: we were previously sitting on a powerful argument that the risk was tiny; now we are not. But note that we are not merely interested in the direction of the change, but also in the size of the change. If we take the fossil record as evidence that the risk was less than one in 2,000 per century, then to reach 1 percent per century the pandemic risk would need to be at least 20 times larger. This seems unlikely. In my view, the fossil record still provides a strong case against there being a high extinction risk from “natural” pandemics. So most of the remaining existential risk would come from the threat of permanent collapse: a pandemic severe enough to collapse civilization globally, combined with civilization turning out to be hard to re-establish or bad luck in our attempts to do so.

What is the argument here? Certainly Ord makes a welcome concession in this passage: since natural pandemics are unlikely to cause human extinction in this century, most of the risk should come from threats of civilizational collapse. But that isn’t an argument. It’s a way of setting the target that Ord needs to argue for. Why think that civilization stands a 1/10,000 risk of collapse, let alone permanent collapse without recovery, by 2100 due to natural pandemics? We really haven’t been given any substantive argument at all for this conclusion.

3. Laboratory research

Another potential biorisk is the threat posed by unintentional release of pathogens from research laboratories. Ord notes that biological research is progressing quickly:

Progress is continuing at a rapid pace. The last ten years have seen major qualitative breakthroughs, such as the use of CRISPR to efficiently insert new genetic sequences into a genome and the use of gene drives to efficiently replace populations of natural organisms in the wild with genetically modified versions. Measures of this progress suggest it is accelerating, with the cost to sequence a genome falling by a factor of 10,000 since 2007 and with publications and venture capital investment growing quickly. This progress in biotechnology seems unlikely to fizzle out soon: there are no insurmountable challenges looming; no fundamental laws blocking further developments.

That’s fair enough. But how do we get from there to a 1/30 chance of existential catastrophe?

Ord begins by discussing the advent of gain-of-function research, focusing on the Dutch researcher Ron Fouchier, who passed strains of H5N1 through ferrets until the virus gained the ability to be transmitted between mammals. That is, by now, old news. Indeed, we saw in Part 12 of this series that the US Government commissioned in 2014 a thousand-page report on the risks and benefits of gain-of-function research. That report made no mention of existential risks of any kind: the largest casualty figure modeled in this report is 80 million.

Does Ord provide an argument to suspect that gain-of-function research could lead to existential catastrophe? Ord goes on to discuss the risks of laboratory escapes. These are, again, well-known and discussed in the mainstream literature, including the government report featured in Part 12 of this series. Ord concludes from this discussion that:

In my view, this track record of escapes shows that even BSL-4 is insufficient for working on pathogens that pose a risk of global pandemics on the scale of the 1918 flu or worse—especially if that research involves gain-of-function.

But this is simply not what is at issue: no one thinks that pandemics like the 1918 flu or COVID-19 pandemic pose a 1/30 chance of irreversible existential catastrophe by 2100. Perhaps the argument is meant to be contained in the final phrase ("1918 flu or worse"), but if so, it isn't an argument, merely a statement of Ord's view.

Aside from a list of notable laboratory escapes, this is the end of Ord’s discussion of risks posed by unintentional release of pathogens from research laboratories. Is this discussion meant to ground a 1/30 risk of existential catastrophe by 2100? I hope not, because there is nothing in the way of new evidence in this section, and very little in the way of argument.

4. Bioweapons

The final category of biorisk discussed by Ord is the risk posed by biological weapons. Ord begins by reviewing historical bioweapons programs, including the Soviet bioweapons program as well as biowarfare by the British army in Canada in the 18th century CE, ancient biowarfare in Asia Minor in the 13th century BCE, and potential intentional spread of the Black Death by invading Mongol armies.

I also discussed the Soviet bioweapons program in Part 9 of this series, since it is the most advanced (alleged) bioweapons program of which I am aware. We saw there that a leading bioweapons expert drew the following conclusion from study of the Soviet bioweapons program:

In the 20 years of the Soviet programme, with all the caveats that we don’t fully know what the programme was, but from the best reading of what we know from the civil side of that programme, they really didn’t get that far in creating agents that actually meet all of those criteria [necessary for usefulness in biological warfare]. They got somewhere, but they didn’t get to the stage where they had a weapon that changed their overall battlefield capabilities; that would change the outcome of a war, or even a battle, over the existing weapon systems available to them.

Ord’s discussion of the Soviet bioweapons program tends rather towards omission of the difficulties posed by the program, instead playing up its dangers:

The largest program was the Soviets’. At its height it had more than a dozen clandestine labs employing 9,000 scientists to weaponize diseases ranging from plague to smallpox, anthrax and tularemia. Scientists attempted to increase the diseases’ infectivity, lethality and resistance to vaccination and treatment. They created systems for spreading the pathogens to their opponents and built up vast stockpiles, reportedly including more than 20 tons of smallpox and of plague. The program was prone to accidents, with lethal outbreaks of both smallpox and anthrax … While there is no evidence of deliberate attempts to create a pathogen to threaten the whole of humanity, the logic of deterrence or mutually assured destruction could push superpowers or rogue states in that direction.

I’m a bit disappointed by the selective use of details here. We are told all of the most frightening facts about the Soviet program: how many scientists they employed, how large their stockpiles were, and how they were prone to accidents. But we aren’t told how far they fell from their goal of creating a successful bioweapon.

Is there anything in this passage that grounds a case for genuine existential risk? Ord notes that "while there is no evidence of deliberate attempts to create a pathogen to threaten the whole of humanity, the logic of deterrence or mutually assured destruction could push superpowers or rogue states in that direction." What should we make of this argument? At a minimum, we should ask Ord for more details.

We've seen throughout Parts 9, 10 and 11 of this series that it is extremely difficult to engineer a pathogen which could lead to existential catastrophe. Ord seems to be claiming not only that such a pathogen could be developed in this century, but also that states may soon develop such a pathogen as a form of mutually assured destruction. Both claims need substantial argument, the latter not least because humanity already has access to a much more targeted deterrent in the form of nuclear weapons. That isn't to say that Ord's claim here is false, but it is to say that a single sentence won't do. If there is a serious case to be made that states can, and soon may, develop pathogens which could lead to existential catastrophe in order to deter others, that case needs to be made with the seriousness and care that it deserves.

Ord notes that historical data does not reflect substantial casualties from bioweapons. However, Ord suggests, we may have too little data to generalize from, and in any case the data suggests a “power law” distribution of fatalities that may favor high estimates of existential risk. That’s fair enough, though we saw in Part 12 that power law estimates of existential biorisk face substantial difficulties, and also that the most friendly published power law estimate puts the risks orders of magnitude lower than Ord does.

From here, Ord transitions into a discussion of the dangers posed by the democratization of biotechnology and the spread of 'do-it-yourself' science. Ord writes:

Such democratization promises to fuel a boom of entrepreneurial biotechnology. But since biotechnology can be misused to lethal effect, democratization also means proliferation. As the pool of people with access to a technique grows, so does the chance it contains someone with malign intent.

We discussed the risk of 'do-it-yourself' science in Part 10 of this series. There, we saw that a paper by David Sarpong and colleagues laments "Sensational and alarmist headlines about DiY science" which "argue that the practice could serve as a context for inducing rogue science which could potentially lead to a 'zombie apocalypse'." These experts find little empirical support for any such claims.

That skepticism is echoed by most leading experts and policymakers. For example, we also saw in Part 10 that a study of risks from synthetic biology by Catherine Jefferson and colleagues decries the “myths” that “synthetic biology could be used to design radically new pathogens” and “terrorists want to pursue biological weapons for high consequence, mass casualty attacks”, concluding:

Any bioterrorism attack will most likely be one using a pathogen strain with less than optimal characteristics disseminated through crude delivery methods under imperfect conditions, and the potential casualties of such an attack are likely to be much lower than the mass casualty scenarios frequently portrayed. This is not to say that speculative thinking should be discounted … however, problems arise when these speculative scenarios for the future are distorted and portrayed as scientific reality.

The experts are skeptical. Does Ord give us any reason to doubt this expert consensus? The only remaining part of Ord’s analysis is the following:

People with the motivation to wreak global destruction are mercifully rare. But they exist. Perhaps the best example is the Aum Shinrikyo cult in Japan, active between 1984 and 1995, which sought to bring about the destruction of humanity. They attracted several thousand members, including people with advanced skills in chemistry and biology. And they demonstrated that it was not mere misanthropic ideation. They launched multiple lethal attacks using VX gas and sarin gas, killing 22 people and injuring thousands. They attempted to weaponize anthrax, but did not succeed. What happens when the circle of people able to create a global pandemic becomes wide enough to include members of such a group? Or members of a terrorist organization or rogue state that could try to build an omnicidal weapon for the purposes of extortion or deterrence?

The first half of this paragraph suggests that although few sophisticated groups would want to cause an existential catastrophe, some such as Aum Shinrikyo have had that motivation. The best thing to say about this claim is that it isn’t what is needed: we were looking for an argument that advances in biotechnology will enable groups to bring about existential catastrophe, not that groups will be motivated to do so. However, we also saw in Part 2 of my series on epistemics that this claim is false: Aum Shinrikyo did not seek to “bring about the destruction of humanity,” and the falsity of this claim is clear enough from the research record that it is hard to understand why Ord would be repeating it.

The second half of this paragraph concludes with two leading questions: “What happens when the circle of people able to create a global pandemic becomes wide enough to include members of such a group? Or members of a terrorist organization or rogue state that could try to build an omnicidal weapon for the purposes of extortion or deterrence?” But questions are not arguments, and they are especially not arguments for what Ord needs to show: that the democratization of biotechnology will soon provide would-be omnicidal actors with the means to bring about existential catastrophe.

5. Governance

The chapter concludes with a discussion of some ways that biohazards might be governed, and some failures of current approaches. I don’t want to dwell on these challenges, in large part because I agree with most of them, though I would refer readers to Part 2 of my series on epistemics for specific disagreements about the tractability of progress in this area.

Ord begins by noting that since its founding, the Biological Weapons Convention (BWC) has been plagued with problems. The BWC has a minuscule staff and no effective means of monitoring or enforcing compliance. This limits the scope of international governance of biological weapons.

Ord notes that synthetic biology companies often make voluntary efforts to manage the risks posed by synthetic biology, such as screening orders for dangerous compounds. This is not surprising: theory suggests that large companies will often self-regulate as a strategy for avoiding government regulation. As Ord notes, there is some room for improvement: only about 80% of orders are screened, and future advances may make screening more difficult. That is fair enough.

Ord observes that the scientific community has also tried to self-regulate, though with mixed success.

All of this is quite reasonable, but it does not do much to bolster the fundamental case for a 1/30 risk of existential catastrophe from engineered pandemics by 2100. It might make it easier for those already convinced of the risk to see how catastrophes could fail to be prevented, but what we really need from Ord is more argument bearing on the nature and prevalence of the underlying risks.

6. Taking stock

Toby Ord claims that there is a 1/30 chance of irreversible existential catastrophe by 2100 from engineered pandemics. That is an astoundingly high number.

We saw in Parts 9-11 of this series that most experts are deeply skeptical of Ord’s claim, and that there are at least a dozen reasons to be wary. This means that we should demand especially detailed and strong arguments from Ord to overcome the case for skepticism.

Today's post reviewed every argument, or in many cases every hint of an argument, made by Ord in support of his risk estimates. We found that Ord draws largely on a range of familiar facts about biological risk which are common ground between Ord and the skeptical expert consensus. We saw that Ord gives few detailed arguments in favor of his risk estimates, and that the arguments he does give fall a good deal short of his argumentative burden.

We also saw that Ord estimates a 1/10,000 chance of irreversible existential catastrophe by 2100 from natural pandemics. Again, we saw that very little support is provided for this estimate.

This isn't a situation that should sit comfortably with effective altruists. Extraordinary claims require extraordinary evidence, yet here as so often before, extraordinary claims about future risks are supported by rather less than extraordinary evidence. Much more is needed to ground high risk estimates, so we will have to look elsewhere for such arguments.


Steven Byrnes @ 2024-01-01T15:07 (+35)

It is certainly far from obvious: for example, devastating as the COVID-19 pandemic was, I don’t think anyone believes that 10,000 random re-rolls of the COVID-19 pandemic would lead to at least one existential catastrophe. The COVID-19 pandemic just was not the sort of thing to pose a meaningful threat of existential catastrophe, so if natural pandemics are meant to go beyond the threat posed by the recent COVID-19 pandemic, Ord really should tell us how they do so.

This seems very misleading. We know that COVID-19 has <<5% IFR. Presumably the concern is that some natural pandemics may be much much more virulent than COVID-19 was. So it’s important that the thing we imagine is “10,000 random re-rolls in which there is a natural pandemic”, NOT “10,000 random re-rolls of COVID-19 in particular”. And then we can ask questions like “How many of those 10,000 natural pandemics have >50% IFR? Or >90%? And what would we expect to happen in those cases?” I don’t know what the answers are, but that’s a much more helpful starting point I think.
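
To make the re-roll framing concrete, here is a minimal Monte Carlo sketch: draw an infection fatality rate (IFR) for each hypothetical natural pandemic from an assumed severity distribution and count how many draws cross a given threshold. The log-uniform distribution and its bounds below are invented purely for illustration; they are not estimates from this post, from Ord, or from the commenter.

```python
import random

random.seed(0)

N_REROLLS = 10_000  # hypothetical "re-rolls", each producing one natural pandemic

def sample_ifr():
    # Purely illustrative severity model: IFR is log-uniform between 0.01% and 100%.
    # A real analysis would need an empirically grounded distribution.
    return 10 ** random.uniform(-4, 0)

ifrs = [sample_ifr() for _ in range(N_REROLLS)]
for threshold in (0.05, 0.50, 0.90):
    count = sum(ifr > threshold for ifr in ifrs)
    print(f"IFR > {threshold:.0%}: {count} of {N_REROLLS} re-rolls")
```

The point of the sketch is only that the answer depends almost entirely on the assumed tail of the severity distribution, which is exactly the quantity under dispute.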

We discussed the risk of 'do-it-yourself' science in Part 10 of this series. There, we saw that a paper by David Sarpong and colleagues laments "Sensational and alarmist headlines about DiY science" which "argue that the practice could serve as a context for inducing rogue science which could potentially lead to a 'zombie apocalypse'." These experts find little empirical support for any such claims.

Maybe this is addressed in Part 10, but this paragraph seems misleading insofar as Ord is talking about risk by 2100, and a major part of the story is that DIY biology in, say, 2085 may be importantly different and more dangerous than DIY biology in 2023, because the science and tech keeps advancing and improving each year.

Needless to say, even if we could be 100% certain that DIY biology in 2085 will be super dangerous, there obviously would not be any “empirical support” for that, because 2085 hasn’t happened yet. It’s just not the kind of thing that presents empirical evidence for us to use. We have to do the best we can without it. The linked paper does not seem to discuss that issue at all, unless I missed it.

(I have a similar complaint about the discussion of Soviet bioweapons in Section 4—running a bioweapons program with 2024 science & technology is presumably quite different than running a bioweapons program with 1985 science & technology, and running one in 2085 would be quite different yet again.)

Vasco Grilo @ 2024-01-01T15:47 (+6)

Thanks, Steven!

This seems very misleading. We know that COVID-19 has <<5% IFR. Presumably the concern is that some natural pandemics may be much much more virulent than COVID-19 was. So it’s important that the thing we imagine is “10,000 random re-rolls in which there is a natural pandemic”, NOT “10,000 random re-rolls of COVID-19 in particular”. And then we can ask questions like “How many of those 10,000 natural pandemics have >50% IFR? Or >90%? And what would we expect to happen in those cases?” I don’t know what the answers are, but that’s a much more helpful starting point I think.

Yupe, I think those are the questions to ask. My interpretation of the passage you quoted is that David is saying that Toby did not address them.

Maybe this is addressed in Part 10, but this paragraph seems misleading insofar as Ord is talking about risk by 2100, and a major part of the story is that DIY biology in, say, 2085 may be importantly different and more dangerous than DIY biology in 2023, because the science and tech keeps advancing and improving each year.

Good point. My recollection is that David acknowledges that in the series, but argues that further arguments would be needed for one to update to the super high risk claimed by Toby.

Needless to say, even if we could be 100% certain that DIY biology in 2085 will be super dangerous, there obviously would not be any “empirical support” for that, because 2085 hasn’t happened yet. It’s just not the kind of thing that presents empirical evidence for us to use. We have to do the best we can without it. The linked paper does not seem to discuss that issue at all, unless I missed it.

I think empirical evidence could still inform our assessment of the risk to a certain extent. For example, one can try to see how the number of lab accidents correlates with the cost of sequencing DNA, and then extrapolate the number of accidents into the future based on decreasing sequencing cost. Toby discusses some of these matters, but the inference of the 3 % bio existential risk from 2021 to 2120 still feels very ad hoc and opaque to me.
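
As a sketch of the kind of extrapolation gestured at here, one could fit a simple trend of accident counts against the log of sequencing cost and project it forward under an assumed further cost decline. All numbers below are invented purely to illustrate the method; they are not actual accident statistics.

```python
# Hypothetical data: (log10 of sequencing cost in $, reported lab accidents per year).
# These figures are invented purely to illustrate the method.
data = [(7, 1.0), (6, 1.5), (5, 2.5), (4, 3.5), (3, 5.0)]

# Ordinary least-squares fit of accidents against log10(cost).
n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in data) / sum((x - mean_x) ** 2 for x, _ in data)
intercept = mean_y - slope * mean_x

# Project to futures with still-cheaper sequencing (assumed, not forecast).
for log_cost in (2, 1, 0):
    projected = intercept + slope * log_cost
    print(f"log10(cost) = {log_cost}: ~{projected:.1f} accidents/year (toy extrapolation)")
```

Even a toy version like this makes clear how sensitive the projection is to the assumed data and functional form, which is part of why the headline risk figure feels opaque.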

(I have a similar complaint about the discussion of Soviet bioweapons in Section 4—running a bioweapons program with 2024 science & technology is presumably quite different than running a bioweapons program with 1985 science & technology, and running one in 2085 would be quite different yet again.)

Note safety measures (e.g. vaccines and personal protective equipment) would also improve alongside capabilities, so the net effect is not obvious. I guess risk will increase, but Toby's guess for bio existential risk appears quite high.

ClimateDoc @ 2024-01-01T16:15 (+27)

We saw in Parts 9-11 of this series that most experts are deeply skeptical of Ord’s claim


How is it being decided that "most experts" think this? I took a look and part 10 referenced two different papers with a total of 7 authors and a panel of four experts brought together by one of those authors - it doesn't seem clear to me from this that this view is representative of the majority of experts in the space.

Vasco Grilo @ 2024-01-11T05:37 (+2)

Nice point! In The Existential Risk Persuasion Tournament (XPT), domain experts forecasted the risk of an engineered pathogen causing extinction by 2100 to be 1 % (Table 3). However, it is worth noting the sample of experts may not be representative:

"The sample drew heavily from the Effective Altruism (EA) community: about 42% of experts and 9% of superforecasters reported that they had attended an EA meetup. In this report, we separately present forecasts from domain experts and
non-domain experts on each question."

Peeter Laas @ 2024-01-11T05:10 (+12)

Amazing thread, Vasco! Your post hits like a fresh blast of reason in the middle of a doomsday, conspiracy-like fringe. I think you are on the right track by addressing 'the risk hype.'

I have a Ph.D. in gene technology and have served on the bioadvisory board of an EU member state as a representative of environmental protection agencies for 13 years. To be honest, the “Biosecurity & Pandemics” topic enticed me to join the EA Forum, and I have been having a hard time understanding how this fits with EA.

There are only a few things more wasteful and frankly counterproductive to spend money on than mitigating obscure pandemic/bioweapon threats. It could turn out to be useful, but it lands smack in the middle of other very high-risk, low-reward investments. For example, the US has spent something like $40-50 billion since 2001 on anthrax research alone – a disease that causes only a few cases in the US and a few thousand globally per year. An incredibly miserable investment-reward balance. I know that working with some random adenovirus or Mycobacterium tuberculosis doesn't sound as sexy as running bioreactors with Y. pestis in BSL4, but it would be orders of magnitude more effective for humanity.

This brings me to my second point: the incident with the Ames strain from USAMRIID in the 2001 anthrax letters perfectly illustrates the self-fulfilling prophecy generated by circulating these agents in labs/industry in order to develop countermeasures. In fact, such activities and initiatives are the main force increasing the risk of existential catastrophe imposed by these agents. Therefore, I cannot see us reaching anywhere near a 1% chance of existential catastrophe from biological causes by 2100 without unnecessarily spreading the corresponding infrastructure and agents. Even covert bioweapon development by nation-states is a much smaller problem to deal with.

And thirdly, I would address this notion – probably doing some heavy lifting to prop up the chances of existential catastrophe in some eyes – that any day now, some nut will self-educate on YouTube or some skilled professional with lab access will flip and construct a DIY bioweapon capable of posing a critical threat to society. I will give it some rope in terms of somebody starting that “secret project” not being too far-fetched. Can happen, people can be very weird! However, I can see difficulties even if the person gains access to free and unlimited NA printing resources. There is a reason why the Soviet Union had tons of anthrax and smallpox – you are going to need a large-scale, sophisticated delivery system for the initial release. Otherwise, the list of victims will include only the bioterrorist or close people, and it will never be more than a regional incident.

Not to mention all the DIY genetic engineering projects that people are much more likely to work on. From doing home gene therapy on a pet (or on oneself) to larger-scale synthetic biology projects to enable a yeast to synthesize the original special ingredient of the original version of Coca-Cola. Moreover, during times when environmental activism swings to many strange places, the next iteration of Ted Kaczynski could easily be a person who seeks to modify the biosphere in order to protect it from humanity – e.g., an agent that impacts fish, making them inedible for people. Or plastic-degrading bacteria released into the ocean.

Don't get me wrong - I do think we have to prepare and stay vigilant. There will be a new pandemic - the only question is when and with which agent. Hopefully we don't manifest it artificially out of fear. And way to go, Vasco, for being the tip of the spear in unmasking the delusions of grandeur behind this 'risk reasoning'.

David Thorstad @ 2024-01-17T06:25 (+4)

Thanks, Peeter!

I wonder if you'd be willing to be a bit more vocal about this. For example, the second most upvoted comment (27 karma right now) takes me to task for saying that "most experts are deeply skeptical of Ord’s claim"  (1/30 existential biorisk in the next 100 years).

I take that to be uncontroversial. Would you be willing to say so?
 

JWS @ 2024-01-17T10:36 (+8)

David, as someone who's generally a big fan of your work, it's kind of on you to provide evidence that most experts are 'deeply skeptical' of Ord's claim. And here's the thing, you might not even be wrong about it! But you present this claim as 'uncontroversial', and yet the evidence you provide does not match that level of confidence. I find it strange/disappointing that you don't address this, given that it's a common theme on your blog that EAs often make overconfident claims.

For example, in Part 10 of 'Exaggerating the risks' your evidence for the claim about 'most experts' is only:

  • "A group of health researchers from King’s College"
  • "an expert panel on risks posed by bioweapons" convened by one of the above researchers
  • "David Sarpong and colleagues"

Which you then use to conclude "Experts widely believe that existential biorisk in this century is quite low. The rest of us could do worse than to follow their example." But you haven't argued for this. What's the numerator and denominator here? How are you so certain without calculating the proportion? What does 'widely believe' mean? Doesn't Ord also think existential biorisk is 'quite low'? 3.33% makes sense as 'quite low' to me; maybe you mean 'exceedingly low'/'vanishingly small chance' or something like that instead?

Then, in part 11, you appeal to how in the recent XPT study Superforecasters reduced their median estimate of existential risk from bio from 0.1% to 0.01%, but you don't mention that in the same study[1] domain experts increased their estimate of x-risk on the same question from 0.7% to 1%. So in this study, when the "experts" don't match your viewpoint, you suddenly only mention the non-experts and decline to mention that expert consensus moved in the opposite direction to the one you, or your case, would expect. And even then, a 1% vs 3.33% difference in subjective risk estimation doesn't sound to me like a gap that merits describing 'deep scepticism' of the latter from the former.

I like your work, and I think that you successfully 'kicked the tires' on the Aum Shinrikyo case presented in The Precipice, for example. But you conclude this mini-series in part 11 by saying this:

"But experts are largely unconvinced that there is a serious risk of large-scale biological attacks, particularly on a scale that could lead to existential catastrophe."

But it also turns out, from what I can tell, that most EAs don't think so either! So maybe you're just going after Ord here, but then again I think that ~1% vs 3.33% estimation of risk doesn't seem as big a difference as you claim. But I don't think that's what you're restricting your claims to, since you also mention 'many leading effective altruists' and also use this to push back on your perceived issue with how EAs treat 'expert' evidence, for example. But much like your critiques of EA x-risk work, I think you continually fail to produce either arguments, or good arguments, for this particular claim that can justify the strength of your position.

  1. ^

    Page 66 on the pdf

ClimateDoc @ 2024-01-19T22:49 (+7)

the second most upvoted comment (27 karma right now) takes me to task for saying that "most experts are deeply skeptical of Ord’s claim"  (1/30 existential biorisk in the next 100 years).

I take that to be uncontroversial. Would you be willing to say so?

 

I asked because I'm interested - what makes you think most experts don't think biorisk is such a big threat, beyond a couple of papers?

Jeff Kaufman @ 2024-01-15T20:35 (+4)

I'm glad you're bringing your expertise into this area, thanks for jumping in! Reading your comment, however, it sounds to me like you're responding as if EAs concerned about biosecurity are advocating work that's pretty different from what we actually advocate. Some examples:

the US has spent something like $40-50 billion dollars since 2001 on anthrax research alone

I don't know any EAs who think this has been a good use of funds. Biosecurity isn't a knob that we turn between 'less' and 'more', it's a broad field where we can try to discover and fund the best interventions. To make an analogy to global health and development, if we learn that funding for textbooks in low income countries has generally been very low impact (say, because of issues with absenteeism, nutrition, etc) that isn't very relevant when deciding whether to distribute anti-malarial nets.

running bioreactors with Y. pestis in BSL4, but would be orders of magnitude more effective for humanity.

I don't think any EAs are doing this kind of work, and the ones I've talked to generally think this is harmful and should stop.

the incident with the Ames strain from USAMRIID in the 2001 anthrax letters perfectly illustrates the self-fulfilling prophecy generated by circulating these agents in labs/industry in order to develop countermeasures. In fact, such activities and initiatives are the main force increasing the risk of existential catastrophe imposed by these agents.

This is another thing that EAs don't do, and generally don't think others should do.

Later, you do get into areas where EAs do work. For example:

that any day now, some nut will self-educate on YouTube or some skilled professional with lab access will flip and construct a DIY bioweapon capable of posing a critical threat to society.

Yes, this is a real concern for many of us. I wrote a case for it recently in the post Out-of-distribution Bioattacks.

There is a reason why the Soviet Union had tons of anthrax and smallpox – you are going to need a large-scale, sophisticated delivery system for the initial release. Otherwise, the list of victims will include only the bioterrorist or close people, and it will never be more than a regional incident.

This seems to miss one of the main reasons that biological attacks are risky: contagion. With a contagious pathogen you can infect almost the whole world with only a small amount of seeding. This gives two main patterns, 'wildfire' pandemics (ex: a worse Ebola) which are obvious but so contagious that they're extremely challenging to stop, and 'stealth' pandemics (ex: a worse HIV) that first infect many people and only much later cause massive harm. See Securing Civilisation Against Catastrophic Pandemics.

Happy to get into any of this more!

Vasco Grilo @ 2024-01-15T21:46 (+2)

Thanks for the clarifications, Jeff!

Biosecurity isn't a knob that we turn between 'less' and 'more', it's a broad field where we can try to discover and fund the best interventions. To make an analogy to global health and development, if we learn that funding for textbooks in low income countries has generally been very low impact (say, because of issues with absenteeism, nutrition, etc) that isn't very relevant when deciding whether to distribute anti-malarial nets.

I wonder how much interventions in biosecurity differ in their cost-effectiveness. From Ben Todd's related in-depth analysis, which I should note does not look into biosecurity interventions:

Overall, I roughly estimate that the most effective measurable interventions in an area are usually around 3–10 times more cost effective than the mean of measurable interventions (where the mean is the expected effectiveness you’d get from picking randomly). If you also include interventions whose effectiveness can’t be measured in advance, then I’d expect the spread to be larger by another factor of 2–10, though it’s hard to say how the results would generalise to areas without data.

The above suggests that, in a given area, the most effective interventions are 24.5 (= (3*10)^0.5*(2*10)^0.5) times as cost-effective as randomly selected ones. For education in low income countries, the ratio is around 20. These ratios are not super large, so there is a sense in which knowing about the cost-effectiveness of a bunch of random interventions in a given area could inform us about the cost-effectiveness of the best ones.
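
For transparency, here is the arithmetic behind the 24.5 figure, taking the geometric mean of each range quoted from Ben Todd (the choice of geometric means is my own reading, not something the quoted analysis prescribes):

```python
import math

# Ranges quoted above: best measurable interventions are 3-10x the mean,
# with a further 2-10x once unmeasurable interventions are included.
measurable_mid = math.sqrt(3 * 10)    # ~5.48
unmeasurable_mid = math.sqrt(2 * 10)  # ~4.47

print(round(measurable_mid * unmeasurable_mid, 1))  # 24.5
```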

Yet, it might be the case that the anthrax research is much worse than a random biosecurity intervention, despite the large investment. If so, the best biosecurity interventions could still easily be orders of magnitude more cost-effective.

Jeff Kaufman @ 2024-01-15T22:00 (+4)

it might be the case that the anthrax research is much worse than a random biosecurity intervention

I think many biosecurity interventions have historically made us less safe, likely including the anthrax research, and probably also including the median intervention. So an analysis that works by scaling the cost effectiveness of a random intervention doesn't look so good!

Vasco Grilo @ 2024-01-11T07:11 (+2)

Amazing thread, Vasco!

Thanks, Peeter!

Your post hits like a fresh blast of reason in the middle of a doomsday, conspiracy-like fringe. I think you are on the right track by addressing 'the risk hype.'

Stay tuned for more! Just one note. While I do think people in the effective altruism community have often been exaggerating the risks, I would say using terms like "conspiracy", "fringe" and "hype" tends to increase the chance of adversarial rather than constructive discussions, and therefore can be counterproductive. Yet, I appreciate you sharing your honest feelings.

To be honest, the “Biosecurity & Pandemics” topic enticed me to join the EA Forum, and I have been having a hard time understanding how this fits with EA.

I am glad you joined. I think people with expertise in bio who have not been exposed to effective altruism from an early age may have different takes which are worth listening to. You can check 80,000 Hours' profile on preventing catastrophic pandemics for an overview of why it is a top cause area in EA. If you see yourself disagreeing with many points, and would like a side project, you can then consider sharing your thoughts in a post.

There are only a few things more wasteful and frankly counterproductive to spend money on than mitigating obscure pandemic/bioweapon threats.

This may not apply in all cases. Charity Entrepreneurship has estimated that advocating for academic guidelines to limit dual-use research of concern (DURC) can save a life for just $30 (related post), which is around 170 (= (5*10^3)/30) times as cost-effective as GiveWell's top charities (often considered the best interventions in global health and development).

For example, the US has spent something like $40-50 billion dollars since 2001 on anthrax research alone – a disease that only has a few cases in the US and a few thousand globally per year.

That does look like a bad investment. Considering the value of a statistical life (VSL) used by the Federal Emergency Management Agency (FEMA) of 7.5 M$, an investment of 45 G$ (= (40 + 50)/2*10^9) would have to save 6.00 k (= 45*10^9/(7.5*10^6)) lives in the US in expectation to break even, i.e. 261 per year (= 6.00*10^3/23) over the 23 years from 2001 to 2023. This looks like a high bar considering that anthrax is not contagious, which limits the probability of having lots of deaths.
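
A minimal sketch of the break-even arithmetic above, using only the figures already quoted (the FEMA VSL and the $40-50 billion spending estimate):

```python
spending_usd = 45e9      # midpoint of the $40-50 billion figure quoted above
vsl_usd = 7.5e6          # FEMA value of a statistical life
years = 23               # 2001 to 2023, as in the comment

lives_to_break_even = spending_usd / vsl_usd   # 6,000 lives
per_year = lives_to_break_even / years         # ~261 lives per year
print(f"{lives_to_break_even:,.0f} lives in total, ~{per_year:.0f} per year")
```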

That being said, it is worth noting the deaths from pandemics are very heavy-tailed in general, so the actual cost-effectiveness is not a good proxy for the expected cost-effectiveness, which is what one should care about. I can imagine investments in mRNA vaccines also saved few lives before Covid-19, but their expected cost-effectiveness was driven by relatively rare events, which in this case did happen.

the incident with the Ames strain from USAMRIID in the 2001 anthrax letters

For reference, Peeter is referring to the 2001 anthrax attacks. People may also want to check Wikipedia's list of bioterrorist attacks. As a side note, I think your comment would benefit from having a few links, but I appreciate this takes time!

And thirdly, I would address this notion – probably doing some heavy lifting to prop up the chances of existential catastrophe in some eyes – that any day now, some nut will self-educate on YouTube or some skilled professional with lab access will flip and construct a DIY bioweapon capable of posing a critical threat to society. I will give it some rope in terms of somebody starting that “secret project” not being too far-fetched. Can happen, people can be very weird! However, I can see difficulties even if the person gains access to free and unlimited NA printing resources. There is a reason why the Soviet Union had tons of anthrax and smallpox – you are going to need a large-scale, sophisticated delivery system for the initial release. Otherwise, the list of victims will include only the bioterrorist or close people, and it will never be more than a regional incident.

Relatedly, I enjoyed listening to Sonia Ben Ouagrham-Gormley on Barriers to Bioweapons, which I plan to linkpost on the EA Forum in the coming weeks.

SummaryBot @ 2024-01-02T15:58 (+5)

Executive summary: The author argues that Toby Ord fails to provide sufficient evidence to support his high estimates of 1/10,000 and 1/30 chance of existential catastrophe from natural or engineered pandemics, respectively, by 2100.

Key points:

  1. Ord provides little evidence that natural pandemics pose a 1/10,000 existential risk, especially as historical pandemics showed civilization's resilience. Fossil records also show disease rarely causes mammalian extinctions.
  2. Ord argues gain-of-function research could enable engineered pathogens threatening humanity, but gives no detailed case for how this leads to a 1/30 existential risk.
  3. Ord claims states may develop catastrophic pathogens as deterrents, but this claim needs much more substantiation given the challenges of engineering such pathogens.
  4. Ord suggests democratized biotech could empower malign actors, but most experts are skeptical biotech enables engineering radically new or catastrophic pathogens. Ord provides no counterarguments.
  5. Governance issues may exacerbate risks, but don't directly support Ord's specific risk estimates.
  6. Ord's extraordinary claims require extraordinary evidence which is lacking. More is needed to justify his high risk estimates over expert skepticism.

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

calebp @ 2023-12-31T09:54 (+5)

I was a bit disappointed by this post. I think I am sympathetic to the overall take and I’m a bit frustrated that many EAs are promoting or directly working on biorisk without imo compelling reports to suggest a relatively high chance of x-risk.

That said, this post seems to basically make the same error: it says that Ord's estimates are extremely high but doesn't really justify that claim or suggest a different estimate. It would be much more reasonable imo to say "Ord's estimate is much higher than my own prior, and I didn't see enough evidence to justify such a large update".

JoshuaBlake @ 2024-01-01T15:52 (+8)

It would be much more reasonable imo to say “Ord’s estimate is much higher than my own prior, and I didn’t see enough evidence to justify such a large update”.

Except for the use of Bayesian language, how is that different from the following passage?

We saw in Parts 9-11 of this series that most experts are deeply skeptical of Ord’s claim, and that there are at least a dozen reasons to be wary. This means that we should demand especially detailed and strong arguments from Ord to overcome the case for skepticism.

calebp @ 2024-01-01T17:11 (+9)

Thanks for pointing that out. I re-read the post and now think that the OP was more reasonable. I'm sorry I missed that in the first place. I also didn't convey the more important message of "thank you for critiquing large, thorny, and important conclusions". Thinking about P(bio x-risk) is really quite hard relative to lots of research reports posted on the forum, and this kind of work seems important.

I don't care about the use of Bayesian language (or at least I think that bit you quoted does all the Bayesian language stuff I care about).

Maybe I should read the post again more carefully, but the thing I was trying to communicate was that I don't understand why he thinks that Ord's estimates are unreasonable, and I don't think he provided much evidence that Ord had not already accounted for in his estimate. It may have just been because I was jumping in halfway through a sequence - or because I didn't fully understand the post.

The thing I would have liked to see was something like:

  1. Here is my (somewhat) uninformed prior of P(bio x-risk) and why I think it's reasonable
  2. Here are a bunch of arguments that should cause updates from my prior
  3. Here is my actual P(bio x-risk)
  4. This seems much lower than Ord's

or

  1. Here is how Ord did his estimate
  2. Here are the specific methodological issues, or ways he interpreted the evidence incorrectly
  3. Here is my new estimate after updating on the evidence correctly

or

  1. Here is how Ord did his estimate
  2. I don't think that Ord took into account evidence a,b and c
  3. Here is how I would update on a, b and c
  4. Here is my final estimate (see that it is much lower than Ord's)

On reflection, I think this is an unreasonable bar or ask, and in any case, I expect to be more satisfied by David's sequence on his site.

David Thorstad @ 2023-12-31T20:18 (+6)

Thanks Caleb! I give reasons for skepticism about high levels of existential biorisk in Parts 9-11 of this series.

Larks @ 2024-01-01T17:19 (+4)

In the 20 years of the Soviet programme, with all the caveats that we don’t fully know what the programme was, but from the best reading of what we know from the civil side of that programme, they really didn’t get that far in creating agents that actually meet all of those criteria [necessary for usefulness in biological warfare]. They got somewhere, but they didn’t get to the stage where they had a weapon that changed their overall battlefield capabilities; that would change the outcome of a war, or even a battle, over the existing weapon systems available to them.

Presumably the most dangerous aspects would also have been kept the most secret, so I'm not really sure how much we should update from this.