Out-of-distribution Bioattacks

By Jeff Kaufman @ 2023-12-02T12:20 (+70)

Vasco Grilo @ 2023-12-02T13:52 (+13)

Thanks for writing this, and mentioning my related post, Jeff!

The technological change is the continuing decrease in the knowledge, talent, motivation, and resources necessary to create a globally catastrophic pandemic.

I think this depends on how fast safety measures like the ones you mentioned are adopted, and on how the offense-defense balance evolves with technological progress. It would be great if Open Phil released the results of their efforts to quantify biorisk, one of whose aims was:

  • Enumerating possible ‘phase transitions’ that would cause a radical departure from relevant historical base rates, e.g. total collapse of the taboo on biological weapons, such that they become a normal part of military doctrine.

Update on December 3: there are plans to publish the results:

I worked on a project for Open Phil quantifying the likely number of terrorist groups pursuing bioweapons over the next 30 years, but didn't look specifically at attack magnitudes (I appreciate the push to get a public-facing version of the report published - I'm on it!).

titotal @ 2023-12-02T17:05 (+5)

One interesting implication of this theory is that the spread of strict utilitarian philosophies would be a contributing factor to existential risk. The more people are willing to bite utilitarian bullets, the more likely it is that one will bite the "kill everyone" bullet. 

This would make the EA movement potentially existentially dangerous. Even if we don't agree with the human extinction radicals, people might split off from the movement and end up supporting it. One interpretation of the FTX affair was that it was a case of seemingly EA-aligned people splitting off to do unethical things justified by utilitarian math.

ParetoPrinciple @ 2023-12-02T19:59 (+12)

One interesting implication of this theory is that the spread of strict utilitarian philosophies would be a contributing factor to existential risk. The more people are willing to bite utilitarian bullets, the more likely it is that one will bite the "kill everyone" bullet.

Can you go into more detail about this? Utilitarians and other people with logically/intellectually precise worldviews seem to be pretty consistently against human extinction; whereas average people with foggy worldviews tend to randomly flip in various directions depending on what hot takes they've recently read.

Even if we don't agree with the human extinction radicals, people might split off from the movement and end up supporting it.

Most human extinction radicals seem to emerge completely separate from the EA movement and never intersect with it, e.g. AI scientists who believe in human extinction. If people like Tomasik or hÉigeartaigh ever end up pro-extinction, it's probably because they recently did a calculation that flipped them to prioritize s-risk over x-risk, but sign uncertainty and error bars remain more than sufficiently wide to keep them in their network with their EV-focused friends (at minimum, due to the obvious possibility of doing another calculation that flips them right back).

One interpretation of the FTX affair was that it was a case of seemingly EA-aligned people splitting off to do unethical things justified by utilitarian math.

Wasn't the default explanation that SBF/FTX had a purity spiral with no checks and balances, and that, combined with the high uncertainty of crypto trading, SBF became psychologically predisposed to betting all of EA on his career instead of betting his career on all of EA? Powerful people tend to become power seeking, and that's a pretty solid prior in most cases.

titotal @ 2023-12-03T01:29 (+4)

Can you go into more detail about this? Utilitarians and other people with logically/intellectually precise worldviews seem to be pretty consistently against human extinction; whereas average people with foggy worldviews tend to randomly flip in various directions depending on what hot takes they've recently read.

Foggy worldviews tend to flip people around based on raw emotions, tribalism, nationalism, etc. None of these are likely to get you to the position "I should implement a long term Machiavellian scheme to kill every human being on the planet". The obvious point being that "every human on the planet" includes one's family, friends, and country, so almost anyone operating on emotions will not pursue such a goal.

On the other hand, utilitarian math can get to "kill all humans" in several ways, just by messing around with different assumptions and factual beliefs. Of course, I don't agree with those calculations, but someone else might. If we convince everyone on earth that the correct thing to do is "follow the math", or "shut up and calculate", then some subset of them will have the wrong assumptions, or incorrect beliefs, or just mess up the math, and conclude that they have a moral obligation to kill everyone. 

trevor1 @ 2023-12-02T16:58 (+5)

Upvoted. I'm really glad that people like you are thinking about this.

Something that people often miss with bioattacks is the economic dimension. After the 2008 financial crisis, economic failure/collapse became perhaps the #1 goalpost of the US-China conflict.

It's even debatable whether the 2008 financial crisis was the cause of the entire US-China conflict (e.g. lots of people in DC and Beijing would put the odds at >60% that >50% of the current US-China conflict was caused by the 2008 recession alone, in contrast to other variables like the emergence of unpredictable changes in cybersecurity).

Unlike conventional war e.g. over Taiwan and cyberattacks, economic downturns have massive and clear effects on the balance of power between the US and China, with very little risk of a pyrrhic victory (I don't currently know how this compares to things like cognitive warfare which also yield high-stakes victories and defeats that are hard to distinguish from natural causes).

Notably, the imperative to cause massive economic damage, rather than destroy the country itself, allows attackers to ratchet down the lethality as far as they want, so long as it's enough to cause lockdowns which cause economic damage (maybe mass IQ reduction or other brain effects could achieve this instead). 

GOF research is filled with people who spent >5 years deeply immersed in a medical perspective, e.g. virology, so it seems fairly likely to me that GOF researchers will think about the wider variety of capabilities of bioattacks, rather than inflexibly sticking to the bodycount-maximizing mindset of the Cold War.

I think that due to disorganization and compartmentalization within intelligence agencies, as well as unclear patterns of emergence and decay of competent groups of people, it's actually more likely that easier-access biological attacks would first be caused by radicals with privileged access within state agencies or state-adjacent organizations (like Booz Allen Hamilton, or the Internet Research Agency, which was accused of interfering with the 2016 election on behalf of the Russian government).

These radicals might incorrectly (or even correctly) predict that their country is a sinking ship and that the only way out is to personally change the balance of power; theoretically, they could even correctly predict that they are the only ones left competent enough to do this before it's too late.