How much is reducing catastrophic and extinction risk worth, assuming XPT forecasts?

By rosehadshar @ 2023-07-24T15:16 (+51)

This is a post I drafted some months ago, in the course of analysing some XPT data and reading Shulman and Thornley. It’s not very sophisticated, I haven’t checked the workings, and I haven’t polished the language; but I’m posting anyway because that seems better than not posting. Note that it’s a personal take and doesn’t represent FRI’s views.

Thanks to Josh Rosenberg at FRI and Elliot Thornley for help and comments.

 

BLUF: if you make a bunch of assumptions, then even quite low absolute risk forecasts like the XPT ones imply that quite high spending on reducing global catastrophic risks (GCRs) would be worthwhile, conditional on there being sufficiently cost-effective ways to reduce them.

In 2022, the team that later became the Forecasting Research Institute (FRI) ran the Existential Risk Persuasion Tournament (XPT). Over 200 forecasters, including superforecasters and domain experts, spent four months making forecasts on various questions related to existential and catastrophic risk.

You can see the results from the tournament overall here, and a discussion of the XPT AI risk forecasts in particular here.

These are the main XPT forecasts on catastrophic and extinction risk:

 

|  | 2030 | 2050 | 2100 |
| --- | --- | --- | --- |
| **Catastrophic risk (>10% of humans die in 5 years)** |  |  |  |
| Biological[1] | - | - | 1.8% |
| Engineered pathogens[2] | - | - | 0.8% |
| Natural pathogens[3] | - | - | 1% |
| AI (superforecasters) | 0.01% | 0.73% | 2.13% |
| AI (domain experts) | 0.35% | 5% | 12% |
| Nuclear | 0.50% | 1.83% | 4% |
| Non-anthropogenic | 0.0026% | 0.015% | 0.05% |
| Total catastrophic risk[4] | 0.85% | 3.85% | 9.05% |
| **Extinction risk (human population <5000)** |  |  |  |
| Biological[5] | - | - | 0.012% |
| Engineered pathogens[6] | - | - | 0.01% |
| Natural pathogens[7] | - | - | 0.0018% |
| AI (superforecasters) | 0.0001% | 0.03% | 0.38% |
| AI (domain experts) | 0.02% | 1.1% | 3% |
| Nuclear | 0.001% | 0.01% | 0.074% |
| Non-anthropogenic | 0.0004% | 0.0014% | 0.0043% |
| Total extinction risk[8] | 0.01% | 0.3% | 1% |

If we take these numbers at face value, how much is catastrophic and extinction risk reduction worth?

One approach is to take the XPT forecasts, convert them into deaths in expectation, then assume a value of a statistical life and a discount rate, and estimate how much averting those deaths is ‘worth’. (I’m stealing this method directly from Shulman and Thornley.)
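To make the arithmetic concrete, here's a minimal sketch in Python. The specific population figures, and the assumption that a 'catastrophe' kills exactly the 10% threshold of the population, are my reconstruction of the workings linked below, not something the post or the XPT data pins down:

```python
# Minimal sketch of the deaths-in-expectation calculation.
# Assumed population projections (roughly OWID/UN medium-variant figures):
POPULATION = {2030: 8.55e9, 2050: 9.7e9, 2100: 10.35e9}

def deaths_in_expectation(risk, year, fraction_dying):
    """Expected deaths given a forecast probability `risk` of an event by `year`.

    For catastrophic risk (>10% of humans die), I conservatively assume exactly
    10% of the population dies; for extinction risk, effectively everyone does.
    """
    return risk * fraction_dying * POPULATION[year]

# Example: superforecaster nuclear catastrophic risk by 2030 (0.50%)
print(deaths_in_expectation(0.005, 2030, fraction_dying=0.10) / 1e6)  # ~4.3 (millions)
```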

Using the XPT superforecasts and OWID population projections gives us the following deaths in expectation, in millions:

 

**Deaths in expectation (millions)**

|  | 2030 | 2050 | 2100 |
| --- | --- | --- | --- |
| **Catastrophic risk (>10% of humans die in 5 years)** |  |  |  |
| Bio | -- | -- | 18.6 |
| AI | 0.09 | 7.1 | 22.0 |
| Nuclear | 4.3 | 17.8 | 41.4 |
| Non-anthropogenic | 0.02 | 0.15 | 0.5 |
| Total | 7.3 | 37.4 | 93.7 |
| **Extinction risk (human population <5000)** |  |  |  |
| Bio | -- | -- | 1.2 |
| AI | 0.01 | 2.9 | 39.3 |
| Nuclear | 0.09 | 1.0 | 7.7 |
| Non-anthropogenic | 0.03 | 0.1 | 0.4 |
| Total | 0.9 | 29.1 | 103.5 |

Some notes:

- That's deaths in expectation worldwide. But the value of a statistical life varies by country: governments have different resources, and the cost of interventions varies from place to place.

So the most straightforward way to think about the worth of catastrophic and extinction risk reduction is to ask how much this would be worth in a given country. Let’s take the US as an example.

First we need US deaths in expectation:

 

**US deaths in expectation (millions)**

|  | 2030 | 2050 | 2100 |
| --- | --- | --- | --- |
| **Catastrophic risk (>10% of humans die in 5 years)** |  |  |  |
| Bio | -- | -- | 0.7 |
| AI | 0.004 | 0.3 | 0.8 |
| Nuclear | 0.2 | 0.7 | 1.6 |
| Non-anthropogenic | 0.001 | 0.006 | 0.02 |
| Total | 0.3 | 1.4 | 3.6 |
| **Extinction risk (human population <5000)** |  |  |  |
| Bio | -- | -- | 0.05 |
| AI | 0.0004 | 0.1 | 1.5 |
| Nuclear | 0.004 | 0.04 | 0.3 |
| Non-anthropogenic | 0.001 | 0.01 | 0.02 |
| Total | 0.04 | 1.1 | 3.9 |

Workings here.
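As far as I can tell, the figures above just scale the worldwide deaths in expectation by the US share of projected world population. A sketch, reusing `POPULATION` from the earlier snippet; the US projections here are my assumptions for illustration:

```python
# Assumed US population projections (roughly Census/UN medium-variant figures):
US_POPULATION = {2030: 0.35e9, 2050: 0.375e9, 2100: 0.394e9}

def us_deaths(world_deaths, year):
    """Scale worldwide deaths in expectation by the US share of world population."""
    return world_deaths * US_POPULATION[year] / POPULATION[year]

# Example: total catastrophic deaths in expectation by 2100 (93.7m worldwide)
print(us_deaths(93.7e6, 2100) / 1e6)  # ~3.6 (millions), matching the table
```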

We can then assume a US value for a statistical life, and a discount rate, and use these to estimate how much averting the deaths in expectation is ‘worth’ to the US government.

Assuming $7m as the value of a statistical life, and a 3% annual discount rate, the value to the US government of reducing total initial risks by 1% (not one percentage point)[13] would be as follows:

 

**Value to the US of a 1% reduction in risk, assuming VSL at $7m and discount rate at 3% (billions of dollars)**

|  | 2030 | 2050 | 2100 |
| --- | --- | --- | --- |
| **Catastrophic risk (>10% of humans die in 5 years)** |  |  |  |
| Bio | -- | -- | $5.0 |
| AI | $0.2 | $8.4 | $5.9 |
| Nuclear | $9.7 | $21.0 | $11.0 |
| Non-anthropogenic | $0.05 | $0.2 | $0.1 |
| Total | $16.5 | $44.2 | $24.9 |
| **Extinction risk (human population <5000)** |  |  |  |
| Bio | -- | -- | $0.3 |
| AI | $0.02 | $3.4 | $10.5 |
| Nuclear | $0.2 | $1.1 | $2.0 |
| Non-anthropogenic | $0.1 | $0.2 | $0.12 |
| Total | $1.9 | $34.5 | $27.5 |

Workings here.
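If I've reconstructed the workings correctly, each cell multiplies the deaths in expectation by the VSL, takes 1% of that, and discounts back to 2022 (the tournament year; taking that as the base year is my assumption). A sketch:

```python
VSL = 7e6         # assumed US value of a statistical life, in dollars
DISCOUNT = 0.03   # assumed annual discount rate
BASE_YEAR = 2022  # assumed base year for discounting (when the XPT ran)

def value_of_reduction(deaths, year, reduction=0.01):
    """Value of reducing the initial risk by `reduction` (1% of the risk, not 1pp)."""
    return reduction * deaths * VSL / (1 + DISCOUNT) ** (year - BASE_YEAR)

# Example: US nuclear deaths in expectation by 2100 (~1.58m)
print(value_of_reduction(1.58e6, 2100) / 1e9)  # ~$11 (billions), matching the table
```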

There are a few reasons to expect these numbers to underestimate the value of catastrophic and extinction risk reduction:

What if I only care about the next few decades?

Suppose I don’t take extinction risk seriously, but I am interested in the XPT forecasts on catastrophic risks. That said, I think that 2030 is so soon that catastrophe seems extremely unlikely, and I don’t care much about things as far out as 2100, partly because I’m sceptical that we can influence things on that timescale, and partly because I only care about current lives. I want to know how much reducing catastrophic risk by 1% by 2050 would be worth, assuming $7m VSL and a 3% discount rate.

That would give me something like this:

| Catastrophic risk | Total value of 1% risk reduction by 2050 (millions* of dollars) | Annual value (millions* of dollars)** |
| --- | --- | --- |
| AI | $8,400 | $299.4 |
| Nuclear | $21,000 | $750.6 |
| Non-anthropogenic | $170 | $6.2 |
| Total | $44,000 | $1,579.2 |

* Note that this table displays millions of dollars, and the previous tables displayed billions.

**This is just a naive division of the total by 28 (the XPT tournament took place in 2022). Workings here.
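A sketch of that annualisation, reusing the 2050 totals above (the table divides unrounded totals, so its figures differ slightly from these):

```python
YEARS = 2050 - 2022  # 28 years from the tournament to the target date

for risk, total_millions in {"AI": 8_400, "Nuclear": 21_000, "Non-anthropogenic": 170}.items():
    print(risk, round(total_millions / YEARS, 1))  # ~300.0, 750.0, 6.1
```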

How does this compare to current annual spending on these risks? There isn’t good data here, but to give some ballpark ideas:

Note that it’s unclear what level of risk reduction those figures correspond to, so it’s not clear what the direct comparison should be between current total spending and the value of a 1% risk reduction.

What about the value of catastrophic and extinction risk reduction worldwide?

Most of the people potentially affected by catastrophic and extinction risks aren’t US citizens. Can we say anything about how much catastrophic and extinction risk reduction is worth globally, using the VSL method?

Not very accurately, but it might be interesting to have a go anyway.

The problems with extrapolating this method worldwide are:

That said:

 

**Worldwide value of a 1% reduction in risk, assuming VSL at $7m and discount rate at 3% (billions of dollars)**

|  | 2030 | 2050 | 2100 |
| --- | --- | --- | --- |
| **Catastrophic risk (>10% of humans die in 5 years)** |  |  |  |
| Bio | -- | -- | $130.0 |
| AI | $4.7 | $216.9 | $153.9 |
| Nuclear | $236.2 | $543.7 | $288.9 |
| Non-anthropogenic | $1.2 | $4.5 | $3.6 |
| Total | $401.6 | $1,143.8 | $653.7 |
| **Extinction risk (human population <5000)** |  |  |  |
| Bio | -- | -- | $8.7 |
| AI | $0.5 | $89.1 | $274.5 |
| Nuclear | $4.7 | $29.7 | $53.5 |
| Non-anthropogenic | $1.9 | $4.2 | $3.1 |
| Total | $47.2 | $891.2 | $722.3 |

Workings here.
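These worldwide figures appear to follow the same formula as the US table, only with worldwide deaths in expectation and, importantly, the US VSL of $7m applied to everyone. Reusing `value_of_reduction` from the sketch above:

```python
# Example: worldwide nuclear catastrophic deaths in expectation by 2030 (~4.3m)
print(value_of_reduction(4.275e6, 2030) / 1e9)  # ~$236 (billions), matching the table
```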

Summing up

1. This row is the sum of the two following rows (catastrophic risk from engineered and from natural pathogens respectively). We did not directly ask for catastrophic biorisk forecasts.

2. Because of concerns among our funders about information hazards, we did not include this question in the main tournament, but we did ask about risks from engineered and natural pathogens in a separate one-shot postmortem survey, to which most XPT participants responded after the tournament.

3. Because of concerns among our funders about information hazards, we did not include this question in the main tournament, but we did ask about risks from engineered and natural pathogens in a separate one-shot postmortem survey, to which most XPT participants responded after the tournament.

4. This question was asked independently, rather than inferred from questions about individual risks.

5. This row is the sum of the two following rows (extinction risk from engineered and from natural pathogens respectively). We did not directly ask for biorisk extinction forecasts.

6. Because of concerns among our funders about information hazards, we did not include this question in the main tournament, but we did ask about risks from engineered and natural pathogens in a separate one-shot postmortem survey, to which most XPT participants responded after the tournament.

7. Because of concerns among our funders about information hazards, we did not include this question in the main tournament, but we did ask about risks from engineered and natural pathogens in a separate one-shot postmortem survey, to which most XPT participants responded after the tournament.

8. This question was asked independently, rather than inferred from questions about individual risks.

9. Shulman and Thornley, p. 12; from U.S. Department of Transportation (2021a, 2021b).

10. Shulman and Thornley, p. 12; from Graham (2008: 504).

11. See here and p. 504 here.

12. You can see what the 7% discount rate figures look like in the workings spreadsheet for this post.

13. I mean a 1% reduction of the total initial risk, rather than a reduction of the total risk by 1 percentage point.


Mo Putera @ 2023-07-31T04:24 (+7)

I think I buy that interventions which reduce either catastrophic or extinction risk by 1% for < $1 trillion exist. I'm less sure as to whether many of these interventions clear the 1,000x bar though, which (naively replacing US VSL = $7 mil with AMF's ~$5k) seems to imply 1% reduction for < $1 billion. (I recall Linch's comment being bullish and comfortable on interventions reducing x-risk ~0.01% at ~$100 mil, which could either be interpreted as ~100x i.e. in the ballpark of GiveDirectly's cash transfers, or as aggregating over a longer timescale than by 2050; the latter is probably the case. The other comments to that post offer a pretty wide range of values.) 

That said, I've never actually seen a BOTEC justifying an actual x-risk grant (vs e.g. Open Phil's sample BOTECs for various grants with confidential details redacted), so my remarks above seem mostly immaterial to how x-risk cost-effectiveness estimates inform grant allocations in practice. I'd love to see some real examples.