Famine deaths due to the climatic effects of nuclear war

By Vasco Grilo🔸 @ 2023-10-14T12:05 (+40)

The views expressed here are my own, not those of Alliance to Feed the Earth in Disasters (ALLFED), for which I work as a contractor. Please assume this is always the case unless stated otherwise.

Summary

Introduction

I have been assuming the importance of the climatic effects of nuclear war is roughly in agreement with Denkenberger 2018 and Luisa’s post, but I had not looked much into the relevant literature myself. I got interested in doing so following some of the discussion in my global warming post, and Bean’s and Mike’s analyses.

The initial motivation for my analysis was combining the results of 2 views about nuclear winter:

Denkenberger 2018 did not integrate the results of Reisner 2018, which was published afterwards[4]. Luisa says:

As a final point, I’d like to emphasize that the nuclear winter is quite controversial (for example, see: Singer, 1985; Seitz, 2011; Robock, 2011; Coupe et al., 2019; Reisner et al., 2019; Pausata et al., 2016; Reisner et al., 2018; Also see the summary of the nuclear winter controversy in Wikipedia’s article on nuclear winter). Critics argue that the parameters fed into the climate models (like, how much smoke would be generated by a given exchange) as well as the assumptions in the climate models themselves (for example, the way clouds would behave) are suspect, and may have been biased by the researchers’ political motivations (for example, see: Singer, 1985; Seitz, 2011; Reisner et al., 2019; Pausata et al., 2016; Reisner et al., 2018). I take these criticisms very seriously — and believe we should probably be skeptical of this body of research as a result. For the purposes of this estimation, I assume that the nuclear winter research comes to the right conclusion. However, if we discounted the expected harm caused by US-Russia nuclear war for the fact that the nuclear winter hypothesis is somewhat suspect, the expected harm could shrink substantially.

I also felt like Bean’s analysis underweighted Rutgers’ view, and Michael Hinge’s underweighted Los Alamos’ (see my comments).

My goal is estimating the famine deaths due to the climatic effects of nuclear war, not all famine deaths, nor heat mortality (related to hot or cold exposure). I also:

Famine deaths due to the climatic effects

Overview

I arrived at 12.9 M (= 0.0330*392*10^6) famine deaths due to the climatic effects of nuclear war before 2050, multiplying:

Unlike Denkenberger 2018 and Luisa, I did not run a Monte Carlo simulation modelling all non-probabilistic variables as distributions, but I do not think that would meaningfully move my estimate of the expected deaths:

Figure 2. SORT scenarios. (a) Casualties (fatalities plus injuries) and fatalities only and (b) soot generation as a function of the number of 100-kt explosions in China, Russia, and the US. Regions are targeted in decreasing order of population density. In the US, for example, the density would fall below 550 people/km2 after the 1000th target.

Probability of large nuclear war

I put the probability of large nuclear war before 2050 at 3.30 % (= 0.32*0.103), which is the product between:

I motivate these values below.
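As a quick sanity check, the headline product can be reproduced in a few lines of Python (the 392 M famine deaths given a large nuclear war are derived later in this post):

```python
# Expected famine deaths due to the climatic effects of nuclear war before
# 2050, chaining the factors estimated in this post.
p_detonation = 0.32    # P(at least 1 offensive nuclear detonation before 2050)
p_escalation = 0.103   # P(escalation into large nuclear war | 1 detonation)
p_large_war = p_detonation * p_escalation

deaths_given_large_war = 392e6   # famine deaths given a large nuclear war
expected_deaths = p_large_war * deaths_given_large_war

print(round(p_large_war, 4))             # ≈ 0.033
print(round(expected_deaths / 1e6, 1))   # ≈ 12.9 (M deaths)
```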

Probability of at least one offensive nuclear detonation

I placed the probability of at least one offensive nuclear detonation before 2050 at 32 %, in agreement with Metaculus’ community prediction on 31 August 2023[7]. This is reasonable based on:

Probability of escalation into large nuclear war

I presupposed a beta distribution for the fraction of nuclear warheads being detonated before 2050 given at least one offensive nuclear detonation before then. I defined it from 61st and 89th percentiles equal to 1.06 % (= 100/(9.43*10^3)) and 10.6 % (= 1*10^3/(9.43*10^3)), given:

The alpha and beta parameters of the beta distribution are 0.189 and 5.03, and its cumulative distribution function (CDF) is below. The horizontal axis is the fraction of nuclear warheads being detonated, and the vertical one the probability of less than a certain fraction being detonated. The probability of escalation into a large nuclear war, which I defined as at least 1.07 k offensive nuclear detonations, corresponding to 11.3 % (= 1.07*10^3/(9.43*10^3)) of nuclear warheads being detonated, is 10.3 %[12].

CDF of the fraction of nuclear warheads being detonated.
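The fitted distribution can be checked with the Python standard library alone. This is a sketch: the substitution u = x^α removes the integrable singularity at 0 (since α < 1 here), after which Simpson’s rule handles the transformed integrand:

```python
import math

def beta_cdf(x, a, b, n=20000):
    """CDF of Beta(a, b) at x, via the substitution u = t**a, which turns
    the integral of t**(a-1) * (1-t)**(b-1) into a smooth one."""
    upper = x ** a
    f = lambda u: (1.0 - u ** (1.0 / a)) ** (b - 1.0)
    h = upper / n
    s = f(0.0) + f(upper)  # Simpson's rule on the transformed integrand
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    incomplete = s * h / 3.0 / a
    complete = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return incomplete / complete

a, b = 0.189, 5.03
print(beta_cdf(100 / 9.43e3, a, b))       # close to the fitted 61st percentile
print(beta_cdf(1e3 / 9.43e3, a, b))       # close to the fitted 89th percentile
print(1 - beta_cdf(1.07e3 / 9.43e3, a, b))  # close to the 10.3 % escalation probability
```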

Soot injected into the stratosphere

I expect 22.1 Tg (= 2.09*10^3*0.215*0.0491) of soot to be injected into the stratosphere in a large nuclear war. This is the product between:

I explain the above estimates in the next sections. I neglected counterforce nuclear detonations because:

Offensive nuclear detonations

I expect 2.09 k (= 1 + 0.221*9.43*10^3) offensive nuclear detonations in a large nuclear war. This is 1 plus the product between:

The 5th and 95th percentile fractions of nuclear warheads being detonated in a large nuclear war are 11.8 % and 43.6 %, which correspond to 1.11 k (= 1 + 0.118*9.43*10^3) and 4.11 k (= 1 + 0.436*9.43*10^3) offensive nuclear detonations.

I compared the offensive nuclear detonations, given at least one before 2050, implied by my beta distribution with those of a Metaculus’ question whose predictions I ended up not using. The 5th, 50th and 95th percentiles of the beta distribution are 1.84*10^-6 %, 0.362 % and 19.2 %[15], and the respective detonations given at least one are:

The mean of my beta distribution is 3.62 % (= 0.189/(0.189 + 5.03)), and therefore I expect 342 (= 1 + 0.0362*9.43*10^3) offensive nuclear detonations given one offensive nuclear detonation before 2050, which is 9.74 (= 342/35.1) times my median detonations. Additionally, my 95th percentile is 1.81 k (= 1.81*10^3/1.00) times my 5th percentile. Such high ratios illustrate that nuclear war is predicted to be heavy-tailed, as has been the case for non-nuclear wars.
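The mean and the implied detonation counts can be verified directly (a sketch; the 9.43 k warheads and 0.362 % median are the values above):

```python
# Mean of the fitted Beta(0.189, 5.03) and the implied expected number of
# offensive detonations given at least one before 2050.
alpha, beta_p = 0.189, 5.03
warheads = 9.43e3

mean_fraction = alpha / (alpha + beta_p)
mean_detonations = 1 + mean_fraction * warheads
median_detonations = 1 + 0.00362 * warheads   # from the 0.362 % median

print(round(100 * mean_fraction, 2))                    # ≈ 3.62 (%)
print(round(mean_detonations))                          # ≈ 342
print(round(mean_detonations / median_detonations, 1))  # ≈ 9.7
```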

From the above bullets, the predictions for the number of detonations I arrived at fitting a beta distribution to the forecasts for 2 Metaculus’ questions about the probability of escalation to large nuclear wars (100 and 1 k detonations) are not quite in line with the forecasts for another Metaculus’ question explicitly about the number of detonations. The large difference for the 95th percentile is relevant because the right tail has a significant influence on the expected detonations, as can be seen from the high ratio between my mean and median detonations. I decided to rely on the 2 Metaculus’ questions about escalation because:

Countervalue nuclear detonations

I assumed 21.5 % of the offensive nuclear detonations to follow countervalue targeting. This was Metaculus’ median community prediction on 30 September 2023 for the fraction of offensive nuclear detonations before 2050 which will be countervalue.

I assumed the total burned area to be 100 % of the burned area calculated as if different detonations did not compete for fuel, i.e. I assumed the overlap between burned areas is negligible. David Denkenberger commented that some additional area would be burned thanks to the combined effects of multiple detonations. I tend to agree, but:

Yield

I considered a yield per countervalue nuclear detonation of 189 kt (= (600*335 + 200*300 + 1511*90 + 25*8 + 384*455 + 500*(5*150)^0.5 + 288*400 + 200*(0.3*170)^0.5)/3708). This is the mean yield of the United States nuclear warheads in 2023 (deployed or in reserve, but not retired), which I got from data in Table 1 of Kristensen 2023. For the rows for which a range was provided for the yield, I used the geometric mean between its lower and upper bound[18].

For context, my yield of 189 kt is:

For the 2.09 k offensive nuclear detonations I expect in a large nuclear war, the minimum and maximum mean yield are 66.1 kt (= (200*(0.3*170)^0.5 + 25*8 + 500*(5*150)^0.5 + 1365*90)/(2.09*10^3)) and 290 kt (= (384*455 + 288*400 + 600*335 + 200*300 + 618*90)/(2.09*10^3)).
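The mean, minimum and maximum yields can be recomputed from the warhead table (a sketch; the counts and yields are the ones in the formulas above, with geometric means for the rows whose yield is given as a range):

```python
# (count, yield in kt) from Table 1 of Kristensen 2023, as used above; ranged
# yields are replaced by the geometric mean of their bounds.
warheads = [
    (600, 335), (200, 300), (1511, 90), (25, 8), (384, 455),
    (500, (5 * 150) ** 0.5),    # 5 to 150 kt
    (288, 400),
    (200, (0.3 * 170) ** 0.5),  # 0.3 to 170 kt
]
total = sum(n for n, _ in warheads)
mean_yield = sum(n * y for n, y in warheads) / total
print(total, round(mean_yield))   # 3708 warheads, ≈ 189 kt

# Mean yield of the 2090 lowest-yield and 2090 highest-yield warheads,
# bounding the mean yield of 2.09 k detonations.
low = (200 * (0.3 * 170) ** 0.5 + 25 * 8 + 500 * (5 * 150) ** 0.5 + 1365 * 90) / 2090
high = (384 * 455 + 288 * 400 + 600 * 335 + 200 * 300 + 618 * 90) / 2090
print(round(low, 1), round(high))   # ≈ 66.1 kt and ≈ 290 kt
```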

I investigated the relationship between the burned area and yield a little, but, as I said just above, I do not think it is that important whether the area scales with yield to the power of 2/3 or 1. Feel free to skip to the next section. In short, an exponent of:

The emitted soot is proportional to the burned area. So using the mean yield as I did presupposes burned area is proportional to yield, which is what Toon 2008 assumes: “In particular, since the area within a given thermal energy flux contour varies linearly with yield for small yields, we assume linear scaling for the burned area”. I guess this is based on the following passage of this chapter of The Medical Implications of Nuclear War (the source provided in Toon 2008):

Thermal energy, unlike blast energy [which “fills the volume surrounding it”], instead radiates out into the surroundings. Thermal energy from a detonation will therefore be distributed over a hypothetical sphere that surrounds the detonation point. If the sphere's area is larger in direct proportion to the yield of a detonation, then the amount of energy per unit area passing through its surface would be unchanged. The radius of this hypothetical sphere varies as the square root of its area. Hence, the range at which a given amount of thermal energy per unit area is deposited varies as the square root of the yield.

Presumably, Toon 2008 assumes the burned area is defined by this range, and therefore it is proportional to yield (since a circular area is proportional to the square of its radius). With respect to this, Bean said:

Nor is the assumption that burned area will scale linearly with yield a particularly good one. I couldn’t find it in the source they cite, and it flies in the face of all other scaling relationships around nuclear weapons.

[...]

per Glasstone p.108, blast radius typically scales with the 1/3rd power of yield, so we can expect damaged area from fire as well as blast to scale with the yield^2/3 [since area is proportional to the square of the radius].

According to The Medical Implications of Nuclear War (see quotation above), the blasted area is indeed proportional to yield to the power of 2/3, but the same may not apply to burned area (see quotation above starting with “Thermal energy”). In fact, the results of Nukemap seem to be compatible with the assumption that the ground area enclosed by a spherical surface of a given energy flux is proportional to yield. For 0.1, 1 and 10 times my yield of 189 kt, i.e. 18.9, 189 and 1.89 k kt, the ground areas enclosed by a spherical surface whose energy flux is 146 J/cm^2, for which “dry wood usually burns”, are:

The mean of the above 4 exponents is 1.01[21] (= (0.956 + 0.928 + 1.14 + 1.00)/4), which suggests a value of 1 is appropriate. Nevertheless, I do not know how the above areas are estimated in Nukemap.
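For reference, the exponent between two (yield, area) points is n = ln(A2/A1)/ln(Y2/Y1). The areas below are hypothetical, just to illustrate the formula; the mean at the end uses the 4 exponents inferred from Nukemap above:

```python
import math

def scaling_exponent(y1, a1, y2, a2):
    """Exponent n such that burned area scales as yield**n between two points."""
    return math.log(a2 / a1) / math.log(y2 / y1)

# Hypothetical areas (NOT Nukemap's): a 10x yield giving 10x area means n = 1.
print(round(scaling_exponent(18.9, 100.0, 189.0, 1000.0), 2))   # 1.0

# Mean of the 4 exponents inferred from Nukemap's areas above.
print(round((0.956 + 0.928 + 1.14 + 1.00) / 4, 2))              # ≈ 1.01
```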

Energy flux following an inverse-square law, as described in The Medical Implications of Nuclear War, makes sense if atmospheric losses are negligible, like with the Sun’s energy radiating outwards into space. Intuitively, I would have thought the losses were sufficiently high for the exponent to be lower than 1, and GPT-4 also guessed an exponent of 2/3 would be a better approximation. However, Nukemap’s results do support an exponent of 1.

Soot injected into the stratosphere per countervalue yield

I set the soot injected into the stratosphere per countervalue yield to 2.60*10^-4 Tg/kt (= (3.15*10^-5*0.00215)^0.5). This is the geometric mean between 3.15*10^-5 and 0.00215 Tg/kt[18], which I arrived at by adjusting results from Reisner 2018 and Reisner 2019, and Toon 2008 and Toon 2019. I describe how I did this in the next 2 sections, and discuss some considerations I did not cover in these sections in the one after them.
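Putting this together with earlier numbers, a sketch of the soot arithmetic (small differences versus the in-text figures come from rounding intermediates):

```python
# Soot injected into the stratosphere per countervalue yield: geometric mean
# of the Reisner-based and Toon-based estimates (derived in the next sections).
soot_per_kt = (3.15e-5 * 0.00215) ** 0.5         # Tg/kt

# Per countervalue detonation of 189 kt, and in total for a large nuclear war.
soot_per_detonation = 189 * soot_per_kt          # ≈ 0.0491 Tg
detonations = 1 + 0.221 * 9.43e3                 # expected offensive detonations
countervalue_fraction = 0.215
total_soot = detonations * countervalue_fraction * soot_per_detonation

print(round(soot_per_kt, 6))    # ≈ 0.00026 (Tg/kt)
print(round(total_soot))        # ≈ 22 (Tg), the post's 22.1 Tg up to rounding
```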

There are other studies which have analysed how much of the emitted soot is injected into the stratosphere, but I think only Reisner 2018, Reisner 2019 and Wagman 2020 modelled the whole causal chain. From Wagman 2020:

An analysis of whether fires ignited by a nuclear war will cause global climatic and environmental consequences must address the following:

[...]

The Reisner et al. (2018) approach deviates from previous efforts by modeling aspects of all four bullet points above

[...]

Motivated by the different conclusions that have been reached for this scenario, we make our own assessment, which also uses numerical models to address aspects of all four factors bulleted above.

I did not integrate evidence from Wagman 2020 (whose main author is affiliated with Lawrence Livermore National Laboratory) because, rather than estimating the emitted soot as Reisner 2018 and Reisner 2019 do, it sets it to the soot injected into the stratosphere in Toon 2007:

Finally, we choose to release 5 Tg (5×10^12 g) BC into the climate model per 100 fires, for consistency with the studies of Mills et al. (2008, 2014), Robock et al. (2007), Stenke et al. (2013), Toon et al. (2007), and Pausata et al. (2016). Those studies use an emission of 6.25 Tg BC and assume 20% is removed by rainout during the plume rise, resulting in 5 Tg BC remaining in the atmosphere.

I did not include direct evidence from the atomic bombings of Hiroshima and Nagasaki because I did not find empirical data about the resulting injections of soot into the stratosphere. Relatedly, Robock 2019 says:

I also excluded evidence from Tambora’s eruption. There were global impacts according to Oppenheimer 2003, but their magnitude is unclear, and I think the world has evolved too much in the last 200 years for me to extrapolate.

Reisner 2018 and Reisner 2019

I estimated a soot injected into the stratosphere per countervalue yield of 3.15*10^-5 Tg/kt (= 0.0473/(1.50*10^3)) for Reisner 2018 and Reisner 2019. I calculated it from the ratio between:

I got 0.224 Tg (= 12.3*0.855*0.0213) of emitted soot, multiplying:

I concluded 21.1 % (= 0.0621*3.39) of emitted soot is injected into the stratosphere, multiplying:

The estimate of 6.21 % of emitted soot being injected into the stratosphere in the 1st 40 min is derived from the rubble case of Reisner 2018, which did not produce a firestorm. However, in response to Robock 2019, Reisner 2019 ran:

Two simulations at higher fuel loading that are in the firestorm regime (Glasstone & Dolan, 1977): the first simulation (4X No-Rubble) uses a fuel load around the firestorm criterion (4 g/cm2) and the second simulation (Constant Fuel) is well above the limit (72 g/cm2).

These simulations led to a soot injected into the stratosphere in the 1st 40 min per emitted soot of 5.45 % (= 0.461/8.454) and 6.44 % (= 1.53/23.77), which are quite similar to the 6.21 % of Reisner 2018 I used above. Reisner 2019 also notes:

Of note is that the Constant Fuel case is clearly in the firestorm regime with strong inward and upward motions of nearly 180 m/s during the fine-fuel burning phase. This simulation included no rubble, and since no greenery (trees do not produce rubble) is present, the inclusion of a rubble zone would significantly reduce BC production and the overall atmospheric response within the circular ring of fire.

This suggests a firestorm is not a sufficient condition for a high soot injected into the stratosphere per emitted soot.
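The chain of numbers in this section can be reproduced as follows (a sketch; the individual factors are the ones quoted above, and the small difference versus 3.15*10^-5 comes from rounding intermediates):

```python
# Reisner-based soot injected into the stratosphere per countervalue yield.
emitted_soot = 12.3 * 0.855 * 0.0213   # Tg of emitted soot, factors as above
injected_fraction = 0.0621 * 3.39      # 6.21 % in the 1st 40 min, times the
                                       # 3.39 adjustment factor above
injected_soot = emitted_soot * injected_fraction   # ≈ 0.047 Tg
soot_per_kt = injected_soot / 1.50e3   # per 1.50 k kt of countervalue yield

print(round(injected_soot, 3))   # ≈ 0.047 (Tg)
print(soot_per_kt)               # ≈ 3.15*10^-5 (Tg/kt)
```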

Toon 2008 and Toon 2019

I deduced a soot injected into the stratosphere per countervalue yield of 0.00215 Tg/kt (= 945/(440*10^3)) for Toon 2008 and Toon 2019. I computed it from the ratio between:

I got 1.35 k Tg (= 180*7.52) of emitted soot, multiplying:

I concluded 70.0 % (= (1 - 0.20)*(1 - 0.125)) of emitted soot is injected into the stratosphere, in agreement with Toon 2019. This stems from:

You might have noticed that I discounted the results of Reisner 2018 to account for their overestimation of the emitted soot per burned fuel, but that I did not do that for Toon 2008. I think this is right because, right after “how much of the fuel is converted into soot”, there is a reference to Turco 1990, which estimates an emitted soot per burned fuel very similar to what I assumed in the previous section[22].
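Similarly, a sketch of the Toon-based estimate using the factors above:

```python
# Toon-based soot injected into the stratosphere per countervalue yield.
emitted_soot = 180 * 7.52                          # Tg of emitted soot
injected_fraction = (1 - 0.20) * (1 - 0.125)       # 70.0 % reaches the stratosphere
injected_soot = emitted_soot * injected_fraction   # ≈ 945 Tg
soot_per_kt = injected_soot / 440e3                # per 440 k kt of countervalue yield
print(round(soot_per_kt, 5))   # ≈ 0.00215 (Tg/kt)
```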

Toon 2019 justifies the 20 % soot removal during injection into the upper troposphere citing Toon 2007, which in turn backs it up citing Turco 1990[26], but I noted this does not justify the value that well. From the header of Table 2 of Turco 1990, “the prompt soot removal efficiency [i.e. soot removal during injection into the upper troposphere[27]] is taken to be 20% (range of 10 to 25%)”, which checks out, but it is mentioned that:

Originally, we (2) [Turco 1983] estimated that 25 to 50% of the smoke mass would be immediately scrubbed from urban fires by induced precipitation. However, based on current data, it is more reasonable to assume that, on average, <=10 to 25% of the soot emission is likely to be removed in such a manner.

Nevertheless, as far as I can tell, the “current data” is not discussed in Turco 1990. I would have expected to see a justification for the update, as the 20 % prompt soot removal assumed in Turco 1990 is lower than the lower bound of 25 % attributed to Turco 1983. In addition, I was not able to confirm the soot removal of 25 % to 50 % quoted above, searching in Turco 1983 for “%”, “25 percent”, “50 percent”, “0.25”, “0.5” and “rain”. It is possible a soot removal of 25 % to 50 % is implied by the assumptions or results of Turco 1983, although it is not explicitly mentioned, but it looks like this might not be so. Turco 1983 appears to have used a soot removal of 20 % as Turco 1990. From Table 2, “80 percent [of the soot was assumed to be injected] in the stratosphere”. I did not find an explanation of this value searching for “80 percent” and “0.8”.

Brian Toon, the 1st author of Toon 2007, Toon 2008 and Toon 2019, and 2nd of Turco 1983 and Turco 1990, clarified the 20 % prompt soot removal in Toon 2007 was calculated from (1 minus) the ratio between the concentration of smoke and carbon monoxide at the stratosphere and near natural fires. I tried to obtain the 20 % with this approach, but did not have success. I assume Brian’s clarification refers to the following passage of Toon 2007:

According to Andreae et al. (2001) in natural fires the ratio of injected smoke aerosol larger than 0.1 μm to enhanced carbon monoxide concentrations is in the range 5–20 cm^3/ppb near the fires. Jost et al. (2004) found ratios ∼7 [cm^3/ppb] in smoke plumes deep within the stratosphere over Florida that had originated a few days earlier in Canadian fires, implying that the smoke particles had not been significantly depleted during injection into the stratosphere (or subsequent transport over thousands of kilometers in the stratosphere). Such evidence is consistent with the choice of R=0.8 for smoke removal in pyroconvection.

On the one hand, I agree with the last sentence, as the quoted evidence is consistent with a smoke removal in pyroconvection between 0 (7 > 5) and 65 % (= 1 - 7/20), which encompasses 20 % (= 1 - 0.8). On the other hand, this value seems to be pessimistic. Assuming a ratio between the concentration of smoke and carbon monoxide near the fires of 12.5 cm^3/ppb[21] (= (5 + 20)/2), R = 56.0 % (= 7/12.5) of smoke would be injected into the upper troposphere, which suggests a prompt soot removal of 44.0 % (= 1 - 0.560), 2.20 (= 0.440/0.20) times as high as the value supposed in Toon 2007.
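The arithmetic behind this alternative removal estimate:

```python
# Prompt soot removal implied by the smoke/CO ratios quoted from Toon 2007.
ratio_near_fires = (5 + 20) / 2   # midpoint of the 5-20 cm^3/ppb range
ratio_stratosphere = 7            # cm^3/ppb deep in the stratosphere
injected = ratio_stratosphere / ratio_near_fires   # fraction of smoke injected
removal = 1 - injected

print(round(removal, 2))         # 0.44, vs the 0.20 assumed in Toon 2007
print(round(removal / 0.20, 1))  # 2.2 times as high
```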

I shared the above reasoning with Brian, but his best guess continues to be 20 % soot removal during the injection into the upper troposphere. So I relied on that value to estimate the soot injected into the stratosphere per countervalue yield at the start of this section.

As a side note, Turco 1983 presents an emitted soot per yield of land near-surface and surface detonations of 1.0*10^-4 and 3.3*10^-4 Tg/kt (see Table 2), which are 3.26 % (= 1.0*10^-4/0.00307) and 10.7 % (= 3.3*10^-4/0.00307) of the 0.00307 Tg/kt (= 0.00215/0.7) I inferred from Toon 2008[28]. Brian Toon clarified the lower soot emissions in Toon 2008 are explained by this study considering less fuel per area owing to more detonations with larger yields, which imply a larger burned area with lower population density. I think this makes sense.

Considerations influencing the soot injected into the stratosphere

There are a number of considerations I have not covered influencing the soot injected into the stratosphere per countervalue yield. I have little idea about their net effect, but I point out some of them below. Relatedly, feel free to check Hess 2021, and the comments on Bean’s and Mike’s post.

Overestimating soot injected into the stratosphere

Besides the pessimistic assumption regarding the soot emissions per burned area, which I corrected for, Reisner 2018 says:

For the vertical transport of the BC, very calm ambient winds are assumed in the model, so to prevent rapid dispersion of the BC in the plume. The height of burst is determined as twice the fallout-free height, so to minimize building damage and to maximize the number of ignited locations. Fire propagation in the model occurs primarily via convective heat transfer and spotting ignition due to firebrands, and the spotting ignition model employs relatively high ignition probabilities as another worst case condition

[...]

The wind speed profile was chosen to be high enough to maintain fire spread but low enough to keep the plume from tilting too much to prevent significant plume rise (worst case). Wind direction is set as 270° (west-to-east, +x direction) for all heights, with no directional shear, and a weakly stable atmosphere was used below the tropopause to assist plume rise (worst case).

David:

Underestimating soot injected into the stratosphere

Secondary ignitions were neglected in Reisner 2018:

The impact of secondary ignitions, such as gas line breaks, is not considered and research is still needed to determine their impact on a mass fire's intensity. For example, evidence of secondary ignitions in the Hiroshima conflagration ensuing the nuclear bombing (National Research Council, 1985), or utilization of incendiary bombs in Dresden and Hamburg (Hewitt, 1983), led to unique conditions that resulted in significantly enhanced fire behavior.

David commented “existing heating/cooking fires spreading” “is all that was required for the San Francisco earthquake firestorm”. Bean noted “urban fires are down 50% since the 1940s and way more since 1906”, when the San Francisco earthquake and firestorm happened. GPT-4 very much agreed urban fires are now less likely to occur[29]. On the other hand, David commented:

As noted in Robock 2019, fires, and therefore soot production and elevation, were only modelled for 40 min:

Reisner et al. stated that their fires were of surprisingly short duration, “because of low wind speeds and hence minimal fire spread, the fires are rapidly subsiding at 40 min.” However, they do not show the energy release rate so that we can tell if the fuel has been consumed within 40 minutes. And their claims of low wind speed are erroneous, as they choose wind speeds higher than typically observed in Atlanta. Real-world experience with firestorms such as in Hiroshima or Hamburg during World War II or in San Francisco after the 1906 earthquake (London, 1906), and of conflagrations, such as after the bombing of Tokyo during World War II (Caidan, 1960), suggests that a 40-minute mass fire is a dramatic underestimate; most of these fires last for many hours. A longer fire would make available more heat and buoyancy to inject soot to higher altitudes. If their fire had a short duration, and did not simply blow off their grid, it was likely due to the low fuel load assumed in their target area and combustion that did not consume all of the available fuel.

Reisner 2019 replied that:

Another important point concerning these simulations is that the rapid burning of the fine fuels leads to both a reduction in oxygen that limits combustion and a large upward transport of heat and mass that stabilizes the upper atmosphere above and downwind of the firestorm. These dynamical and combustion processes help limit fire activity and BC production once the fine material has been consumed (timescale < 30 min). Hence, the primary time period for BC injection that could impact climate occurs during a relatively short time period compared to the entirety of the fire or the continued burning and/or smoldering of thicker fuels.

[...]

While the full duration is not modeled, we argue that the primary atmospheric response from a nuclear detonation is the rapid burning of the fine fuels. Thick fuels will take longer to burn but will induce less atmospheric response and produce and inject less BC to upper atmosphere. Further, during the later time period, the upper atmosphere stabilizes from the large injection of heat and mass. Firestorms such as Dresden were maintained not only by burning of thick fuels but also by the injection of highly flammable fuel from the incendiary bombs, which we believe acted as fine fuel replacement.

In any case, it still seems to me Robock 2019 might have a valid point:

I guess these 2 arguments are stronger for firestorms, which were not produced in Reisner 2018. The 2 simulations of Reisner 2019 concern firestorms, but I would like to see:

Overestimating/Underestimating soot injected into the stratosphere

Robock 2019 contended that:

Water vapor allows for latent heat release when clouds form. Numerous studies have shown that sensible and latent heat release is essential to lofting smoke in either firestorms (e.g., Penner et al., 1986) or conflagrations (Luderer et al., 2006). Reisner et al. stated “A dry atmosphere was utilized, and pyrocumulus impacts or precipitation from pyro-cumulonimbus were not considered. While latent heat released by condensation could lead to enhanced vertical motions of the air, increased scavenging of soot particles by precipitation is also possible. These processes will be examined in future studies using HIGRAD-FIRETEC.” By not considering pyrocumulonimbus clouds, which by the latent heat of condensation can inject soot into the stratosphere, they have eliminated a major source of buoyancy that would loft the soot. They seem to suggest that any lofting of soot would be balanced by significant precipitation scavenging, but there is no evidence for that assumption. In fact, forest fires triggered pyrocumulonimbus clouds that lofted soot into the lower stratosphere in August 2017 over British Columbia, Canada. Over the succeeding weeks, the soot was lofted many more kilometers, as observed by satellites, because it was heated by the Sun (Yu et al., 2019). This fire is direct evidence of the self-lofting process Robock et al. (2007) and Mills et al. (2014) modeled before. It also shows that precipitation in the cloud still allowed massive amounts of smoke to reach the stratosphere.

Reisner 2019 replied that:

The latent heat release may or may not lead to enhanced smoke lofting depending on the complex microphysical and mesoscale processes. Robock et al. (2019) cite wildfires in extremely dry conditions that prevent precipitation formation and do not model the process. Precipitation scavenging of BC can be much higher than is currently assumed (20%) (Yu 2018). We and the community agree that research is needed to quantify the role latent heat plays in BC movement and washout.

Meanwhile, Tarshish 2022 concluded:

Direct numerical and large-eddy simulations indicate that dry firestorm plumes possess temperature anomalies that are less than the requirements for stratospheric ascent by a factor of two or more. In contrast, moist firestorm plumes are shown to reach the stratosphere by tapping into the abundant latent heat present in a moist environment. Latent heating is found to be essential to plume rise, raising doubts about the applicability of past work [namely, Reisner 2018 and Reisner 2019] that neglected moisture.

Nonetheless, as hinted by Reisner 2019, moisture not only helps the emitted soot reach the stratosphere, but it also contributes to it being rained out. This latter process is not modelled in Tarshish 2022:

A limitation of the theory and simulations presented here is the absence of soot microphysics. Soot aerosols provide cloud condensation nuclei that may alter the drop size distribution and impact auto-conversion. This aerosol effect is expected to invigorate convection (Lee et al., 2020), lofting the plume higher. Coupling soot to microphysics, however, also enables soot to rain out, which could remove much of the soot from the rising plume as suggested in Penner et al. (1986). Given the essential role of moisture in lofting firestorm plumes we identified here, future research should investigate how these second-order microphysical effects impact firestorm soot transport. Another aspect not addressed here and deserving of future study is the radiative lofting of plumes, which has been observed to substantially lift wildfire plume soot for months after the fire (Yu et al., 2019).

Available fuel

Available fuel for counterforce

For counterforce, I calculated an available fuel per burned area of 3.07 g/cm^2 (= (11*10^6*2.06*10^3 + 8*10^9)*10^(-5*2)). I got this from the 1st equation in Box 1 of Toon 2008:

Available fuel for countervalue

For countervalue, I considered an available fuel per burned area of 21.1 g/cm^2 (= (0.00770*34.6 + 0.325*27.9 + 0.079*13.9 + 0.00770*13.0 + 0.140*8.95)/(0.00770 + 0.325 + 0.079 + 0.00770 + 0.140)). This is a weighted mean with:
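A sketch of both fuel loads; the numeric factors and weights are the ones in the formulas above, and I label them only generically since their sources are detailed in the surrounding text:

```python
# Counterforce: available fuel per burned area, from the 1st equation in
# Box 1 of Toon 2008, with the factors as in the formula above.
counterforce = (11e6 * 2.06e3 + 8e9) * 10 ** (-5 * 2)
print(round(counterforce, 2))   # ≈ 3.07 (g/cm^2)

# Countervalue: weighted mean of the fuel loads above (g/cm^2).
weights = [0.00770, 0.325, 0.079, 0.00770, 0.140]
fuel_loads = [34.6, 27.9, 13.9, 13.0, 8.95]
countervalue = sum(w * f for w, f in zip(weights, fuel_loads)) / sum(weights)
print(round(countervalue, 1))   # ≈ 21.1 (g/cm^2)
```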

For context, my available fuel per area for countervalue nuclear detonations is:

Famine deaths due to the climatic effects

I expect 392 M deaths (= 0.0443*8.86*10^9) following a nuclear war which resulted in 22.1 Tg of soot being injected into the stratosphere. I found this by multiplying:

I explain these estimates in the next sections.

Famine death rate due to the climatic effects

Defining large nuclear war

I agree with Christian that deaths in a nuclear war increase superlinearly with offensive nuclear detonations. Like Luisa, I guess famine deaths due to the climatic effects increase logistically with soot injected into the stratosphere. For simplicity, I approximate the logistic function as a piecewise linear function which is 0 for low levels of soot.

The minimum offensive nuclear detonations based on which I define a large nuclear war marks the end of the region for which famine deaths due to the climatic effects are 0. From Fig. 5b of Xia 2022, for the case in which there is no international food trade, all livestock grain is fed to humans, and there is no household food waste (top line), adjusted to include international food trade without equitable distribution dividing by 94.8 % food support “when food production does not change [0 Tg] but international trade is stopped”, there are no deaths for 10.5 Tg[39]. I guess the societal response will have an effect equivalent to assuming international food trade, all livestock grain being fed to humans, and no household food waste (see next section), so I supposed the famine deaths due to the climatic effects are negligible up to the climate change induced by 10.5 Tg of soot being injected into the stratosphere in Xia 2022.

I believe Xia 2022 overestimates the duration of the climatic effects, so I considered the linear part of the logistic function starts at 11.3 Tg (instead of 10.5 Tg):

The similarity between the soot injections just above means accounting for the shorter climatic effects ends up making only a minor difference. What matters is the severity of the worst initial years, and my e-folding time is still sufficiently long for these to be roughly as bad.

I estimated 0.0491 Tg of soot injected into the stratosphere per countervalue nuclear detonation, so I expect an injection of 11.3 Tg to require 230 (= 11.3/0.0491) countervalue nuclear detonations. Since I only expect 21.5 % of offensive nuclear detonations to be countervalue, I defined a large nuclear war as having at least 1.07 k (= 230/0.215) offensive nuclear detonations, and assumed no famine deaths due to the climatic effects for less than that.
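The threshold arithmetic as code:

```python
# Minimum offensive nuclear detonations defining a large nuclear war.
soot_threshold = 11.3             # Tg above which famine deaths start increasing
soot_per_countervalue = 0.0491    # Tg per countervalue detonation
countervalue_fraction = 0.215     # countervalue fraction of offensive detonations

countervalue_needed = soot_threshold / soot_per_countervalue
offensive_needed = countervalue_needed / countervalue_fraction
print(round(countervalue_needed))   # ≈ 230
print(round(offensive_needed))      # ≈ 1070
```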

David thinks it is much more accurate for famine deaths due to the climatic effects to start increasing linearly after an injection of soot into the stratosphere of 0 Tg rather than 11.3 Tg, because there is already significant famine now. The deaths from nutritional deficiencies and protein-energy malnutrition were 252 k and 212 k in 2019, and I suspect the real death toll is about 1 order of magnitude higher[42]. Nevertheless, I am not trying to estimate all famine deaths. I am only attempting to arrive at the famine deaths due to the climatic effects, not those resulting directly or indirectly from infrastructure destruction. I expect such destruction to cause substantial disruptions to international food trade. As Matt Boyd commented:

Much of the catastrophic risk from nuclear war may be in the more than likely catastrophic trade disruptions, which alone could lead to famines, given that nearly 2/3 of countries are net food importers, and almost no one makes their own liquid fuel to run their agricultural equipment.

Relatedly, from Xia 2022:

Impacts in warring nations are likely to be dominated by local problems, such as infrastructure destruction, radioactive contamination and supply chain disruptions, so the results here apply only to indirect effects from soot injection in remote locations.

Famine death rate due to the climatic effects of large nuclear war

I would say the famine death rate due to the climatic effects of a large nuclear war would be 4.43 % (= 1 - (0.993 + (0.902 - 0.993)/(24.6 - 14.6)*(18.7 - 14.6))). I calculated this:
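
The calculation is a linear interpolation of the survival fraction between two bracketing data points; a minimal sketch, with the numbers from the formula above:

```python
def interpolate(x, x0, y0, x1, y1):
    """Linearly interpolate between (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) / (x1 - x0) * (x - x0)

# Survival fraction at the expected level of climatic effects, then death rate.
survival = interpolate(18.7, 14.6, 0.993, 24.6, 0.902)
death_rate = 1 - survival  # ~4.43 %
```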

Some reasons why my famine death rate due to the climatic effects may be too high or too low:

I stipulate the above roughly cancel out, although I am not so confident. I think high income countries without significant infrastructure destruction would respond particularly well. Historically, famines have only affected countries with low real GDP per capita.

On the topic of lower consumption of healthy and unhealthy food, Alexander 2023 studies the effect of energy and export restrictions on deaths due to changes in red meat, fruit and vegetable consumption, and the fractions of the population who are underweight, overweight and obese. Lower red meat consumption, and fewer people being overweight and obese, decrease deaths. Lower consumption of fruits and vegetables, and more people being underweight, increase deaths. The results of the study are below.

The figure suggests the net effect corresponds to an increase in deaths. I am confident this would be the case for Sub-Saharan Africa, but not so much for other regions. The fraction of calories coming from animals increases with GDP per capita, so cheaper diets have a lower fraction of calories coming from meat, and the relative reduction in meat consumption would be higher than that in fruits and vegetables. I think Alexander 2023 takes this into account:

As prices increase, the model represents a consumption shift away from ‘luxury’ goods such as meat, fruit, and vegetables back towards staple crops, as well as lower consumption overall.

Alexander 2023 still concludes higher prices would lead to more deaths, but I wonder whether rationing efforts would ensure sufficient consumption of fruits and vegetables. I sense the deaths owing to decreased consumption of fruits and vegetables are overestimated in the figure above, but I have barely looked into the question.

Population

I considered a global population of 8.86 G (= (8.61 + (9.59 - 8.61)/(2052 - 2032)*(2037 - 2032))*10^9):

Uncertainty

To obtain a distribution for the famine death rate due to the climatic effects of a large nuclear war, without running a Monte Carlo simulation, I assumed a beta distribution with a ratio between the 95th and 5th percentiles equal to 702 (= e^((ln(3.70)^2 + ln(4.39)^2 + ln(68.3)^2 + ln(100)^2)^0.5)). This is the result of supposing the following follow independent lognormal distributions with ratios between the 95th and 5th percentile equal to[45]:

Simpler approaches to determine the ratio would lead to significantly different results:

Ideally, I would have run a Monte Carlo simulation with my best guess distributions, instead of assuming just lognormals. Regardless, I would have used independent distributions for simplicity, so the results would arguably be similar.
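
For independent lognormal factors, the log of the combined 95th-to-5th percentile ratio is the root sum of squares of the logs of the factor ratios, because the log variances add. A minimal sketch with the four ratios above:

```python
from math import exp, log

# Ratios between the 95th and 5th percentiles of the four lognormal factors.
factor_ratios = [3.70, 4.39, 68.3, 100]
# Log variances add for products of independent lognormals, so the log
# percentile ratios combine as a root sum of squares.
combined_ratio = exp(sum(log(r) ** 2 for r in factor_ratios) ** 0.5)  # ~702
```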

For an expected famine death rate due to the climatic effects of 4.43 %, a beta distribution with 95th percentile 702 times the 5th percentile has alpha and beta parameters equal to 0.522 and 11.3. The respective CDF is below. The horizontal axis is the famine death rate due to the climatic effects, and the vertical one the probability of less than a certain death rate. The 5th and 95th percentile famine death rate due to the climatic effects are 0.0233 % and 16.4 %, which correspond to 2.06 M (= 2.33*10^-4*8.86*10^9) and 1.45 G (= 0.164*8.86*10^9) deaths given at least one offensive nuclear detonation before 2050.
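
Under the stated parameters, the quoted mean and percentiles can be checked with SciPy (a sketch, assuming `scipy` is available):

```python
from scipy.stats import beta

# Beta distribution for the famine death rate due to the climatic effects.
alpha, beta_ = 0.522, 11.3
mean = alpha / (alpha + beta_)                  # ~4.43 %
p5, p95 = beta.ppf([0.05, 0.95], alpha, beta_)  # ~0.0233 % and ~16.4 %
```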

CDF of the famine death rate due to the climatic effects given at least one offensive nuclear detonation before 2050.

Given my 3.30 % probability of a large nuclear war before 2050, there is a 96.7 % (= 1 - 0.0330) chance of negligible famine deaths due to the climatic effects before then, thus my 5th percentile deaths before 2050 are 0 (as 0.967 > 0.95). My 95th percentile corresponds to the 84.4th percentile (= 1 - (1 - 0.95)/0.32) famine death rate due to the climatic effects given at least one offensive nuclear detonation before 2050[46], which is 9.06 %[47], equivalent to 803 M (= 0.0906*8.86*10^9) deaths.
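
The percentile mapping used above follows from the famine-death distribution being conditional on at least one offensive nuclear detonation; a minimal sketch:

```python
def conditional_percentile(q, p_event):
    """Map an unconditional percentile q to the corresponding percentile of the
    distribution conditional on an event of probability p_event (valid for
    q > 1 - p_event; below that, the unconditional percentile is 0)."""
    return 1 - (1 - q) / p_event

percentile = conditional_percentile(0.95, 0.32)  # 0.84375, i.e. the 84.4th percentile
```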

Summarising, since there are 26 years (= 2050 - 2024) before 2050, my best guess for the annual famine deaths due to the climatic effects of nuclear war before then is 496 k (= 12.9*10^6/26), and my 5th and 95th percentile are 0 and 30.9 M (= 803*10^6/26). My 95th percentile is 62.3 (= 30.9*10^6/(496*10^3)) times my best guess, which means there is lots of uncertainty.

For context, my best guess for the famine deaths due to the climatic effects is similar to the 415 k caused by homicides in 2019, and my 95th percentile close to the 28.6 M (= (18.56 + 10.08)*10^6) caused by cardiovascular diseases and cancers in 2019.

Bear in mind my estimates only refer to the famine deaths due to the climatic effects. I exclude famine deaths resulting directly or indirectly from infrastructure destruction, and heat mortality.

Cost-effectiveness of activities related to resilient food solutions

I calculated the expected cost-effectiveness of activities related to resilient food solutions, at decreasing famine deaths due to the climatic effects of nuclear war, from the ratio between[48]:

I arrived at the following values:

The effectiveness, horizon of effectiveness, age adjustment factor, and cost are defined below.

Decreasing famine deaths due to the climatic effects would arguably shorten the recovery period, thus increasing cumulative economic output. I have not analysed this indirect effect, hence underestimating cost-effectiveness, for consistency with neartermist cost-effectiveness analyses. These typically focus on the benefits to the people who were saved, not on how they change economic growth via their children.

Effectiveness

Based on Denkenberger 2016, I set the effectiveness to:

Denkenberger 2016 truncates the difference between the 2 lognormals of the 1st bullet, and those of the 2nd and 3rd, at 1 % (and David thinks at 100 % too). For simplicity, I used the means of non-truncated lognormals, but I do not think this matters.

Horizon of effectiveness

Based on Denkenberger 2016, I assumed the horizon of effectiveness to be:

Age adjustment factor

I estimated an age adjustment factor of 82.5 % (= 42.1/51). I got the 42.1 years (= 48.4*0.869) of healthy life which the mean person saved would live from the product between:

For simplicity, I am:

Cost

Based on Denkenberger 2016, I determined the reciprocal of the expected reciprocal of the cost to be:

Results

The results are summarised in the tables below.

Probability of nuclear war

| Probability of… | Value |
| --- | --- |
| At least one offensive nuclear detonation before 2050 | 32 % |
| Large nuclear war conditional on the above | 10.3 % |
| Large nuclear war before 2050 (product of the above) | 3.30 % |

Soot injected into the stratosphere

| Metric | Expected value |
| --- | --- |
| Offensive nuclear detonations in a large nuclear war | 2.09 k |
| Yield per countervalue nuclear detonation (kt) | 189 |
| Soot injected into the stratosphere per countervalue yield (Tg/kt) | 2.60*10^-4 |
| Soot injected into the stratosphere per countervalue nuclear detonation (Tg) | 0.0491 |
| Soot ejected into the stratosphere in a large nuclear war (Tg) | 22.1 |

Famine deaths due to the climatic effects

| Metric | Expected value (5th to 95th percentile) |
| --- | --- |
| Famine death rate due to the climatic effects in a large nuclear war | 4.43 % (0.0233 % to 16.4 %) |
| Famine deaths due to the climatic effects in a large nuclear war | 392 M (2.06 M to 1.45 G) |
| Famine deaths due to the climatic effects of nuclear war before 2050 | 12.9 M (0 to 803 M) |
| Annual famine deaths due to the climatic effects of nuclear war before 2050 | 496 k (0 to 30.9 M) |

Cost-effectiveness of activities related to resilient food solutions

| Activity | Cost to save a life ($/life) |
| --- | --- |
| Planning | 29.3 |
| Research | 31.2 |
| Planning, research and development | 28.7 |
| Planning, research, development and training | 9.62 k |

Discussion

2 views on soot injected into the stratosphere

My best guess for the soot injected into the stratosphere per countervalue yield is 2.60*10^-4 Tg/kt. I obtained this giving the same weight to results I inferred from Reisner’s and Toon’s views, but they differ by a factor of 68.3:

Consequently, if I attributed all weight to the result I deduced from Reisner’s (Toon’s) view, my estimates for the expected mortality would become 0.121 (8.27) times as large. In other words, my best guess is hundreds of millions of famine deaths due to the climatic effects, but tens of millions putting all weight in the result I deduced from Reisner’s view, and billions putting all weight in the one I deduced from Toon’s view. Further research would be helpful to figure out which view should be weighted more heavily.
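
The combination of the two views can be sketched as follows; the two view-specific values are implied by the stated geometric mean and the factor of 68.3 between them, so treat them as illustrative rather than quoted figures:

```python
from math import sqrt

geo_mean = 2.60e-4  # Tg/kt, best guess (geometric mean of the 2 views)
ratio = 68.3        # factor between the Toon- and Reisner-derived estimates

# Implied view-specific estimates of soot injected per countervalue yield.
reisner_view = geo_mean / sqrt(ratio)  # ~3.15e-5 Tg/kt
toon_view = geo_mean * sqrt(ratio)     # ~2.15e-3 Tg/kt

combined = sqrt(reisner_view * toon_view)  # recovers the 2.60e-4 Tg/kt best guess
```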

Xia 2022

I calculated 392 M famine deaths due to the climatic effects of a large nuclear war for:

The results of Table 1 of Xia 2022, which are in the table below, imply:

Xia 2022

| Soot injected into the stratosphere (Tg) | Total yield (Mt) | Number of people without food at the end of Year 2 (M) | Number of people without food at the end of Year 2 per soot injected into the stratosphere (M/Tg) | Number of people without food at the end of Year 2 per total yield (M/Mt) |
| --- | --- | --- | --- | --- |
| 5 | 1.50 | 255 | 51.0 | 170 |
| 16 | 3.75 | 926 | 57.9 | 247 |
| 27 | 12.5 | 1.43 k | 52.8 | 114 |
| 37 | 25.0 | 2.08 k | 56.2 | 83.2 |
| 47 | 50.0 | 2.51 k | 53.4 | 50.2 |
| 150 | 440 | 5.34 k | 35.6 | 12.1 |

So my famine deaths due to the climatic effects of a large nuclear war of 17.7 M/Tg (per soot injected into the stratosphere) and 0.992 M/Mt (per total yield) are 32.3 % (= 17.7/54.8) and 7.81 % (= 0.992/12.7) of those of Xia 2022, which I therefore deem too pessimistic.
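
The last 2 columns of the table can be recomputed from the first 3 (small discrepancies reflect rounding of the number of people without food):

```python
# (soot in Tg, total yield in Mt, people without food at the end of Year 2 in M)
rows = [
    (5, 1.50, 255),
    (16, 3.75, 926),
    (27, 12.5, 1430),
    (37, 25.0, 2080),
    (47, 50.0, 2510),
    (150, 440, 5340),
]
per_soot = [people / soot for soot, _, people in rows]           # M/Tg
per_yield = [people / yield_mt for _, yield_mt, people in rows]  # M/Mt
```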

Luisa’s analyses

I have updated one parameter of Luisa’s nuclear winter Guesstimate model to make its results more comparable with mine. Whereas it considers a “world population, excluding Australia and New Zealand”[52], of 7.5 G, I have used 8.83 G (= 8.86*10^9*(1 - 0.00391)). I computed this from the product between:

The 5 k ordered samples are here, and have a mean of 6.69 G deaths. Luisa estimated an annual probability of 0.38 % for a nuclear war between the United States and Russia, i.e. 9.42 % (= 1 - (1 - 0.0038)^(2050 - 2024)) before 2050. Luisa does not explicitly define nuclear war, but my interpretation of the post is that it means at least one offensive nuclear detonation, which Luisa confirmed[53]. Similarly, I take Luisa’s nuclear winter post to be conditional on at least one offensive nuclear detonation in the United States or Russia, which Luisa also confirmed[54].

As a consequence, Luisa’s expected deaths before 2050 would be 630 M (= 6.69*10^9*0.0942) accounting for nuclear wars between the United States and Russia, and arguably significantly more if others are included[55]. My estimate of 12.9 M deaths is 2.05 % (= 12.9*10^6/(630*10^6)) of Luisa’s, so I would say her results are significantly pessimistic. I end up agreeing with Luisa that:

If we discounted the expected harm caused by US-Russia nuclear war for the fact that the nuclear winter hypothesis is somewhat suspect, the expected harm could shrink substantially.

I am also surprised by Luisa’s distribution for the famine death rate due to the climatic effects. Her 5th and 95th percentile are 41.0 % and 99.6 %, which I think are too close and high. According to my distribution, the probability of the famine death rate due to the climatic effects being at least 41.0 % given one offensive nuclear detonation before 2050 is 0.00718 %[56]. The probability is actually higher due to model uncertainty[57]. In any case, Luisa’s 5th percentile population loss of 41.0 %, conditional on one offensive nuclear detonation in the United States or Russia, does seem off. So much so that it prompted me to recheck her Guesstimate model.

The 5th percentile death rate is 41.1 % (= 3.63/8.83), which checks out. I guess this super pessimistic result has gone unnoticed because people think “US-Russia nuclear exchange” refers to thousands of detonations, but it is only supposed to refer to at least one.

Michael’s analysis

Mike says that:

If firestorms do occur in any serious numbers, for example in half of cases as with the historical atomic bombings, a nuclear winter is still a real threat. Even assuming lower fuel loads and combustion, you might get 3 degrees centigrade cooling from 750 detonations; you do not need to assume every weapon leads to a firestorm to be seriously concerned.

However, the above, which is illustrated in Mike’s graph below, only holds under Toon’s view, not Reisner’s. As I discussed, the 2nd simulation of Reisner 2019 has high fuel load, and produces a firestorm, but results in basically the same fraction of emitted soot being injected into the stratosphere in the 1st 40 min as the simulations of Reisner 2018, which have low fuel load, and did not produce firestorms. The soot injected into the stratosphere per countervalue yield I inferred from Toon’s view is 68.3 times the one I deduced from Reisner’s view, and I think one should give some weight to both.

Having in mind the graph above, Mike says:

To stress, this argument [“nuclear winter is still a real threat”] isn’t just drawing two lines at the high/low estimates, drawing one between them and saying that is the reasonable answer. This is an argument that any significant targeting of cities (for example 250+ detonations) with high yield strategic weaponry presents a serious risk of a climate shock, if at least some of them cause firestorms.

Since the above is only true under Toon’s view, I believe Mike is in effect drawing a line (in light red and orange) between the bottom and top lines (in yellow and dark red), thus underweighting Reisner’s view. Giving the same weight to Toon’s and Reisner’s view implies drawing a line between the bottom and top lines, but not on a linear scale as above. Since the results I deduced for the views differ by 2 orders of magnitude, I think one should draw that line on a logarithmic scale, i.e. combine the views using the geometric mean instead of the mean, as I did.

One may argue the geometric mean is not adequate based on the following. If the soot injected into the stratosphere per countervalue yield I deduced from Reisner’s and Toon’s view respects the 5th and 95th percentile of a lognormal distribution, the geometric mean is the median of the distribution, but what matters is its mean. This would be 5.93*10^-4 Tg/kt, i.e. 2.28 (= 5.93*10^-4/(2.60*10^-4)) times my best guess. I did not follow this approach because:

I guess it is better to treat the results I inferred from Reisner’s and Toon’s view as random samples of a lognormal distribution, as opposed to matching them to specific quantiles. I used the geometric mean, which is the MLE of the median of a lognormal distribution[18].
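
The 2.28 factor above can be sketched as follows, assuming the two view-derived estimates were the 5th and 95th percentiles of a lognormal:

```python
from math import exp, log

ratio_95_5 = 68.3   # ratio between the 2 view-derived estimates
median = 2.60e-4    # Tg/kt, the geometric mean used as best guess

# Standard deviation of the underlying normal (1.645 is the z-score of the
# 95th percentile), then the lognormal mean, which exceeds the median.
sigma = log(ratio_95_5) / (2 * 1.645)
mean = median * exp(sigma ** 2 / 2)  # ~5.93e-4 Tg/kt, ~2.28 times the median
```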

Note that, before getting my best guess using the geometric mean, I adjusted Reisner’s and Toon’s view based on my available fuel per area for countervalue nuclear detonations, and Reisner’s view for the emitted soot per burned fuel. I ultimately obtained famine deaths due to the climatic effects of a large nuclear war per total yield 7.81 % of those of Xia 2022, which relies on Toon’s view.

I also noted linearly extrapolating the top line of Mike’s graph would lead to 30 Tg for 0 detonations. In reality, there would be 0 Tg for 0 detonations, so one cannot linearly extrapolate. The reason is that, under Toon’s view, the soot injected into the stratosphere increases sublinearly for few detonations, as illustrated in the figure here. This is because Toon 2008:

Assumed regions were targeted in decreasing order of population [and therefore soot injected into the stratosphere] within 5.25 km of ground zero

I do not endorse this assumption.

Comparison with direct deaths

My analysis does not cover direct deaths, but I guess they would be 337 M (= (164 + (360 - 164)/(440 - 50)*(395 - 50))*10^6) in a large nuclear war:

I expect 392 M famine deaths due to the climatic effects of a large nuclear war, which suggests these would be 1.16 (= 392*10^6/(337*10^6)) times the direct deaths. So I disagree with Bean that:

All available data suggests it [“climatic impact”] would be dwarfed by the direct (and very bad) impacts of the nuclear war itself.

Putting all weight in the soot injected into the stratosphere per countervalue yield I deduced from Reisner’s or Toon’s view, the famine deaths due to the climatic effects would be 14.0 % (= 1.16*0.121) or 9.59 (= 1.16*8.27) times the direct deaths. In other words, my best guess is that famine deaths due to the climatic effects are within the same order of magnitude of the direct deaths, but 1 order of magnitude lower putting all weight in the result I inferred from Reisner’s view, and 1 higher putting all weight in the one I inferred from Toon’s view.

Cost-effectiveness of activities related to resilient food solutions

Nearterm perspective

The median cost to save a life among GiveWell’s 4 top charities is 5 k$/life. The ratio between this and the costs linked to the activities related to resilient food solutions is:

This suggests planning, research and development related to resilient food solutions is 2 (= log10(174)) orders of magnitude more cost-effective than GiveWell’s top charities. The above results are based on my estimates for the expected famine deaths due to the climatic effects of nuclear war, and the guesses provided in Denkenberger 2016 for the cost and effectiveness of activities related to resilient food solutions. Their cost-effectiveness would tend to be higher due to also decreasing deaths from other severe food shocks, such as those resulting from abrupt climate change, engineered crop pathogens, or other abrupt sunlight reduction scenarios (ASRSs), namely volcanic or impact winters.

On the other hand, I suspect the values from Denkenberger 2016 are very optimistic, such that I am greatly overestimating the cost-effectiveness. My reasons for this are similar to the ones given by Joel Tan in the context of concluding arsenal limitation is 5 k times as effective as GiveWell’s top charities:

The headline cost-effectiveness will almost certainly fall if this cause area is subjected to deeper research: (a) this is empirically the case, from past experience; and (b) theoretically, we suffer from optimizer's curse (where causes appear better than the mean partly because they are genuinely more cost-effective but also partly because of random error favouring them, and when deeper research fixes the latter, the estimated cost-effectiveness falls). As it happens, CEARCH intends to perform deeper research in this area, given that the headline cost-effectiveness meets our threshold of 10x that of a GiveWell top charity.

I guess the true cost-effectiveness of planning, research and development related to resilient food solutions is 2 orders of magnitude lower than I estimated, i.e. within the same order of magnitude of that of GiveWell’s top charities. Consequently, instead of expecting these 3 activities to reduce famine deaths at 0.379 %/M$ (= 0.264/(69.4*10^6)), as suggested by Denkenberger 2016, I think their effectiveness to cost ratio is more like 0.00379 %/M$. Note this adjustment is not resilient.

Furthermore, I have argued corporate campaigns for chicken welfare are 1.71 k times as cost-effective as GiveWell’s top charities, i.e. 3 orders of magnitude more cost-effective. If so, such campaigns would also be 3 orders of magnitude more cost-effective than activities related to resilient food solutions.

Longterm perspective

I am open to the idea that nuclear war can have longterm implications. As William MacAskill argued on The 80,000 Hours Podcast:

It’s quite plausible, actually, when we look to the very long-term future, that that’s [whether artificial general intelligence is developed in “liberal democracies” or “in some dictatorship or authoritarian state”] the biggest deal when it comes to a nuclear war: the impact of nuclear war and the distribution of values for the civilisation that returns from that, rather than on the chance of extinction [which is very low].

Nonetheless, I believe it would be a surprising and suspicious convergence if broadly decreasing starvation due to the climatic effects of nuclear war was among the most cost-effective interventions to increase democracy levels, or positively shape the development of transformative artificial intelligence (TAI). At least a priori:

For these reasons, I think activities related to resilient food solutions are not cost-effective at increasing the longterm value of the future, neither via decreasing the risk of human extinction[59], nor improving the values of TAI. By not cost-effective, I mostly mean I do not see those activities being competitive with the best opportunities to decrease AI risk, and improve biosecurity and pandemic preparedness at the margin, like Long-Term Future Fund’s marginal grants.

As another factor informing my view, I conclude in the next section that the expected importance of accelerating economic growth via decreasing famine deaths due to the climatic effects of nuclear war decreases with mortality[60]. Some important caveats:

Rapid diminution of the longterm value of accelerating economic growth

Under my assumptions, the longterm value of accelerating economic growth via decreasing deaths due to the climatic effect of nuclear war presents what I think David Thorstad calls rapid diminution. In essence, the right tail of the probability density function (PDF) of the famine death rate due to the climatic effects decays much faster than the growth in the longterm value of saving lives due to accelerating economic growth, hence the expected value of saving lives for higher famine death rate due to the climatic effects also decreases. To illustrate, the 90th, 99th and 99.9th percentile famine deaths due to the climatic effects of a large nuclear war have:

Therefore improving worst case outcomes does not appear to be the driver of the overall expected value. In addition, my expected famine death rate due to the climatic effects of 4.43 % corresponds to the 66.8th percentile outcome of a large nuclear war[66]. These suggest maximising the number of (expected) lives saved is a better proxy for maximising longterm value due to accelerating economic growth than the heuristic of minimising the probability of a given population loss[67].

Relatedly, there is a case for longtermists to use standard cost-benefit analyses in the political sphere. Denkenberger 2016 and Denkenberger 2018 are examples of following such an approach in the context of activities related to resilient food solutions.

For reference, improving worst case outcomes is also not the driver of the longterm value of accelerating economic growth based on Luisa’s results. Her expected famine death rate due to the climatic effects of 75.5 % matches the 47.1st percentile outcome given at least one offensive nuclear detonation in the United States or Russia, and there is rapid diminution too. Her 90th, 99th and 99.9th percentile deaths have:

I see some potential red flags above. I expected:

Left tails

It is often hard to find interventions which are robustly beneficial. In my mind, decreasing the famine deaths due to the climatic effects of nuclear war is no exception, and I think it is unclear whether that is beneficial or harmful from both a nearterm and longterm perspective.

The benevolence, intelligence, and power (BIP) framework suggests how saving human lives may not be sufficient for an intervention to be beneficial. According to it:

It’s likely good to:

  1. Increase actors’ benevolence.
  2. Increase the intelligence of actors who are sufficiently benevolent
  3. Increase the power of actors who are sufficiently benevolent and intelligent

And that it may be bad to:

  1. Increase the intelligence of actors who aren’t sufficiently benevolent
  2. Increase the power of actors who aren’t sufficiently benevolent and intelligent

I see saving human lives, and the capability approach to human welfare more broadly, as mostly about increasing power, which goes to 0 if one dies. However, I am not confident increasing power in an untargeted way is good. I must emphasise not saving lives has drastically different consequences from killing people, which is much more anti-cooperative. I strongly oppose killing people, including via nuclear war[69].

All things considered, my intuition is that at the margin it would be good if interventions which are mainly cost-effective at saving lives, not at increasing longterm value, focussed more on actively minimising harmful effects on animals, and ensuring beneficial longterm effects.

Nearterm perspective

From a nearterm perspective, I am concerned with the meat-eater problem, and believe it can be a crucial consideration. The people whose lives were saved thanks to resilient food solutions would go on to eat factory-farmed animals, which may well have sufficiently bad lives for the decrease in human mortality to cause net suffering. In fact, net global welfare may be negative and declining.

I estimated the annual welfare of all farmed animals combined is -12.0 times that of all humans combined[70], which suggests not saving a random human life might be good (-12 < -1). Nonetheless, my estimate is not resilient, so I am mostly agnostic with respect to saving random human lives. There is also a potentially dominant beneficial/harmful effect on wild animals.

Accordingly, I am uncertain about whether decreasing famine deaths due to the climatic effects of nuclear war would be beneficial or harmful. I think the answer would depend on the country, with saving lives being more beneficial in (usually low income) countries with lower consumption per capita of farmed animals with bad lives. I calculated the cost-effectiveness of saving lives in the countries targeted by GiveWell’s top charities only decreases by 22.4 % accounting for negative effects on farmed animals, which means it would still be beneficial (0.224 < 1).

Some hopes would be:

Bear in mind price-, taste-, and convenience-competitive plant-based meat would not currently replace meat.

Another downside I am not too worried about is the moral hazard of preparing for the climatic effects of nuclear war. This would tend to increase the probability of a large nuclear war, and number of offensive nuclear detonations conditional on its occurrence. In the survey (S) and Anders Sandberg’s (E) model of Denkenberger 2022, it is guessed such hazard would only decrease longterm cost-effectiveness by 4 % and 0.4 % for a full scale nuclear war, and 2 % and 0.04 % for a 10 % agricultural shortfall, thus not making preparation harmful. I intuitively agree the moral hazard would not be a major effect. Nonetheless, I welcome further research like that of Ingram 2023, which investigated the public awareness of nuclear winter, and its implication for escalation control[73].

Longterm perspective

It is somewhat unclear to me whether generally mitigating the food shocks caused by nuclear war would change values for the better. I concluded it would in expectation if they were fully mitigated everywhere, but that there would still be a 1/3 chance of an overall negative effect in that case[74]. More importantly, nationally mitigating food shocks would be harmful not only in pessimistic cases, but also in expectation in 40.7 % (= 59/145) of the countries I analysed. “All results should be taken with a big grain of salt, as they rely on quite speculative assumptions”, but I would still say the sign of the longterm impact is unclear.

It also looks like there is a potential trade-off between maximising nearterm and longterm effects. Saving lives in low income countries tends to be cheaper, and consumption per capita of animals with bad lives is lower there. Nonetheless, to the extent GDP per capita is a good proxy for influence per person on the longterm future, targeting high income countries may be better if reducing famine there does lead to sufficiently better democracy levels or TAI, and is sufficiently cheap.

Nevertheless, resilient food solutions potentially having a beneficial impact on the longterm future would not automatically render the uncertainty around the nearterm effects irrelevant. Although I subscribe to expectational total hedonistic utilitarianism, and agree the expected value of the future is way higher than that of this century[75], interventions usually do not differ astronomically in expected cost-effectiveness:

My personal recommendations for funders

I encourage funders who have been supporting efforts to decrease nuclear risk (improving prevention, response or resilience) to do the following. If they aim to:

These are my personal recommendations at the margin. I am not arguing for interventions decreasing nuclear risk to receive zero resources, nor for all these to be funded via Longview’s Nuclear Weapons Policy Fund.

I agree with Giving What We Can’s recommendation for most people to donate to expert-managed funds, and have not recommended any specific organisations above.

Acknowledgements

Thanks to Anonymous Person 1, Anonymous Person 2, Anonymous Person 3, Anonymous Person 4, Anonymous Person 5, Anonymous Person 6, Anonymous Person 7, Anonymous Person 8, Anonymous Person 9, Fin Moorhouse, Stan Pinsent and Stephen Clare for feedback on the draft[78]. Thanks to GPT-4 for: coding the Colab to calculate the parameters of a beta distribution given 2 quantiles, and the Colab to obtain the parameters of a beta distribution from its mean and ratio between 2 quantiles; explaining how to estimate the ratio between the 95th and 5th percentile of the product of independent lognormal distributions given the ratios between the 95th and 5th percentile of the various factors; and feedback on the draft.

  1. ^

     1 Tg equals 1 Mt.

  2. ^

     1 G means 1 billion.

  3. ^

     Nonetheless, Luisa acknowledges that (see next section):

    If we discounted the expected harm caused by US-Russia nuclear war for the fact that the nuclear winter hypothesis is somewhat suspect, the expected harm could shrink substantially.

  4. ^

     David Denkenberger commented:

    Though this is true, my analysis had assumptions between the extremes.

  5. ^

     I presume all the soot comes from the same nuclear war.

  6. ^

     In all the simulations, the soot is arbitrarily injected during the week starting on May 15 of Year 1.

  7. ^

     “This question will resolve as Yes if there is any nuclear detonation as an act of war between January 1, 2020 and January 1, 2050. Resolution will be by credible media reports. The detonation must be deliberate; accidental, inadvertent, or testing/peaceful detonations will not qualify (see fine print). Attacks using strategic and tactical nuclear weapons are both sufficient to qualify”. I assume the detonations can be by both state and non-state actors, as nothing is said otherwise.

  8. ^

     Luisa does not explicitly define nuclear war, but my interpretation of the post is that it means at least one offensive nuclear detonation. Luisa confirmed. “Yes, I was considering just 1 nuclear detonation”.

  9. ^

     Such that the beta distribution has minimum 0.

  10. ^

     Assuming the annual probability of one offensive nuclear detonation does not change before 2050, and that one such detonation does occur before 2050, it is expected to happen 13 years (= 2037 - 2024) from now.

  11. ^

     Metaculus’ community predictions for 2032 and 2052 approximately follow a normal distribution, whose mean can be computed from the mean between the 25th and 75th percentiles. As a side note, Metaculus’ 90th percentile community predictions for 2032, 2052 and 2122 are 12 k, 21 k, and 40 k. These point towards dramatic order of magnitude increases in nuclear warheads being unlikely.

  12. ^

     Calculated here from 1 - beta.cdf(0.113, alpha, beta_).
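As a sketch, such probabilities can be reproduced with scipy's beta distribution; the alpha and beta_ below are purely illustrative, since the fitted parameters live in the linked Colab:

```python
from scipy.stats import beta

# Illustrative shape parameters only; the actual alpha and beta_ are fitted
# in the linked Colab from two quantiles of the distribution.
alpha_, beta_ = 0.5, 4.0  # hypothetical

# Probability that the quantity exceeds 0.113 (one minus the CDF):
p_exceed = 1 - beta.cdf(0.113, alpha_, beta_)

# Quantiles come from the inverse CDF, e.g. the median:
median = beta.ppf(0.5, alpha_, beta_)
```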

  13. ^

     Jeffrey Lewis clarified on The 80,000 Hours Podcast there is not a sharp distinction between counterforce and countervalue:

    And so just to explain that a little bit, or unpack that: if you look at what the United States says about its nuclear weapons today, we are explicit that we target things that the enemy values, and we are also explicit that we follow certain interpretations of the law of armed conflict. And it is absolutely clear in those legal writings that the United States does not target civilians intentionally, but that in conducting what you might call “counterforce,” there is a list of permissible targets. And they include not just nuclear forces. I think often in the EA community, people assume counterforce means nuclear forces, because it’s got the word “force,” right? But it’s not true. So traditionally, the US targets nuclear forces and all of the supporting infrastructure — including command and control, it targets leadership, it targets other military forces, and it targets what used to be called “war-supporting industries,” but now are called “war-sustaining industries.”

  14. ^

     The green line in the 3rd subfigure is 0 above the dashed black line marking the start of the stratosphere.

  15. ^

     Calculated here via beta.ppf(“quantile (0.05, 0.5 or 0.95)”, alpha, beta_). The 5th percentile might look strangely low, but I think it is fine. A null value would only mean at least 5 % chance of no more offensive nuclear detonations after the 1st one.

  16. ^

     Mean between the lowest and highest values shown on the graph of the CDF of Metaculus’ predictions for the 50th percentile.

  17. ^

     Mean between the lowest and highest values shown on the graph of the CDF of Metaculus’ predictions for the 90th percentile.

  18. ^

     For the same reasons that the mean is the maximum likelihood estimator (MLE) of the mean of a normal distribution, the geometric mean is the MLE of the median of a lognormal distribution, which I think describes the estimates well. There is a large difference between them (otherwise I would have considered a normal distribution), and they are not limited to range from 0 to 1 (otherwise I would have used a beta distribution).
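A minimal illustration, with hypothetical estimates, of why the geometric mean is the natural summary for lognormally distributed estimates:

```python
import math

# Hypothetical estimates spanning more than an order of magnitude.
estimates = [2.0, 5.0, 20.0, 50.0]

# Geometric mean = exp(mean of the logs): the MLE of a lognormal's median.
geo_mean = math.exp(sum(math.log(x) for x in estimates) / len(estimates))

# The arithmetic mean is pulled much further up by the largest estimates.
arith_mean = sum(estimates) / len(estimates)
```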

  19. ^

     The mean yield to the power of 2/3 is 30.2 kt^(2/3) (= (600*335^(2/3) + 200*300^(2/3) + 1511*90^(2/3) + 25*8^(2/3) + 384*455^(2/3) + 500*(5^(2/3)*150^(2/3))^0.5 + 288*400^(2/3) + 200*(0.3^(2/3)*170^(2/3))^0.5)/3708).
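As a check, the arithmetic above can be reproduced directly (warhead counts and yields copied from the footnote; the two variable-yield entries use the geometric mean of their yield range, as in the footnote):

```python
# Pairs of (number of warheads, yield in kt).
arsenal = [
    (600, 335), (200, 300), (1511, 90), (25, 8), (384, 455),
    (500, (5 * 150) ** 0.5), (288, 400), (200, (0.3 * 170) ** 0.5),
]
total = sum(n for n, _ in arsenal)  # 3708 warheads
mean_yield_2_3 = sum(n * y ** (2 / 3) for n, y in arsenal) / total
print(round(mean_yield_2_3, 1))  # 30.2 kt^(2/3)
```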

  20. ^

     From Nukemap:

    At 5 psi overpressure, most residential buildings collapse, injuries are universal, fatalities are widespread. The chances of a fire starting in commercial and residential damage are high, and buildings so damaged are at high risk of spreading fire. Often used as a benchmark for moderate damage in cities. Optimal height of burst to maximize this effect is 1,830 m.

  21. ^

     The mean is the MLE of the mean of a normal distribution, which I think describes the estimates well. There is not a large difference between them (otherwise I would have considered a lognormal distribution), and they are not limited to range from 0 to 1 (otherwise I would have used a beta distribution).

  22. ^

     Denkenberger 2018 argues the above quantiles are a reflection of Turco 1990. I agree. From the emitted soot and burned fuel of 105 and 5,075 Tg given in Table 2 of Turco 1990, one infers an emitted soot per available fuel of 2.07 % (= 105/5075), which is very similar to 2.13 %.

  23. ^

     Reisner 2018 notes that:

    Although FIRETEC does not presently include this capability, it does have the ability to simulate combustion of fuel and fire spread th[r]ough heat transfer, while other fire-modeling tools, such as WRF-FIRE (Coen et al., 2013) [used in Wagman 2020], employ prescribed fire spread approximations typically based on wind speed and direction.

    There is ongoing work to upgrade the models of Reisner 2018 to integrate chemical combustion modelling of soot production. From Hess 2021:

    Jon Reisner gave a seminar at the National Center for Atmospheric Research on 12 November 2019 in which he discussed the need to reduce the uncertainties and appealed to the community for help to do this (Reisner 2019). Work is underway at LANL [Los Alamos National Laboratory] to upgrade HIGRAD-FIRETEC to run faster, and to include detailed chemical kinetics (the formation of black carbon), probability density functions for the mean temperature and its variation within a grid cell, pyro-cumulus formation and the release of latent heat. Validation tests with other fire models and field data are being carried out, as well as tests on modern building materials to see if they will burn.

  24. ^

     The tropopause can be between 9 and 17 km, which encompasses both Reisner 2018’s 12 km and Wagman 2020’s 16.6 km, so there is not necessarily a contradiction. Nevertheless, I suspect these studies are using different definitions of the tropopause. I would have expected the soot injected into the stratosphere to be the most relevant proxy for the climatic effects, and the fraction of emitted soot being injected into the stratosphere of Wagman 2020 to be higher than that of Reisner 2018. Nonetheless, eyeballing the 3rd subfigure of Figure 4 of Wagman 2020, it looks like less than 10 % of emitted soot is injected into the stratosphere for a fuel load of 16 g/cm^2 (see area between the vertical axis and the black line), which is less than the 21.1 % implied by Reisner 2018.

  25. ^

     I contacted Jon Reisner, the 1st author of Reisner 2018 and Reisner 2019, on October 11 to get confirmation, and had already asked for feedback on the draft on September 22, but have not heard back.

  26. ^

     We adopt a baseline value for the rainout parameter, R (the fraction of the smoke emission not removed), of 0.8 [= 1 - 0.20], following Turco et al. (1990).

  27. ^

     Thanks to Brian Toon for clarifying this.

  28. ^

     Thanks to Bean for suggesting I looked into this.

  29. ^

     Urban Fires and Trends:

    - Early 20th Century: The early 1900s, especially before the 1940s, witnessed significant urban fires. Factors like wooden constructions, crowded urban spaces, and inadequate firefighting equipment and techniques contributed. The 1906 reference you mentioned might be related to the famous San Francisco earthquake and subsequent fires. Many cities during this era suffered large fires, prompting a push for better urban planning and fire safety.

    - Mid 20th Century: With the advent of modern building materials and techniques, fires decreased in frequency. The establishment of national fire codes and standards, and the professionalisation of firefighting, also played a significant role.

    - Late 20th Century to Present: Continued advancements in fire detection (like smoke alarms) and suppression systems (like sprinklers), coupled with public awareness campaigns, have further reduced urban fires. However, while the number of fires has generally decreased, the economic damage per fire incident (adjusted for inflation) might have increased due to the value of modern urban infrastructure.

  30. ^

     For reference, Metaculus defines countervalue as follows:

    A detonation will be considered "countervalue" if credible media reporting does not widely consider a military or industrial target as the primary target of the detonation (except for detonations on capital cities, which will always be considered countervalue without exception).

  31. ^

     Note the fraction of counterforce nuclear detonations by a country equals 1 minus the fraction of countervalue nuclear detonations by that country. The weights add up to 3.44 (= 0.492 + 0.675 + 0.921 + 0.492 + 0.860), but this being higher than 1 is not a red flag. What has to sum to less than 1 are the counterforce detonations in each country as a fraction of the total counterforce detonations, not the counterforce detonations by each country as a fraction of their offensive detonations. Since I considered 5 countries, the sum of the weights only has to be less than 5, and it is (3.44 < 5).

  32. ^

     Last year for which The World Bank has data on urban land area.

  33. ^

     The mean weight of 11.2 % (= (0.00770 + 0.325 + 0.079 + 0.00770 + 0.140)/5) being 52.1 % (= 0.112/0.215) of the fraction I supposed for the offensive nuclear detonations which will be countervalue suggests only half of them will be in the 5 aforementioned countries. I guess more than this will, in which case Metaculus’ community predictions may not be internally consistent, but there might be many detonations in other countries too. Alternatively, it may be that offensive nuclear detonations by each of the 5 countries will be significantly different. In any case, none of these potential sources of error lead in an obvious way to underestimating/overestimating the fuel load, as it is a weighted mean. The potential error is also very much bounded, as the lowest and highest fuel loads are 0.427 (= 7.08/16.6) and 1.64 (= 27.3/16.6) times my estimate of 16.6 g/cm^2.

  34. ^

     “We use the LandScan (2003) population density database as a fuel-loading database”.

  35. ^

     Year closest to 2003 for which The World Bank has data on urban land area.

  36. ^

     “For a 15-kt explosion [what was analysed], we assume the fire zone area is equal to that of the Hiroshima firestorm – 13 km2 – ignited by a weapon of about the same yield”.

  37. ^

     Arguably a good model if the countervalue detonations target city centres.

  38. ^

  39. ^

     I obtained high precision based on the pixel coordinates of the relevant points, which I retrieved with Paint.

  40. ^

     I suppose the e-folding time of stratospheric soot does not depend on the initial amount of soot.

  41. ^

  42. ^

     These numbers underestimate the death toll linked to undernutrition and micronutrient deficiencies. Ahmed 2013 says these “are responsible directly or indirectly for more than 50% of all under-5 deaths globally”. Given 5.02 M under-5 deaths in 2021, it sounds like more than 2.51 M (= 0.5*5.02*10^6) under-5 deaths are connected to undernutrition and micronutrient deficiencies, i.e. at least 5.41 (= 2.51/0.464) times the 464 k (= (252 + 212)*10^3) deaths caused by nutritional deficiencies and protein-energy malnutrition in 2019.

  43. ^

     Assuming such meat comes from farmed animals.

  44. ^

     According to Open Philanthropy:

    GiveWell uses moral weights for child deaths that would be consistent with assuming 51 years of foregone life in the DALY framework (though that is not how they reach the conclusion).

  45. ^

     Given 2 lognormal distributions X_1 and X_2, and Y = X_1 X_2, the ratio between the 95th and 5th percentile of Y is e^((ln(r_1)^2 + ln(r_2)^2)^0.5), where r_1 and r_2 are the ratios between the 95th and 5th percentile of X_1 and X_2. To explain, if there is a probability of p_1 and p_2 that ln(X_i) is no larger than ln(x_i_1) and ln(x_i_2), the z-scores of these are z_1 = (ln(x_i_1) - E(ln(X_i)))/V(ln(X_i))^0.5 and z_2 = (ln(x_i_2) - E(ln(X_i)))/V(ln(X_i))^0.5. Consequently, z_2 - z_1 = (ln(x_i_2) - ln(x_i_1))/V(ln(X_i))^0.5, i.e. V(ln(X_i)) = (ln(x_i_2/x_i_1)/(z_2 - z_1))^2. Since the sum of 2 independent normal distributions is also normal, Y = X_1 X_2 is lognormal. So, if there is also a probability of p_1 and p_2 that ln(Y) is no larger than ln(y_1) and ln(y_2), V(ln(Y)) = (ln(y_2/y_1)/(z_2 - z_1))^2. Since V(ln(Y)) = V(ln(X_1)) + V(ln(X_2)) if X_1 and X_2 are independent, denoting by r_i the ratio between x_i_2 and x_i_1, (ln(y_2/y_1)/(z_2 - z_1))^2 = (ln(r_1)/(z_2 - z_1))^2 + (ln(r_2)/(z_2 - z_1))^2, i.e. y_2/y_1 = e^((ln(r_1)^2 + ln(r_2)^2)^0.5). As a side note, if Y = X_1 X_2 … X_N, and r_i = r, y_2/y_1 = r^(N^0.5).
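A quick Monte Carlo sanity check of this formula, using two hypothetical sigmas and numpy (the means are arbitrary, as the ratio does not depend on them):

```python
import numpy as np

# Two independent lognormals with hypothetical sigmas.
rng = np.random.default_rng(0)
s1, s2, n = 1.0, 0.5, 10**6
x1 = rng.lognormal(mean=0.0, sigma=s1, size=n)
x2 = rng.lognormal(mean=0.0, sigma=s2, size=n)

def ratio_95_5(x):
    # Ratio between the 95th and 5th percentile of a sample.
    return np.percentile(x, 95) / np.percentile(x, 5)

r1, r2 = ratio_95_5(x1), ratio_95_5(x2)
predicted = np.exp((np.log(r1)**2 + np.log(r2)**2)**0.5)
empirical = ratio_95_5(x1 * x2)
# predicted and empirical should agree up to Monte Carlo noise.
```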

  46. ^

     “Probability of having more than N deaths before 2050” = “probability of at least one offensive nuclear detonation before 2050”*“probability of having more than N deaths before 2050 given at least one offensive nuclear detonation before 2050” => 1 - “quantile of N deaths before 2050” = “probability of at least one offensive nuclear detonation before 2050”*(1 - “quantile of N deaths given at least one offensive nuclear detonation before 2050”) <=> “quantile of N deaths given at least one offensive nuclear detonation before 2050” = 1 - (1 - “quantile of N deaths before 2050”)/“probability of at least one offensive nuclear detonation before 2050”.
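The rearranged formula can be sketched as a small helper, with purely hypothetical numbers:

```python
def conditional_quantile(q_unconditional, p_detonation):
    """Quantile of N deaths given at least one offensive nuclear detonation
    before 2050, from the unconditional quantile and the probability of at
    least one such detonation."""
    return 1 - (1 - q_unconditional) / p_detonation

# Hypothetical example: a 90 % unconditional quantile and a 40 % chance of
# at least one detonation imply a 75 % conditional quantile.
print(conditional_quantile(0.9, 0.4))  # 0.75
```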

  47. ^

     Calculated here via beta.ppf(0.844, alpha, beta_).

  48. ^

     Because E(“cost-effectiveness”) = E(“lives saved”/“cost”) = E(“lives saved”)/(1/E(1/“cost”)) if lives saved and cost are independent, as assumed in Denkenberger 2016.

  49. ^

     David confirmed it should be research.

  50. ^

     Ideally, I should have relied on healthy life expectancy at the mean age (not median), but I did not easily find data for it.

  51. ^

     Ideally, I should have focussed on healthy life expectancy at 33.4 years old (median age projected for 2037), but I did not easily find data for global healthy life expectancy at adult ages.

  52. ^

     From Table S2 of Xia 2022, calorie production in Australia “from the major food crops (maize, rice, soybean and spring wheat) and marine fish in Year 2” for 150 Tg of soot injected into the stratosphere would be 24.2 % higher than without any soot. This illustrates the comparatively high resilience of Australia against abrupt sunlight reduction scenarios.

  53. ^

     Yes, I was considering just 1 nuclear detonation.

  54. ^

     Me: “Is this post also conditional on at least one offensive nuclear detonation in the US or Russia?”. Luisa: “Yes”.

  55. ^

     Luisa attributed an expected harm of 12 (on her scale) to nuclear wars between not only NATO (including the United States) and Russia, but also India and Pakistan. The expected harm was calculated from the sum of 5 factors, each ranging from 1 to 3: the number of nuclear warheads of countries 1 and 2, the populations of countries 1 and 2, and the median probability of nuclear war between countries 1 and 2 over the next 20 years.

  56. ^

     Calculated here from 0.103*(1 - beta.cdf(0.410, alpha, beta_)).

  57. ^

     In addition, I am overstating the difference between my results and Luisa’s because her estimates are conditional on at least one offensive nuclear detonation in the United States or Russia, which arguably reflects higher escalation potential than at least one offensive nuclear detonation globally (which is what I considered).

  58. ^

     Calculated here from 0.0330*(1 - beta.cdf(0.5, alpha, beta_)).

  59. ^

     Including by decreasing the risk of civilisational collapse.

  60. ^

     By accelerating economic growth, I mean increasing longterm cumulative economic output.

  61. ^

     Calculated here via beta.ppf(“quantile (0.5, 0.9, 0.99 or 0.999)”, alpha, beta_).

  62. ^

     If this is the case, the longterm value of saving a life after a population loss of 90 % is 10 times that of doing it now, and so on. Consequently, the decrease in longterm value due to lost economic output for a certain population loss is proportional to the number of factors of 10 by which the population falls, i.e. to log10(1/(1 - “population loss”)). In other words, going from 8 billion people to 800 million is as bad as going from that to 80 million, and so on. Analogously, marginal increases in wealth leading to marginal increases in welfare which are inversely proportional to wealth (and proportional to the increase in wealth) imply that going from 1 k$/year to 10 k$/year is as good as going from 10 k$/year to 100 k$/year. If roughly all longterm value is lost in the process of going from 8 billion to 800 people, there would be an absolute reduction of 1/7 (= 1/log10(8*10^9/800)) of the initial longterm value for each decrease by a factor of 10 of the population. So 90 %, 99 % and 99.9 % population losses would imply a decrease in longterm value of 14.3 % (= 1/7), 28.6 % (= 2/7), and 42.9 % (= 3/7). The assumption of the longterm value of saving lives being inversely proportional to population size is informed by the following passage of Carl Shulman’s post on the flow-through effects of saving a life:

    For example, suppose one saved a drowning child 10,000 years ago, when the human population was estimated to be only in the millions. For convenience, we'll posit a little over 7 million, 1/1000th of the current population. Since the child would add to population pressures on food supplies and disease risk, the effective population/economic boost could range from a fraction of a lifetime to a couple of lifetimes (via children), depending on the frequency of famine conditions. Famines were not annual and population fluctuated on a time scale of decades, so I will use 20 years of additional life expectancy.

    So, for ~ 20 years the ancient population would be 1/7,000,000th greater, and economic output/technological advance. We might cut this to 1/10,000,000 to reflect reduced availability of other inputs, although increasing returns could cut the other way. Using 1/10,000,000 cumulative world economic output would reach the same point ~ 1/500,000th of a year faster. An extra 1/500,000th of a year with around our current population of ~7 billion would amount to an additional ~14,000 life-years, 700 times the contemporary increase in life years lived. Moreover, those extra lives on average have a higher standard of living than their ancient counterparts.

    Readers familiar with Nick Bostrom's paper on astronomical waste will see that this is a historical version of the same logic: when future populations will be far larger, expediting that process even slightly can affect the existence of many people. We cut off our analysis with current populations, but the greater the population this growth process will reach, the greater long-run impact of technological speedup from saving ancient lives.
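A minimal sketch of the model above, assuming (as in the footnote) that roughly all longterm value is lost going from 8 billion to 800 people, i.e. over 7 factors of 10:

```python
import math

def value_lost(population_loss):
    # Fraction of longterm value lost for a given population loss, under the
    # footnote's assumption that each factor of 10 of population lost
    # destroys 1/7 of the initial longterm value.
    return math.log10(1 / (1 - population_loss)) / 7

for loss in (0.9, 0.99, 0.999):
    print(loss, value_lost(loss))  # 1/7, 2/7 and 3/7
```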

  63. ^

     In reality, the longterm value of saving lives due to accelerating economic growth is also proportional to the longterm annual value. This would presumably decrease for higher famine death rate due to the climatic effects, since full recovery is not guaranteed, so I am overestimating the value of accelerating growth.

  64. ^

     Calculated here via beta.pdf(“90th/99th/99.9th famine death rate due to the climatic effects”, alpha, beta_)/beta.pdf(“median famine death rate due to the climatic effects”, alpha, beta_).

  65. ^

     Probability density times value.

  66. ^

     Calculated here from beta.cdf(0.0443, alpha, beta_).

  67. ^

     If the expected value density of saving an additional life increased with mortality, improving worst case outcomes would be a comparatively better proxy for maximising the overall expected value of improving the longterm future via accelerating economic growth, and therefore the maxipok rule would be more applicable.

  68. ^

     Calculated from the data here taking the derivative of the famine death rate due to the climatic effects with respect to the quantile. For example, to obtain the PDF for the 90th percentile deaths, I used (“90.01th percentile famine death rate due to the climatic effects” - “89.99th percentile famine death rate due to the climatic effects”)/(0.9001 - 0.8999).

  69. ^

     I am against violence to the point that I wonder whether it would be good to not only stop militarily supporting Ukraine, but also impose economic sanctions on it proportional to the deaths in the Russo-Ukrainian War. I guess supporting Ukrainian nonviolent civil resistance in the face of war might be better to minimise both nearterm and longterm war deaths globally, although I have barely thought about this. If you judge my views on this to be super wrong, please beware the horn effect before taking conclusions about other points I have made.

  70. ^

     My number is based on the conditions of broilers in a reformed scenario.

  71. ^

     I still believe it would be desirable to eventually stop factory-farming. Even if the animal lives had become good, there would arguably be more effective ways of increasing welfare.

  72. ^

     Animals are not an efficient way of producing food. Consequently, to increase food supply, their consumption would be reduced, and animal feed directed to humans.

  73. ^

     I shared my thoughts on the study.

  74. ^

     This is essentially because, in my model, I assume a 25 % chance the future would be worse for higher socioeconomic indices (details here).

  75. ^

     It could be worth as much as the equivalent of 10^54 human lives according to Table 1 of Newberry 2021.

  76. ^

     Famines tend to happen in low income countries (see chart here).

  77. ^

     Nevertheless, current spending may overestimate the neglectedness of decreasing famine deaths due to the climatic effects of nuclear war.

  78. ^

     The names are ordered alphabetically.


Denkenberger @ 2023-10-15T05:48 (+12)

I thought this was comprehensive, and it was clever how you avoided doing a Monte Carlo simulation for most of the variables. The expected amount of soot to the stratosphere was similar to my and Luisa's numbers for a large-scale nuclear war. So the main discrepancies are the expected number of fatalities and the impact on the long-term future.

From Figure 4 of Wagman 2020, the soot injected into the stratosphere for an available fuel per area of 5 g/cm^2 is negligible[14].

At 5 g/cm^2, still most of the soot makes it into the upper troposphere, so I think much of that would eventually go to the stratosphere. Furthermore, forest fires are typically less than 5 g/cm^2, and they are moving front fires rather than firestorms, and yet still some of the soot makes it into the stratosphere. In addition, some countervalue targets would be in cities with higher g/cm^2. Since you found the counterforce detonations were ~4x as numerous, with ~1/7 the fuel loading, if the soot-to-stratosphere percentage was ~1/3x, that would be ~20% as much soot to the stratosphere as from the countervalue detonations.


From Fig. 5b of Xia 2022, for the case in which there is no international food trade, all livestock grain is fed to humans, and there is no food waste (top line), adjusted to include international food trade dividing by 94.8 % food support for no international food trade nor climatic effects, there are no deaths for 10.5 Tg[39]. I guess the societal response will have an effect equivalent to assuming international food trade, all livestock grain being fed to humans, and no food waste (see next section), so I supposed the famine deaths due to the climatic effects are negligible up to the climate change induced by 10.5 Tg of soot being injected into the stratosphere in Xia 2022.

…

Nevertheless, I am not trying to estimate all famine deaths. I am only attempting to arrive at the famine deaths due to the climatic effects, not those resulting directly or indirectly from infrastructure destruction. I expect this will cause substantial disruptions to international food trade.

I do think there will be significant disruptions in trade due to the infrastructure destruction. But I also think perhaps the majority of the disruption to food trade in particular would be due to the climate impacts on the nontarget countries, which is the majority of the food production. Furthermore, the climate impacts make the overall catastrophe significantly worse, so I think they will increase the chances significantly of the loss of nearly all trade (not just food). This is a major reason why I expect significantly higher mortality due to climate impacts.

 

This is because Toon 2008:

    Assumed regions were targeted in decreasing order of population [and therefore soot injected into the stratosphere] within 5.25 km of ground zero

I do not endorse this assumption.

Why do you not endorse this for countervalue targeting?

Mitigating starvation after a population loss of 50 % does not seem that different from saving a life now, and I estimate a probability of 3.29*10^-6 of such a loss due to the climatic effects of nuclear war before 2050[58].

Your model of the long-term future impact does not incorporate potential cascading impacts associated with catastrophes, which is why you find the marginal value of saving a life in a catastrophe not very different than saving a single life with mosquito bed nets. This is probably the largest crux. With the potential for collapse of nearly all trade (not just food), I think there is potential for collapse of civilization, from which we may not recover. But even if there is not collapse of civilization, I think there's a significant chance that worse values end up in AGI.


Nonetheless, I believe it would be a surprising and suspicious convergence if broadly decreasing starvation due to the climatic effects of nuclear war was among the most cost-effective interventions to increase democracy levels, or positively shape the development of transformative artificial intelligence (TAI). 

 

I think there is a high correlation between saving lives in a catastrophe and improving the long run future. This is probably clearest in the case of reducing the probability of collapse of civilization. Though resilient foods have a longer causal chain to democracy than working directly on democracy, resilient foods are many orders of magnitude more neglected, so it seems at least plausible to me. As for TAI, resilient foods are still orders of magnitude more neglected, which is why my paper indicates they likely have higher long-term cost effectiveness compared to direct work on TAI (or competitive even if one reduced the cost effectiveness of resilient foods by 3 orders of magnitude).

bean @ 2023-10-16T12:23 (+3)

Why do you not endorse this for countervalue targeting?

Because that kind of countervalue targeting isn't a thing.  I intend to write on this more, but there tends to be a lot of equivocation here between countervalue as "nuclear weapons fired at targets which are not strictly military" and countervalue as "nuclear weapons fired to kill as many civilians as possible".  The first kind absolutely exists, although I find the countervalue framing unhelpful.  The second doesn't in a large-scale exchange, because frankly there's no world in which you aren't better off aiming those same weapons at industrial targets.  You get a greater effect on the enemy's ability to make war, and because industrial targets tend to be in cities and have a lot of people around them, you will undoubtedly kill enough civilians to accomplish whatever can be accomplished by killing civilians, and the other side knows it.  

The partial exception to this is if you're North Korea or equivalent, and don't have enough weapons to make a plausible dent in your opponent's industry.  In that case, deterrence through "we will kill a lot of your civilians" makes sense, but note that the US was pretty safely deterred by 6 weapons, which is way less than discussed here.  

Denkenberger @ 2023-10-17T02:07 (+8)

Both sides targeted civilians in WWII. Hopefully that is not the case now, but I'm not sure.

Vasco Grilo @ 2023-10-15T10:55 (+2)

Thanks for commenting, David!

The expected amount of soot to the stratosphere was similar to my and Luisa's numbers for a large-scale nuclear war.

I think this is true for your analysis (Denkenberger 2018), whose "median [soot injection into the stratosphere] is approximately 30 Tg" (and the mean is similar?). However, I do not think it holds for Luisa's post. My understanding is that Luisa expects an injection of soot into the stratosphere of 20 Tg conditional on one offensive nuclear detonation in the United States or Russia, not a large nuclear war. I expect roughly the same amount of soot (22.1 Tg) conditional on a large nuclear war (at least 1.07 k offensive nuclear detonations).

At 5 g/cm^2, still most of the soot makes it into the upper troposphere, so I think much of that would eventually go to the stratosphere. Furthermore, forest fires are typically less than 5 g/cm^2, and they are moving front fires rather than firestorms, and yet still some of the soot makes it into the stratosphere. In addition, some countervalue targets would be in cities with higher g/cm^2. Since you found the counterforce detonations were ~4x as numerous, with ~1/7 the fuel loading, if the soot-to-stratosphere percentage was ~1/3x, that would be ~20% as much soot to the stratosphere as from the countervalue detonations.

Eyeballing the 3rd subfigure of Figure 4 of Wagman 2020, 90 % of the emitted soot is injected below:

  • 3.5 km for 1 g/cm^2.
  • 12.5 km for 5 g/cm^2.

I got a fuel load of 3.07 g/cm^2 for counterforce. Linearly interpolating between the first 2 data points above, I would conclude 90 % of the soot emitted due to counterforce detonations is injected below 8 km (= (3.5 + 12.5)/2; this is the value for 3 g/cm^2), and only 10 % above this height. It is also worth noting that not all soot going into the upper troposphere would go on to the stratosphere. Robock 2019 assumed only half did in the context of city fires in World War II:

Because the city fires were at nighttime and did not always persist until daylight, and because some of the city fires were in the spring, with less intense sunlight, we estimate that L ["fraction lofted from the upper troposphere into the lower stratosphere"] is about 0.5

So I think the factor of 1/3 in your BOTEC should be lower, maybe 1/6? In that case, I would only be underestimating the amount of soot by 10 %, which is a small factor in the context of the large uncertainty involved (my 95th percentile famine deaths due to the climatic effects is 62.3 times my best guess). In addition, I suspect I am underestimating the amount of soot injected into the stratosphere from countervalue detonations due to assuming no overlap between their burned areas.

I do think there will be significant disruptions in trade due to the infrastructure destruction. But I also think perhaps the majority of the disruption to food trade in particular would be due to the climate impacts on the nontarget countries, which is the majority of the food production. Furthermore, the climate impacts make the overall catastrophe significantly worse, so I think they will increase the chances significantly of the loss of nearly all trade (not just food). This is a major reason why I expect significantly higher mortality due to climate impacts.

Note that I am neglecting disruptions to international food trade caused by climatic effects not just because I expect infrastructure destruction to be the major driver of the loss of trade, but also to counteract other factors:

There would be disruptions to international food trade. I only assumed it would not in order to compensate for other factors, and because I guess it would mostly be a direct or indirect consequence of infrastructure destruction, not the climatic effects I am interested in.

For reference, keeping my famine deaths due to the climatic effects negligible up to an injection of soot into the stratosphere of 11.3 Tg, if I had assumed a total loss of international food trade fully caused by the climatic effects, I would have obtained a famine death rate due to the climatic effects of a large nuclear war of 9.40 % (= 1 - (0.993 + (0.902 - 0.993)/(24.6 - 14.6)*(18.7 - 14.6))*0.948), i.e. 2.12 (= 0.0940/0.0443) times my value of 4.43 %. For arguably more reasonable assumptions of 50 % loss of international food trade, and 50 % of this loss being caused by the climatic effects, linearly interpolating, the increase in the death rate would be 25 % (= 0.5^2). So the new death rate would be 5.67 % (= 0.0443 + (0.0940 - 0.0443)*0.25), i.e. 1.28 (= 0.0567/0.0443) times my value. In reality, I think I would get a value higher than 5.67 % in this case because the minimum injection of soot into the stratosphere to cause non-negligible famine deaths due to the climatic effects would decrease to something like 8.48 Tg (= 11.3*(1 - 0.25)), which would imply more nuclear wars (the ones leading to 8.48 to 11.3 Tg of soot being injected into the stratosphere) contributing to famine deaths due to the climatic effects. However, overall, I do not think this is too important considering the large uncertainties involved in other factors, and that I am overestimating the death rate for other reasons.
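This BOTEC can be sketched in Python. All numbers are the ones quoted above; the variable names are mine:

```python
# Sketch of the BOTEC above, using only this comment's own numbers.
base_rate = 0.0443  # baseline famine death rate due to the climatic effects

# Death rate assuming a total loss of international food trade fully caused
# by the climatic effects, linearly interpolating between the 14.6 and
# 24.6 Tg cases at the mean stratospheric soot of 18.7 Tg.
full_trade_loss = 1 - (0.993 + (0.902 - 0.993) / (24.6 - 14.6) * (18.7 - 14.6)) * 0.948

# 50 % loss of international food trade, with 50 % of that loss caused by
# the climatic effects, so only 25 % of the effect is attributed to them.
fraction_attributed = 0.5 * 0.5
adjusted_rate = base_rate + (full_trade_loss - base_rate) * fraction_attributed

print(round(full_trade_loss, 4))  # 0.094
print(round(adjusted_rate, 4))    # 0.0567
```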

Why do you not endorse this [regions targeted by decreasing order of population] for countervalue targeting?

I have not investigated this, but my intuition is that damage would initially increase superlinearly with detonations (in line with my guess of a logistic curve). Basically, I think it is unlikely that the 1st countervalue detonations in the United States would all hit the metropolitan area of New York City (home to the cities in the United States with the highest population density), and likewise for other countries.

Your model of the long-term future impact does not incorporate potential cascading impacts associated with catastrophes, which is why you find the marginal value of saving a life in a catastrophe not very different from that of saving a single life with mosquito bed nets. This is probably the largest crux. With the potential for collapse of nearly all trade (not just food), I think there is potential for collapse of civilization, from which we may not recover.

I did not explicitly model the cascade effects, but they are included in my largest contributing factor to the uncertainty of my distribution for the famine death rate due to the climatic effects:

100, which is my out of thin air guess for the ratio between the 95th and 5th percentile famine death rate due to the climatic effects for an actual (not expected) injection of soot into the stratosphere of 22.1 Tg.

If it was not for this large uncertainty, high population losses would be even less likely. On the one hand, I do not particularly trust my "out of thin air guess", so I may be underestimating the uncertainty, in which case high population losses would be more likely. On the other hand, I am wary of concluding that activities related to resilient foods are highly cost-effective from a longtermist perspective based on "out of thin air guesses". I believe David Thorstad would call that a regression to the inscrutable, and argue it often contributes towards exaggerating risks. I tend to agree.

I should note regression to the inscrutable is present not only in longtermist analyses of nuclear risk, but also AI and bio risk. However, significantly more thinking time has been invested into investigating AI, and there is more precedent for large population losses due to pandemics[1]. In addition, AI and bio catastrophes would also have cascade effects.

But even if there is not collapse of civilization, I think there's a significant chance that worse values end up in AGI.

Even if that was true (I do not know), I would expect more targeted interventions in other areas to be more cost-effective.

I think there is a high correlation between saving lives in a catastrophe and improving the long run future. This is probably clearest in the case of reducing the probability of collapse of civilization.

I believe that depends on the details of the catastrophe. Famines have been decreasing due to increased food supply, improved health, reduced poverty, democratization, and reduction in the number of children. Accordingly, I guess most famine deaths due to the climatic effects of nuclear war will be in Sub-Saharan Africa. Although my best guess is that activities to decrease these deaths (e.g. resilient food solutions) would improve the longterm value, I think there is enough uncertainty for me to say it is unclear whether they are beneficial/harmful (in the same way that I say an event may happen or not happen if the probability of occurrence is sufficiently far from 0 and 1). In any case, it is not sufficient to have a high correlation between improving the longterm future and decreasing famine deaths due to the climatic effects via activities related to resilient food solutions:

I do not see those activities being competitive with the best opportunities to decrease AI risk, and improve biosecurity and pandemic preparedness at the margin, like Long-Term Future Fund’s marginal grants.

I think AI and bio catastrophes can more easily involve high population losses in countries with high socioeconomic indices, so the path from decreasing their risk to improving the longterm future seems much more direct to me.

Though resilient foods have a longer causal chain to democracy than working directly on democracy, resilient foods are many orders of magnitude more neglected, so it seems at least plausible to me.

Activities related to resilient food solutions are much more neglected than general efforts to improve food security. However, "resilient democracy solutions" aiming to ensure the continuity of democracy in catastrophes would also be way more neglected than general efforts to improve democracy. To the extent resilient food solutions contribute towards a better longterm future via improving post-catastrophe democracy levels, my guess would be that resilient democracy solutions would achieve that more cost-effectively.

As for TAI, resilient foods are still orders of magnitude more neglected, which is why my paper indicates they likely have higher long-term cost effectiveness compared to direct work on TAI (or competitive even if one reduced the cost effectiveness of resilient foods by 3 orders of magnitude).

I like the paper, and quantification in general. However, I do not trust our ability to directly guess the increase in longterm value due to decreasing famine deaths due to the climatic effects. I think one has to go into the details of the causal chain. I tried to be more explicit about the path to impact, and my current interpretation of the results is that, even in expectation, it is unclear whether resilient foods are good or bad from a longterm perspective (although my best guess is that they are good, as I said above). In your model, the probability of resilient foods being harmful is 0 (although you adjust the cost-effectiveness downwards a little to account for the moral hazard of preparation). More importantly:

  1. ^

    The Black Death "is estimated to have killed 30 per cent to 60 per cent of the European population, as well as approximately 33 per cent of the population of the Middle East".

Denkenberger @ 2023-10-15T20:12 (+4)

In that case, I would only be overestimating the amount of soot by 10 %, which is a small factor in the context of the large uncertainty involved (my 95th percentile famine deaths due to the climatic effects is 62.3 times my best guess).

Do you mean underestimating? I agree that it's not that large of an effect.

For reference, maintaining my famine deaths due to climatic effects negligible up to an injection of soot into the stratosphere of 11.3 Tg, if I had assumed a total loss of international food trade fully caused by the climatic effects, I would have obtained a famine death rate due to the climatic effects of a large nuclear war of 5.78 % (= 1 - (0.993 + (0.902 - 0.993)/(24.6 - 14.6)*(14.5 - 14.6))*0.948), i.e. 1.30 (= 0.0578/0.0443) times my value of 4.43 %. For arguably more reasonable assumptions of 50 % loss of international food trade, and 50 % of it being caused by the climatic effects, linearly interpolating, the increase in the death rate would be 25 % (= 0.5^2). So the new death rate would be 4.77 % (= 0.0443 + (0.0578 - 0.0443)*0.25), i.e. 1.08 (= 0.0477/0.0443) times my value.

The total loss of international food trade would cause 5.2% of all to die in Xia 2022. So it seems like attributing this all to the climatic effects would increase your death rate by 5.2 percentage points. But digging in deeper, since you are using the gray dotted line in figure 5B corresponding to no human edible food fed to animals and zero waste, if you plugged in a value of 5 Tg, you would say that that amount of soot would actually decrease mortality relative to no food trade and 0 Tg. So clearly that no trade case is not the scenario of no human edible food fed to animals and zero waste (I couldn't find quickly what exactly their assumptions were for that case). I understand that you are picking the no human edible food fed to animals and zero waste scenario because you think other factors would compensate for this optimism. But I think it is particularly inappropriate for the relatively small amounts of Tg.

Vasco Grilo @ 2023-10-15T21:32 (+2)

Do you mean underestimating? I agree that it's not that large of an effect.

Thanks! I have now changed "overestimating" to "underestimating".

The total loss of international food trade would cause 5.2% of all to die in Xia 2022. So it seems like attributing this all to the climatic effects would increase your death rate by 5.2 percentage points. But digging in deeper, since you are using the gray dotted line in figure 5B corresponding to no human edible food fed to animals and zero waste, if you plugged in a value of 5 Tg, you would say that that amount of soot would actually decrease mortality relative to no food trade and 0 Tg. So clearly that no trade case is not the scenario of no human edible food fed to animals and zero waste

The BOTEC related to this in my comment had an error[1]. I have now corrected it in my comment above:

For arguably more reasonable assumptions of 50 % loss of international food trade, and 50 % of it being caused by the climatic effects, linearly interpolating, the increase in the death rate would be 25 % (= 0.5^2). So the new death rate would be 5.67 % (= 0.0443 + (0.0940 - 0.0443)*0.25), i.e. 1.28 (= 0.0567/0.0443) times my value.

It is still the case that I would get a negative death rate inputting 5 Tg into my formula. However, I am linearly interpolating, and the formula is only supposed to work for a mean stratospheric soot until the end of year 2 between 14.6 and 24.6 Tg, which excludes 5 Tg. I am approximating the logistic function describing the famine deaths due to the climatic effects as being null up to an injection of soot into the stratosphere of 11.3 Tg.

I couldn't find quickly what exactly their assumptions were for that [no international food trade nor climatic effects] case

From the legend of Figure 5:

The blue line in b shows the percentage of population that can be supported by current food production when food production does not change but international trade is stopped.

So my interpretation is that the blue line corresponds to no livestock grain fed to humans and current household food waste (in 2010), but without international food trade. I have clarified this in the post. Ideally, instead of adjusting the top line of Figure 5b to include international food trade, I would rely on scenarios accounting for both climatic effects and no loss of international food trade, but Xia 2022 does not present results for that.

I understand that you are picking the no human edible food fed animals and zero waste scenario because you think other factors would compensate for this optimism. But I think it is particularly inappropriate for the relatively small amounts of Tg.

I am very open to different views about the famine death rate due to the climatic effects of a large nuclear war. My 95th percentile is 702 times my 5th percentile.

  1. ^

    In the expression "1 - (0.993 + (0.902 - 0.993)/(24.6 - 14.6)*(14.5 - 14.6))*0.948", 14.5 should have been 18.7. The calculation of the death rate in the post was correct, but it had the same typo in the formula, which I have now corrected.

Denkenberger @ 2023-10-16T06:09 (+4)

For arguably more reasonable assumptions of 50 % loss of international food trade, and 50 % of it being caused by the climatic effects, linearly interpolating, the increase in the death rate would be 25 % (= 0.5^2). So the new death rate would be 5.67 % (= 0.0443 + (0.0940 - 0.0443)*0.25), i.e. 1.28 (= 0.0567/0.0443) times my value.

Half of the impact of the total loss of international food trade would cause 2.6% to die according to Xia 2022. So why is it not 4.43%+2.6% = 7.0% mortality?

It is still the case that I would get a negative death rate inputting 5 Tg into my formula. However, I am linearly interpolating, and the formula is only supposed to work for a mean stratospheric soot until the end of year 2 between 14.6 and 24.6 Tg, which excludes 5 Tg. I am approximating the logistic function describing the famine deaths due to the climatic effects as being null up to an injection of soot into the stratosphere of 11.3 Tg.

I see how you avoid the negative death rate by not considering 5 Tg. However, this does not address the issue that your comparison is not fair, which is exposed by the fact that if you did put in 5 Tg, you would get negative death rate.

So my interpretation is that the blue line corresponds to no livestock grain fed to humans and current food waste (in 2010), but without international food trade.

I think that is a reasonable assumption, as then the mortality due to 5 Tg alone (no trade in both cases) is ~2% (not a reduction in mortality).

Ideally, instead of adjusting the top line of Figure 5b to include international food trade, I would rely on scenarios accounting for both climatic effects and no loss of international food trade, but Xia 2022 does not present results for that.

One logically consistent way of doing it would be taking the difference between the blue and dark red lines, because they are comparable scenarios. I agree that no reduction in waste or food fed to animals is too pessimistic, but maybe you could do sensitivity on the scenario? Because even though I think that particular scenario is unlikely, I do think that cascading risks including loss of much of nonfood trade could very well increase mortality to these levels.

I am very open to different views about the famine death rate due to the climatic effects of a large nuclear war. My 95th percentile is 702 times my 5th percentile.

That is true, but if you had significant probability mass on the scenarios where people react very suboptimally, then your mean mortality would be a lot higher.

Vasco Grilo @ 2023-10-16T09:43 (+2)

Half of the impact of the total loss of international food trade would cause 2.6% to die according to Xia 2022. So why is it not 4.43%+2.6% = 7.0% mortality?

In my BOTEC with "arguably more reasonable assumptions", I am assuming just a 50 % reduction in international food trade, not 100 %.

I see how you avoid the negative death rate by not considering 5 Tg. However, this does not address the issue that your comparison is not fair, which is exposed by the fact that if you did put in 5 Tg, you would get negative death rate.

My famine deaths due to the climatic effects are a piecewise linear function which is null up to a soot injection into the stratosphere of 11.3 Tg. So, if one inputs 5 Tg into the function, the output is 0 famine deaths due to the climatic effects, not negative deaths. One gets negative deaths inputting 5 Tg into the pieces of the function corresponding to higher levels of soot because after a certain point (namely when everyone is fed), more food does not decrease famine deaths. My assumptions of no household food waste and feeding all livestock grain to humans would not make sense for low levels of soot, as I guess roughly everyone would be fed even without going all in on these mitigation measures in those cases. In any case, I agree I am underestimating famine deaths due to the climatic effects for 5 Tg. My piecewise linear function is an approximation of a logistic function, which is always positive.
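A minimal sketch of this piecewise linear approximation, assuming the 11.3 Tg threshold and the 4.43 % best guess at 18.7 Tg of mean stratospheric soot can be placed on one axis (a simplification, since the thread distinguishes initial injections from mean stratospheric soot):

```python
def famine_death_rate(soot_tg, t0=11.3, x_ref=18.7, y_ref=0.0443):
    """Piecewise linear approximation of the logistic curve discussed above:
    null up to t0 Tg of soot, then linear through the best-guess point
    (x_ref Tg, y_ref). Combining the injection threshold with the mean
    stratospheric soot calibration point is a simplification."""
    if soot_tg <= t0:
        return 0.0
    return (soot_tg - t0) / (x_ref - t0) * y_ref

print(famine_death_rate(5))     # 0.0, not a negative death rate
print(famine_death_rate(18.7))  # 0.0443
```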

One logically consistent way of doing it would be taking the difference between the blue and dark red lines, because they are comparable scenarios. I agree that no reduction in waste or food fed to animals is too pessimistic, but maybe you could do sensitivity on the scenario? Because even though I think that particular scenario is unlikely, I do think that cascading risks including loss of much of nonfood trade could very well increase mortality to these levels.

I am happy to describe what happens in a very worst case scenario, involving no adaptations, and no international food trade. Eyeballing the bottom line of Figure 5b, the famine death rate due to the climatic effects for my 22.1 Tg would be around 25 %. In this case, the probability of 50 % famine deaths due to the climatic effects of nuclear war before 2050 would be 0.614 %, i.e. 1.87 k (= 0.00614/(3.29*10^(-6))) times as likely as my best guess.

I must note that, under the above assumptions, activities related to resilient food solutions would have cost-effectiveness 0, as one would be assuming no adaptations. In general, I do not think it is obvious whether the cost-effectiveness of decreasing famine deaths due to the climatic effects at the margin increases/decreases with mortality. The cost-effectiveness of saving lives is negligible for negligible mortality and sufficiently high mortality, and my model assumes cost-effectiveness increases linearly with mortality, but I wonder what is the death rate for which cost-effectiveness is maximum.

In my 1st reply, I said "AI and bio catastrophes would also have cascade effects". Relatedly, how society reacts affects all types of catastrophes, not just nuclear winter. So, if one expects interventions decreasing famine deaths in a nuclear winter to be more cost-effective due to the possibility of society reacting badly, one should also expect interventions mitigating the risks of AI and bio catastrophes to be more cost-effective.

That is true, but if you had significant probability mass on the scenarios where people react very suboptimally, then your mean mortality would be a lot higher.

I would say we have strong evidence that animal consumption would decrease in a nuclear winter because prices would go up, and meat is much more expensive than grain. More broadly, as I said in the post:

It is quite easy for an apparently reasonable distribution to have a nonsensical right tail which drives the expected value upwards.

Denkenberger @ 2023-10-17T02:00 (+9)

Half of the impact of the total loss of international food trade would cause 2.6% to die according to Xia 2022. So why is it not 4.43%+2.6% = 7.0% mortality?

In my BOTEC with "arguably more reasonable assumptions", I am assuming just a 50 % reduction in international food trade, not 100 %.

That's why I only attributed half of the impact of total loss of international food trade. If I attributed all the impact, it would have been 4.43%+5.2% = 9.6% mortality. I don't see how you are getting 5.67% mortality.

My famine deaths due to the climatic effects are a piecewise linear function which is null up to a soot injection into the stratosphere of 11.3 Tg. So, if one inputs 5 Tg into the function, the output is 0 famine deaths due to the climatic effects, not negative deaths.

My understanding is that you chose this piecewise linear function to be null at 11.3 Tg because that's where the blue and gray dotted lines crossed, meaning that it appeared that the climate impacts did not kill anyone below 11.3 Tg. But what I'm arguing is that those two lines had different assumptions about feeding food to animals and waste, so the conclusion is not correct that there was no climate mortality below 11.3 Tg. And this is supported by the fact that there are currently undernutrition deaths, and any nonzero Tg is likely to increase those deaths.

I am happy to describe what happens in a very worst case scenario, involving no adaptations, and no international food trade. 

There are many ways that things could go worse than that scenario. As I have mentioned, there could be reductions in nonfood trade, such as fertilizers, pesticides, agricultural equipment, energy, etc. There could be further international conflict. There could be civil unrest in countries and a breakdown of the rule of law. If there is loss of cooperation outside of people known personally, it could mean a return to foraging, or ~99.9% mortality if we returned to the population of the last time we were all hunter-gatherers. But it could be worse than this given that people initially would not be very good foragers, the climate would be worse, and we could cause a lot of extinctions during the collapse. The very worst case scenario is one where there is insufficient food such that, if it were divided equally, everyone would starve to death.

I must note that, under the above assumptions, activities related to resilient food solutions would have cost-effectiveness 0, as one would be assuming no adaptations. 

I certainly agree that there would be some reduction in human edible food fed to animals and food waste before there will be large-scale deployment of resilient foods. But what I'm arguing is that the baseline expected mortality without significant preparation on resilient foods could be 25% because of a combination of factors listed above. Furthermore, I think that preparation involving planning and piloting of resilient foods would make it less likely that we fall into some of the terrible situations above.

In general, I do not think it is obvious whether the cost-effectiveness of decreasing famine deaths due to the climatic effects at the margin increases/decreases with mortality. The cost-effectiveness of saving lives is negligible for negligible mortality and sufficiently high mortality, and my model assumes cost-effectiveness increases linearly with mortality, but I wonder what is the death rate for which cost-effectiveness is maximum.

As above, even if the baseline expectation were extinction, there could be high cost effectiveness of saving lives from resilient foods by shifting us away from that scenario, so I disagree with "The cost-effectiveness of saving lives is negligible for ... sufficiently high mortality."

Vasco Grilo @ 2023-10-17T09:59 (+2)

That's why I only attributed half of the impact of total loss of international food trade. If I attributed all the impact, it would have been 4.43%+5.2% = 9.6% mortality. I don't see how you are getting 5.67% mortality.

I was assuming 50 % reduction in international trade, and 50 % of that reduction being caused by climatic effects, so only 25 % (= 0.5^2) caused by climatic effects. I have changed "50 % of it" to "50 % of this loss" in my original reply to clarify.

My understanding is that you chose this piecewise linear function to be null at 11.3 Tg because that's where the blue and gray dotted lines crossed, meaning that it appeared that the climate impacts did not kill anyone below 11.3 Tg.

Yes, that is quite close to what I did. The lines you describe intersect at 10.5 Tg, but I used 11.3 Tg because I believe Xia 2022 overestimates the duration of the climatic effects.

But what I'm arguing is that those two lines had different assumptions about feeding food to animals and waste, so the conclusion is not correct that there was no climate mortality below 11.3 Tg. And this is supported by the fact that there are currently under nutrition deaths

I was guessing this does not matter much because I think the famine deaths for 0 Tg for the following cases are similar:

  • No international food trade, and current food production. This matches the blue line of Fig. 5b I used to adjust the top line to include international food trade, and corresponds to 5.2 % famine deaths.
  • No international food trade, all livestock grain fed to humans, and no household food waste. This is the case I should ideally have used to adjust the top line, and corresponds to less than 5.2 % famine deaths.

Since the 2nd case has fewer famine deaths, I am overestimating the effect of having international food trade, thus underestimating famine deaths. My guess for the effect being small stems from, in Fig. 5b, the cases for which there are climatic effects (5 reddish lines, and 2 greyish lines) all seemingly converging as the soot injected into the stratosphere tends to 0 Tg:

Fig. 5

The convergence of the reddish and greyish lines makes intuitive sense to me. If it was possible now to, without involving international food trade, decrease famine deaths by feeding livestock grain to humans or decreasing household food waste, I guess these would have already been done. I assume countries would prefer fewer famine deaths over greater animal consumption or household food waste.

there are currently undernutrition deaths, and any nonzero Tg is likely to increase those deaths.

I guess famine deaths due to the climatic effects are described by a logistic function, which is a strictly increasing function, so I agree with the above. However, I guess the increase will be pretty small for low levels of soot.

There are many ways that things could go worse than that scenario. As I have mentioned, there could be reductions in nonfood trade, such as fertilizers, pesticides, agricultural equipment, energy, etc. There could be further international conflict. There could be civil unrest in countries and a breakdown of the rule of law. If there is loss of cooperation outside of people known personally, it could mean a return to foraging, or ~99.9% mortality if we returned to the population of the last time we were all hunter-gatherers. But it could be worse than this given that people initially would not be very good foragers, the climate would be worse, and we could cause a lot of extinctions during the collapse. The very worst case scenario is one where there is insufficient food such that, if it were divided equally, everyone would starve to death.

There are reasons pointing in the other direction too. In general, I think further, more empirical investigation usually leads to lower risk estimates (cf. John Halstead's climate change and longtermism report). I am trying to update all the way now (relatedly), such that I do not (wrongly) expect risk to decrease (the rational thing is expecting best guesses to stay the same, although this is still compatible with a higher than 50 % chance of the best guess decreasing).

As above, even if the baseline expectation were extinction, there could be high cost effectiveness of saving lives from resilient foods by shifting us away from that scenario, so I disagree with "The cost-effectiveness of saving lives is negligible for ... sufficiently high mortality."

I just meant the cost-effectiveness of saving lives tends to 0 as the expected population loss (accounting for preparation, response and resilience) tends to 100 %. An expected population loss of exactly 100 % means extinction with 100 % probability, in which case there is no room to save lives (nor to avoid extinction). Of course, this is a very extreme, unrealistic case, but it illustrates cost-effectiveness will start decreasing at some point, so "I wonder what is the death rate for which cost-effectiveness is maximum". One way of thinking about it is that, although importance always increases with mortality, the decrease in tractability after a certain point is sufficient for cost-effectiveness to decrease too.

Denkenberger @ 2023-10-18T02:00 (+4)

I was assuming 50 % reduction in international trade, and 50 % of that reduction being caused by climatic effects, so only 25 % (= 0.5^2) caused by climatic effects. I have changed "50 % of it" to "50 % of this loss" in my original reply to clarify.

That makes sense. Thanks for putting the figure in! 

I guess famine deaths due to the climatic effects are described by a logistic function, which is a strictly increasing function, so I agree with the above. However, I guess the increase will be pretty small for low levels of soot.

If it were linear starting at 10.5 Tg and going to 22.1 Tg, versus linear starting at 0 Tg and going to 22 Tg, then I think the integral (impact) would be about four times as much. But I agree if you are going linear from 10.4 Tg versus logistic from 0 Tg, the difference would not be as large. But it still could be a factor of two or three, so I think it's good to run a sensitivity case.

Vasco Grilo @ 2023-10-18T13:06 (+2)

If it were linear starting at 10.5 Tg and going to 22.1 Tg, versus linear starting at 0 Tg and going to 22 Tg, then I think the integral (impact) would be about four times as much.

You are right about that integral, but I do not think that is the relevant BOTEC. What we care about is the mean death rate (for a given input soot distribution), not its integral. For example, for a uniform soot distribution ranging from 0 to 37.4 Tg (= 2*18.7), whose mean matches mine of 18.7 Tg[1], the middle points of the linear parts would be:

  • If the linear part started at 10.5 Tg, 7.27 % (= ((10.5 + 37.4)/2 - 10.5)/(18.7 - 10.5)*0.0443).
  • If the linear part started at 0 Tg, 10.1 % (= ((0 + 37.4)/2 - 0)/(18.7 - 10.5)*0.0443).

So the mean death rates would be:

  • If the linear part started at 10.5 Tg, 5.23 % (= (10.5*0 + (37.4 - 10.5)*0.0727)/37.4).
  • If the linear part started at 0 Tg, 10.1 %.

This suggests famine deaths due to the climatic effects would be 1.93 (= 0.101/0.0523) times as large if the linear part started at 0 Tg.

Another way of running the BOTEC is considering an effective soot level, equal to the soot level minus the value at which the linear part starts. My effective soot level is 8.20 Tg (= 18.7 - 10.5), whereas it would be 18.7 Tg if the linear part started at 0 Tg, which suggests deaths would be 2.28 (= 18.7/8.20) times as large in the latter case. Using a logistic function instead of a linear one, I think the factor would be quite close to 1.
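The first BOTEC above can be reproduced with a short sketch. The uniform soot distribution on [0, 37.4] Tg and the shared slope of the two linear parts are this comment's own assumptions:

```python
# Mean death rate for a uniform soot distribution on [0, 37.4] Tg (mean
# 18.7 Tg), with the linear part of the death-rate curve starting at either
# 10.5 Tg or 0 Tg. Both lines share the slope calibrated to 4.43 % over the
# 10.5 to 18.7 Tg range, as in the comment above.
def mean_death_rate(start_tg, slope=0.0443 / (18.7 - 10.5), upper=37.4):
    # Above start_tg the rate is slope*(soot - start_tg); below it is null.
    midpoint_rate = slope * ((start_tg + upper) / 2 - start_tg)
    return (upper - start_tg) * midpoint_rate / upper

late = mean_death_rate(10.5)   # ~5.23 %
early = mean_death_rate(0)     # ~10.1 %
print(round(early / late, 2))  # 1.93
```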

But I agree if you are going linear from 10.4 Tg versus logistic from 0 Tg, the difference would not be as large. But it still could be a factor of two or three, so I think it's good to run a sensitivity case.

The challenge here is that the logistic function f(x) = a + b/(1 + e^(-k(x - x_0))) has 4 parameters, but I only have 3 conditions, f(0) = 0, f(18.7) = 0.0443, f(+inf) = 1. I think this means I could define the 4th condition such that the logistic function stays near 0 until 10.5 Tg.

Ideally, I would define the logistic function for f(0) = 0 and f(+inf) = 1, and then find its parameters by fitting it to the 16, 27, 37, 47 and 150 Tg cases of Xia 2022 for international food trade, all livestock grain fed to humans, and no household food waste. Then I would use f(18.7) as the death rate. Even better, I would get a distribution for the soot, generate N samples (x_1, x_2, ..., and x_N), and then use (f(x_1) + f(x_2) + ... + f(x_N))/N as the death rate.
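The sampling approach can be sketched as follows. The steepness k and midpoint x_0 are hypothetical placeholder values (this thread does not pin them down), and the uniform soot distribution on [0, 37.4] Tg is borrowed from the earlier BOTEC rather than being an actual fitted soot distribution:

```python
import math
import random

def logistic(x, k, x0):
    # f(x) = a + b/(1 + e^(-k (x - x0))), with a and b pinned down by the
    # conditions f(0) = 0 and f(+inf) = 1, leaving k and x0 free.
    a = -1 / math.exp(k * x0)
    b = 1 - a
    return a + b / (1 + math.exp(-k * (x - x0)))

# Hypothetical steepness and midpoint, for illustration only; chosen so the
# curve stays near 0 below roughly 10 Tg. Ideally these would be fitted to
# the Xia 2022 cases mentioned above.
k, x0 = 0.3, 30

# Death rate as the average of f over soot samples, as described above. The
# soot distribution here is a placeholder (uniform on [0, 37.4] Tg).
random.seed(0)
samples = [random.uniform(0, 37.4) for _ in range(100_000)]
mean_rate = sum(logistic(x, k, x0) for x in samples) / len(samples)
print(mean_rate)
```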

  1. ^

    18.7 Tg is the mean stratospheric soot until the end of year 2 corresponding to an initial injection of 22.1 Tg.

jackva @ 2023-10-14T13:13 (+11)

Thanks for doing this and kudos for publishing results that are in tension with your (occasional) employer.

Vasco Grilo @ 2023-10-15T15:11 (+6)

Hi Johannes,

I have the impression you are quite honest about (not overestimating) the risk from climate change, so thanks for that too!

Christopher Chan @ 2023-10-21T09:52 (+9)

I understand the desire to use cumulative probability to calculate the probability of nuclear war before 2050, but if interdependence of the base rate was not used (i.e. 0.0127 * 26 = 0.33, which is equivalent to Metaculus), shouldn't we already use a Beta conjugate update of the base rate as each year passes by?

- If a detonation does not happen, Beta(1, 79)

- If a detonation happens, Beta(2, 79)

- Annual probability = 0.0127

- Cumulative probability of 21.843 % by 2050
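The conjugate update described above can be sketched as follows, assuming a Beta(1, 79) prior whose posterior gains one more year of no detonations annually (the exact update schedule behind the 21.843 % figure may differ, so the cumulative probability below does not match it exactly):

```python
# Prior: Beta(1, 79), hypothetical, chosen to roughly match the 1.27 % base rate
a, b = 1, 79
p_no_detonation = 1.0
for year in range(2024, 2050):   # 26 years up to 2050
    p_annual = a / (a + b)       # posterior mean annual detonation probability
    p_no_detonation *= 1 - p_annual
    b += 1                       # one more year without a detonation
cumulative = 1 - p_no_detonation  # ≈ 0.248 under these assumptions
```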


I saw you used a Beta distribution for the CDF to constrain the probability of a large nuclear war, defined using the Metaculus question. I agree with this; I think it checks out. I also like that you give less weight to the Metaculus question that asks for a probability distribution, as it will be less accurate than taking the Beta distribution from 100 to 1000. I learnt something about how to evaluate Metaculus questions here:
 

There seem to be 2 sets of questions regarding nuclear impact and winter:

- The Nuclear Risk Horizon Project (no monetary incentive)

- Nuclear Risk Tournament ($2,685.50 reward, ending 1 February 2024)
 

I want to understand: how do you calibrate the monetary incentive and limited time frame when weighing the 2 sets of questions for your research?
 

For example, contrasting these 2 questions, which you have addressed in your post:
 

  1. How many nuclear weapons will be detonated offensively by 2050, if at least one offensive detonation occurs? [HORIZON, non-monetary]
  2. How many non-strategic nuclear weapons will be deployed at the end of 2023? (No recency weighted)? [TOURNAMENT, monetary]
     

The deployment mean is an order of magnitude higher than the predicted detonations. Surely even 100 weapons is a very contained regional war scenario according to Hochmann et al. (2021), and a very constrained exchange between Russia/China and NATO. I would think the former question's prediction is unrealistically low given how many tests just North Korea has conducted recently. I think you have adequately modelled that with your beta distribution, but that will be 3x higher than the latter question's unweighted results, which are about 112 weapons at the median and 161 weapons at the 75th percentile (11 Tg soot), and the 95th percentile of your calculation, 1.81 k, is 3x the latter question's distribution. Do you think there is a need to reconcile that?
 

How do you feel about taking the expected value of such numbers (https://www.metaculus.com/questions/8382/1000-nuke-detonations-cause-4b-deaths/; 4 billion * 0.45) when this seems so far lower than the numbers proposed by more sophisticated modelling, especially the Rutgers team? I am generally going on the heuristic that prediction markets probably have an upper hand in counting weapons and predicting deployment and the number and location of detonations, but not on long drawn-out nuclear winter effects (crop yields, trade, famine numbers).

I still need time to engage with the soot calculation literature, so I will probably write a follow-up on that later next week or the week after, if that's okay; that will give me much more focus on asking the right questions and doing the right research.

Vasco Grilo @ 2023-10-21T10:18 (+2)

Thanks for looking into my post, Chris!

I understand the desire to use cumulative probability to calculate the probability of nuclear war before 2050, but if interdependence of the base rate was not used (i.e. 0.0127 * 26 = 0.33, which is equivalent to Metaculus), shouldn't we already use a Beta conjugate update of the base rate as each year passes by?

Good point! I wonder whether Metaculus' community is taking this into account while thinking the annual risk is higher than the base rate of 1.27 %, such that 33 % until 2050 still makes sense. If Metaculus' community is not taking the above into account, I should have ideally updated their probability downwards.

how do you calibrate the monetary incentive and limited time frame when weighing the 2 sets of questions for your research?

Interesting question! All else equal, I give more weight to questions which have a monetary incentive, and so would tend to rely on those from the Nuclear Risk Tournament over those from The Nuclear Risk Horizon Project. However, questions with monetary incentives are often part of tournaments with quite limited timeframes of a few years, which means extrapolating the results a few decades out may result in poor estimates.

For my case, I do not think I had available forecasts about the number of offensive nuclear detonations conditional on at least one before 2024 (or another close date). If I had, I would have to think about how much weight to give them. In the post, I compared this and this question, but they are both part of the Nuclear Risk Horizon Project, and both have the same timeframe.

How many non-strategic nuclear weapons will be deployed at the end of 2023?

Note this question only refers to deployed nonstrategic weapons. The vast majority of deployed nuclear weapons are strategic. The US had 100 deployed nonstrategic nuclear weapons in 2023, but 1.67 k strategic.

How do you feel about taking the expected value of such numbers (https://www.metaculus.com/questions/8382/1000-nuke-detonations-cause-4b-deaths/; 4 billion * 0.45) when this seems so far lower than the numbers proposed by more sophisticated modelling, especially the Rutgers team?

I estimated 392 M famine deaths due to the climatic effects conditional on at least 1.07 k offensive nuclear detonations, so Metaculus' community prediction of 45 % probability of over 4 billion deaths seems pessimistic. On the other hand, it may be the case that famine deaths due to the climatic effects are only a small fraction of the overall deaths (and Metaculus' prediction refers to the overall deaths). For example, maybe Metaculus' community is thinking that large nuclear wars would happen together with bio or AI great power war, thus predicting a higher death toll.

Xia 2022 presents various death rates in Fig. 5b for various levels of adaptation. Depending on how one thinks the response would go, one can deem the baseline estimates from the Rutgers' team too optimistic/pessimistic. In order to represent the response, I used their scenario for no international food trade, no household food waste, and all livestock grain fed to humans, adjusted to include international food trade without equitable distribution. However, there is huge uncertainty, so I considered famine deaths can vary by a factor of 100 even for fixed soot injected into the stratosphere.

I am generally going on the heuristic that prediction markets probably have an upper hand in counting weapons and predicting deployment and the number and location of detonations, but not on long drawn-out nuclear winter effects (crop yields, trade, famine numbers).

I think prediction markets often add value by estimating the likelihood of various scenarios. For example, Xia 2022 presents results for:

  • Various nuclear wars (from 5 Tg to 150 Tg), but does not mention the likelihood of each one of them. 
  • The number of deaths conditional on various responses, but does not provide a best guess for the mortality corresponding to a best guess response.

I still need time to engage with the soot calculation literature, so I will probably write a follow-up on that later next week or the week after, if that's okay; that will give me much more focus on asking the right questions and doing the right research.

Sure!

Vasco Grilo @ 2023-12-02T22:56 (+7)

I think I may well have substantially overestimated the famine deaths due to the climatic effects. I had not noted that Xia 2022's Fig. 5a provides data for what would happen in a nuclear winter with international food trade and equitable distribution of food (lesson: read crucial papers more carefully!).

Fig. 5

In a 47 Tg scenario with a) equitable distribution of food, b) international food trade, c) all human edible livestock feed fed to humans, and d) no household food waste, calorie consumption in the worst year would be around 2 k kcal/person/day (top tick of the 3rd bar from the right), which is to say famine deaths due to the climatic effects would be negligible up to 47 Tg given a) to d). In my analysis, I assumed adaptation would be as good as b) to d), and that famine deaths due to the climatic effects would be negligible up to 10.5 Tg in Xia 2022's climate model. So I implicitly assumed non-equitable distribution of food would increase the threshold for significant starvation by 36.5 Tg (= 47 - 10.5) in Xia 2022's climate model. This seems like an overestimate of the negative effect of non-equitable distribution of food, so I believe I overestimated famine deaths due to the climatic effects.

Vasco Grilo @ 2024-01-10T22:53 (+4)

I investigated the relationship between the burned area and yield a little, but, as I said just above, I do not think it is that important whether the area scales with yield to the power of 2/3 or 1. Feel free to skip to the next section. In short, an exponent of:

  • 2/3 makes sense if the energy released by the detonation is uniformly distributed in a spherical region (centred at the detonation point). This is apparently the case for blast/pressure energy, so an exponent of 2/3 is appropriate for the blasted area.
  • 1 makes sense if the energy released by the detonation propagates outwards with negligible losses, like the Sun's energy radiating outwards into space. This is seemingly the case for thermal energy, so an exponent of 1 is appropriate for the burned area.

Thanks to a chat with Stan Pinsent, I have realised the maximum burned area, which will arguably be sought in order to maximise damage, is indeed proportional to yield. Given a burst height h, and a point P on the ground whose distance from the point on the ground directly below the explosion point is d, the energy flux at point P along the direction from the explosion point to P is k*Y/(h^2 + d^2), where Y is the yield, and k is the constant of proportionality. The angle theta between the ground and the direction from the explosion point to P satisfies sin(theta) = h/(h^2 + d^2)^0.5, and the energy flux orthogonal to the ground at point P is k*Y*h/(h^2 + d^2)^1.5. If the burned area has an energy flux orthogonal to the ground of at least F, its radius will be d = ((k*Y*h/F)^(2/3) - h^2)^0.5. So the burned area will be A = pi*((k*Y*h/F)^(2/3) - h^2), which tends to 0 as the burst height goes to 0 or infinity. The burst height h* which maximises the burned area is such that dA/dh = 0, i.e. it is h* = (k*Y/F)^0.5/3^0.75. Consequently, the maximum burned area is 2*pi*k*Y/(3^1.5*F), which is proportional to yield.
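The maximisation above can be checked numerically. In the sketch below, k/F is lumped into a single hypothetical constant, since only the proportionality to yield matters:

```python
import numpy as np

def burned_area(h, yield_, k_over_f=1.0):
    # A(h) = pi*((k*Y*h/F)^(2/3) - h^2), as derived above;
    # k_over_f lumps the hypothetical constants k and F
    return np.pi * ((k_over_f * yield_ * h) ** (2 / 3) - h ** 2)

heights = np.linspace(0.01, 2.0, 100_000)
a_max_1 = burned_area(heights, 1.0).max()  # maximum over burst heights, Y = 1
a_max_2 = burned_area(heights, 2.0).max()  # Y = 2
# a_max_2/a_max_1 ≈ 2: the maximum burned area is proportional to yield,
# and matches the analytic 2*pi*k*Y/(3^1.5*F)
```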

However, I have noted Suh 2023, based on Richelson 1980, uses exponents of:

I have asked the author to share his thoughts on this comment.

Vasco Grilo @ 2023-10-30T12:54 (+4)

The more severe scenarios modelled in Xia 2022 are very unlikely given my soot injected into the stratosphere per countervalue detonation of 0.0491 Tg, which I got by giving the same weight to the results I inferred for Reisner's and Toon's views on the soot injected into the stratosphere per countervalue yield. Assuming 22.1 % of offensive nuclear detonations are countervalue regardless of the total number of detonations, my probabilities for an injection of soot into the stratosphere at least as large as the reference values in Xia 2022, owing to a nuclear war before 2050, are as follows[1]. For at least:

For reference, I expect 342 offensive nuclear detonations given one before 2050, corresponding to 75.6 (= 342*0.221) countervalue nuclear detonations, and 3.71 Tg (= 75.6*0.0491) of soot injected into the stratosphere. One may argue this is too small given the possibility of worst case scenarios, but my expected severity of the climatic effects of nuclear war is already driven by worst case scenarios. For my median of 35.1 offensive nuclear detonations given one before 2050, corresponding to 7.76 (= 35.1*0.221) countervalue nuclear detonations, I would only expect 0.381 Tg (= 7.76*0.0491) of soot, i.e. 10.3 % (= 0.381/3.71) of the value I got for my expected detonations.

  1. ^

    Calculated here from 0.32*(1 - beta.cdf("offensive nuclear detonations"/(9.43*10**3), alpha, beta_)).
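As an illustration, the formula in this footnote can be run with scipy; alpha and beta_ below are hypothetical placeholders, not the fitted values used in the post:

```python
from scipy.stats import beta

# Hypothetical shape parameters; the fitted alpha and beta_ from the post
# are not reproduced in this footnote
alpha, beta_ = 0.5, 10.0

def p_soot_at_least(offensive_detonations):
    # 0.32 is the probability of at least one offensive detonation before 2050
    return 0.32 * (1 - beta.cdf(offensive_detonations / (9.43 * 10**3), alpha, beta_))
```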

Vasco Grilo @ 2024-02-02T18:59 (+2)

One may argue the geometric mean is not adequate based on the following. If the values of the soot injected into the stratosphere per countervalue yield I deduced from Reisner's and Toon's views respect the 5th and 95th percentiles of a lognormal distribution, the geometric mean is the median of the distribution, but what matters is its mean. This would be 5.93*10^-4 Tg/kt, i.e. 2.28 (= 5.93*10^-4/(2.60*10^-4)) times my best guess. I did not follow this approach because:

  • It is quite easy for an apparently reasonable distribution to have a nonsensical right tail which drives the expected value upwards. For instance, setting the values of the soot injected into the stratosphere per countervalue yield I deduced from Reisner's and Toon's views to the 25th and 75th percentiles of a lognormal distribution, its mean would be 0.0350 Tg/kt, which is 16.3 (= 0.0350/0.00215) times the 0.00215 Tg/kt I deduced for Toon's view, i.e. apparently too high.
  • I do not have a good sense of the quantiles corresponding to the results I calculated based on Reisner's and Toon's views.
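The 2.28 and 16.3 factors above can be reproduced by fitting a lognormal to the two quantiles. Below is a sketch; the Reisner-side value is not stated in this comment, so it is backed out from the geometric mean of 2.60*10^-4 Tg/kt and Toon's 2.15*10^-3 Tg/kt:

```python
import numpy as np
from scipy.stats import norm

# Soot per countervalue yield (Tg/kt). Toon's value is from the comment; the
# Reisner-side value is backed out from the geometric mean of 2.60e-4 Tg/kt.
toon = 2.15e-3
reisner = 2.60e-4 ** 2 / toon

def lognormal_mean(low, high, q_low, q_high):
    # Fit mu and sigma of a lognormal whose q_low and q_high quantiles
    # are low and high, and return its mean exp(mu + sigma^2/2)
    z_low, z_high = norm.ppf(q_low), norm.ppf(q_high)
    sigma = (np.log(high) - np.log(low)) / (z_high - z_low)
    mu = np.log(low) - sigma * z_low
    return np.exp(mu + sigma**2 / 2)

mean_5_95 = lognormal_mean(reisner, toon, 0.05, 0.95)   # ≈ 5.93e-4 Tg/kt
mean_25_75 = lognormal_mean(reisner, toon, 0.25, 0.75)  # ≈ 0.0350 Tg/kt
```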

I guess it is better to treat the results I inferred from Reisner's and Toon's views as random samples of a lognormal distribution, as opposed to matching them to specific quantiles. I used the geometric mean, which is the MLE of the median of a lognormal distribution[18].

In the post, I used the geometric mean to get the MLE of the mean of lognormal distributions, which I assumed for variables with 2 estimates differing a lot between them that did not range from 0 to 1. I have now realised the geometric mean is the MLE of the median (not mean) of a lognormal distribution, and corrected the text accordingly. However, I would ideally update the post using the MLE of the mean (not median) of lognormal distributions. If I did this, then, from the above, the soot injected into the stratosphere per countervalue yield would become around 2.28 times as large, and, since I think famine deaths due to the climatic effects are roughly proportional to it, I guess these would roughly double. On the other hand, I commented that I may well have overestimated such deaths due to another reason. Accounting for the 2 opposing factors, I guess my best guess for the famine deaths due to the climatic effects would become 1/3 to 3 times as large with 50 % probability.

Vasco Grilo🔸 @ 2024-07-14T12:24 (+2)

Actually, I think I did well using the geometric mean. So I no longer endorse the comment above, and may have overall underestimated deaths in my post.