Notes on nukes, IR, and AI from "Arsenals of Folly" (and other books)

By tlevin @ 2023-09-04T19:02 (+20)

Richard Rhodes’s The Making of the Atomic Bomb has gotten lots of attention in AI circles lately, and it is a great read. I get why the people developing AI find it especially interesting, since a lot of it is about doing science and engineering while thinking about the consequences. But from my perspective as someone working on AI governance, the most powerful stuff was in the final few chapters, as the scientists and policymakers begin to grapple with the wild implications of this new weapon for global politics.

Rhodes’s (chronologically) first follow-up, Dark Sun: The Making of the Hydrogen Bomb, is even more densely useful for thinking about emerging technology governance, and I probably recommend it even more strongly than TMOTAB for governance-focused readers.

However, I didn’t really start taking notes during my audiobook-listening until I started my third Rhodes tome, Arsenals of Folly: The Making of the Nuclear Arms Race. AOF is probably less applicable to AI governance than its predecessors, since it mostly focuses on a time when nuclear weapons had been around for several decades rather than when they were a new and transformative technology, but it still had a bunch of interesting details, and I figured I’d spare some of you the trouble of finding them by posting my notes to the forum. (Unfortunately, since I audiobooked, I don’t have page numbers.)

I’ve also included a list of other cool finds from my nuclear/Cold War reading from the last few months in an appendix.

Gell-Mann caveat

With that disclaimer having been said — take all of this with a grain of salt:

Notes from Arsenals of Folly

Some of my takeaways

These are mostly fairly obvious but reinforced by AOF:

Appendix: takeaways/interesting finds from related books

Arsenals of Folly capped a months-long nuclear/Cold War nerdsnipe caused by TMOTAB and Oppenheimer, so I figured I’d also include some discoveries from these other books in this post.

  1. ^

     Is this a term? It should be a term. Like, you notice that the author seems to have gotten something wrong, and you consciously increase your skepticism at the rest of their claims to avoid Gell-Mann Amnesia.

  2. ^

E.g., the first Google result for “historical rates of return” finds that risky assets like housing and equities generally average around 7%, and non-risky assets like bonds around 3%. Maybe the government can beat the market when it invests in public goods, but by ~15-20%?


Luke Eure @ 2023-09-10T11:09 (+6)

Thanks a lot for the great post! 

I've also been learning a lot lately about nuclear safety, deterrence, the cold war, etc. mostly inspired by the Oppenheimer movie. I've been looking for people to talk through these issues with.

If anybody reading this is looking to talk more about these kinds of issues DM me - I'd love to share what I've learned, see what other people have learned, and just talk about the fascinating history and ethics surrounding atomic weapons use.

SummaryBot @ 2023-09-04T20:47 (+1)

Executive summary: The author shares key insights from several books on nuclear weapons and Cold War history that are relevant for thinking about AI governance today.

Key points:

  1. Estimates of technology riskiness are vulnerable to political and economic pressures.
  2. Policy change requires leaders to deeply understand and prioritize an issue.
  3. Technology can change global politics in unpredictable ways.
  4. Empathy and understanding rival perspectives is critical in international relations.
  5. Leaders face domestic political constraints even if personally motivated.
  6. Outside demographic and economic analyses can sometimes outperform domain experts.

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.