Lessons from the Iraq War for AI policy
By Buck @ 2025-07-10T18:52 (+61)
I think the 2003 invasion of Iraq has some interesting lessons for the future of AI policy.
(Epistemic status: I’ve read a bit about this, talked to AIs about it, and talked to one natsec professional about it who agreed with my analysis (and suggested some ideas that I included here), but I’m not an expert.)
For context, the story is:
- Iraq was sort of a rogue state after invading Kuwait and then being repelled in 1990-91. After that, they violated the terms of the ceasefire, e.g. by ceasing to allow inspectors to verify that they weren't developing weapons of mass destruction (WMDs). (For context, they had previously developed biological and chemical weapons, and used chemical weapons in war against Iran and against various civilians and rebels). So the US was sanctioning and intermittently bombing them.
- They allowed inspections again when it looked like they were about to be invaded, but the invasion happened anyway.
- After the war, it became clear that Iraq actually wasn’t producing WMDs. This obviously raises the question of why they rejected the inspections. I think it was a combination of them wanting strategic ambiguity to deter their regional enemies, and it being politically desirable to reject inspections that they viewed as violations of their sovereignty.
- One faction of the Republican political establishment, centered around a think tank called the Project for the New American Century (PNAC), had been advocating regime change in Iraq since 1997 (e.g. here), because they thought Iraq posed a risk of using WMDs against America or American allies, and perhaps because they thought nation-building was good for American security.
- At the time, the track record of nation-building looked somewhat better, because the most salient American experiences with it were Germany and Japan.
- When Bush ran for president in 2000, he criticized Clinton’s foreign interventions, said he opposed nation-building, and advocated a “humble” foreign policy. His campaign emphasized domestic priorities.
- His staff had a mix of views on foreign policy.
- The PNAC/“neoconservative” faction, including Vice President Cheney and Secretary of Defense Rumsfeld, wanted to exploit American hegemony to promote democracy and topple regimes they disliked.
- Secretary of State Powell and the State Department were cautious and opposed to unilateral action that might upset allies.
- National Security Advisor Rice, a scholar of the Soviet Union, was mostly focused on great-power competition with Russia and China.
- Many other senior staff were focused on domestic issues.
- Before 9/11, Bush wasn’t paying much attention to foreign policy, the admin didn’t really do much on it, and none of those factions was dominant. There’s no way they would have invaded Iraq.
- Then 9/11 happened.
- Bush and America were scared and vengeful. This was probably the biggest shock since World War 2. It seemed plausible that this was the beginning of a trend of massive terrorist attacks.
- Cheney in particular was terrified of the prospect of WMDs being used for much larger-scale acts of terrorism. (He seems like the kind of guy who worries a lot about novel extreme threats. For example, he’d previously been involved in a bunch of continuity-of-government exercises.) He was concerned by wargames indicating that millions would die if a terrorist attacked an American city with smallpox.
- Bush kept asking whether there was plausibly a connection between 9/11 and Iraq. There wasn’t.
- But Iraq seemed vaguely like a threat to American national security. And the people who had already wanted to invade Iraq now found it easier to argue for it, by noting that it’s dangerous to have a state sponsor of terrorism that’s known to make WMDs.
- Many people in the admin, and then in the public, believed Iraq had WMDs. This was false. The admin made this mistake because of some mix of confirmation bias and political pressure to advocate for the invasion, and then people outside the admin deferred too much to the admin (whose evidence was in substantial part classified). (And maybe it was partly overcorrection: the intelligence community had underestimated the state of Iraq’s WMD programs before the Gulf War, e.g. it thought Iraq was further from developing nuclear weapons than it actually was, and it was obviously traumatized by its failure to pay more attention to the pre-9/11 evidence of an imminent al-Qaeda terrorist plot.)
- They invaded.
- (Not relevant to the main point here, but they quickly made egregious errors, like firing most of the Iraqi army and government, without which the nation-building might have succeeded.)
Takeaways
A few parts of this story stood out to me as surprisingly relevant to how AI might go:
- A shocking event led to the dominance of a political faction that previously had just been one of several competing factions, because that faction’s basic vibe (that we should make use of American hegemony, and that rogue states are a threat to national security) was roughly supported by the event.
- The response was substantially driven by elite judgment rather than popular judgment. The American public wasn’t calling for an invasion of Iraq until the admin started advocating for it; the invasion was just vaguely related to 9/11, semi-justifiable, and popular among a particular set of elites.
- The response involved some generalization and some scope sensitivity. The admin was terrified of bigger attacks, especially ones using chemical weapons and biological weapons.
- Notably, this reaction was completely absent after covid and the Spanish flu. One person I spoke to discussed evidence that this is because humans react very differently to disease than to other types of threats.
- The response was incompetently executed and had awful consequences.
- And this failure caused huge problems for people who had supported the war (like Hillary Clinton and the Republican establishment) and was a huge boon for its opponents (most famously Obama, and also Bernie Sanders).
- I’m kind of confused about why these consequences didn’t hit home earlier. By the time of the 2004 presidential election, it was pretty clear that Iraq didn’t have WMDs. I would have thought the Democratic nominee should have centered his messaging on: “Bush told us that secret intelligence indicated Iraq had WMDs, and that because of this we needed to invade. That turned out to be totally and predictably wrong and led us into a dumb war. Such an egregious error disqualifies you from the presidency.” Kerry (the Democratic nominee) went much softer than that, probably partly because he had voted in support of the Iraq war and so couldn’t be too harsh on the decision. (Matthew Yglesias has a good article that discusses the history of support for the Iraq war.)
So to spell out some possibilities this implies about AI:
- If there’s some non-existential AI catastrophe (even one on the scale of 9/11), it might open a policy window for responses that seem extreme and that aren’t just direct, obvious responses to the literal bad thing that occurred. E.g. maybe an extreme misuse event could empower people who are mostly worried about an intelligence explosion and AI takeover.
- Those factions might make bad policy decisions or execute terribly on them.
Davidmanheim @ 2025-07-11T05:01 (+15)
The key takeaway, as I and others have argued, should be to promote investment in clear plans for what post-warning-shot AI governance looks like. Unfortunately, despite the huge contingent value of such plans, there is very little good work on the topic.
LuisEUrtubey @ 2025-07-11T15:32 (+3)
In a scenario like that, it would also be important to prevent something like what happened to the State Department’s Future of Iraq plans.
Peter @ 2025-07-11T12:57 (+3)
Do you have ideas about how we could get better plans?
Davidmanheim @ 2025-07-14T06:49 (+2)
Convince funders to invest in building those plans: sketching out futures and treaties that could work robustly to stop the likely nightmare of default AGI/ASI futures.
Peter @ 2025-07-14T18:59 (+3)
Would be curious to hear your thoughts on this as one strategy for eliciting better plans.
Davidmanheim @ 2025-07-15T03:44 (+2)
Definitely seems reasonable, but it would ideally need to be done somewhere high prestige.
CB🔸 @ 2025-07-15T06:23 (+7)
I think this post is missing a huge factor: oil.
“Of course it’s about oil; we can’t really deny that,” said Gen. John Abizaid, former head of U.S. Central Command and Military Operations in Iraq, in 2007.
The Bush family had massive ties to the oil industry, including campaign funding, and according to a former Treasury secretary there were apparently plans to invade Iraq before 9/11 (though no politically acceptable opportunity to act on them).
More data in this article and in the excellent book ‘Oil, Power and War’, which shows that securing oil has been a major factor in geopolitics throughout the last 100 years.
https://edition.cnn.com/2013/03/19/opinion/iraq-war-oil-juhasz
https://www.amazon.fr/Oil-Power-War-Dark-History/dp/1603589783
Oil is not like any other resource: without it, the world’s economies, food systems, and armies would crash within a week. Access to energy and resources is likely to be a major factor in political decisions as well.
DylanRMatthews @ 2025-07-17T14:09 (+4)
Loved this piece, and I think it articulates something important that even the big histories of the war I’ve read (and the contemporary commentaries I remember) missed:
> A shocking event led to the dominance of a political faction that previously had just been one of several competing factions, because that faction’s basic vibe (that we should make use of American hegemony, and that rogue states are a threat to national security) was roughly supported by the event.
There was a lot of incredulity about responding to 9/11 by going after Iraq, given that almost no one claimed Iraq was behind the attack; at best you had sort of desperate allusions to loose linkages between Hussein and al-Qaeda affiliates, all of which turned out to be false. But the skeptics were probably responding overly literally, in terms of the actual chain of logic, instead of asking “which factions in government are empowered by this development?”, the answer to which was the PNAC crowd that wanted an Iraq invasion.
Seems useful in thinking about moments like MechaHitler or the DeepSeek shock. In the latter, for instance, a lot of people made good arguments that DeepSeek’s ability to train an excellent model with less compute implies that compute is more valuable than we thought, and that export controls might be more important. But despite that, it probably on the margin empowered Jensen Huang and the laissez-faire faction by making Biden’s policies look feckless.
ida @ 2025-07-21T11:23 (+1)
Loved this piece. People should take non-AI context into account far more than they do when forming their conclusions on AI governance/safety.
SummaryBot @ 2025-07-10T20:12 (+1)
Executive summary: This exploratory post draws parallels between the 2003 Iraq War and future AI policy, suggesting that a shocking AI-related event—akin to 9/11—could empower preexisting elite factions with extreme views, potentially leading to poorly justified or harmful policy responses driven more by elite consensus than public demand.
Key points:
- Historical analogy: The Iraq War was not an inevitable response to 9/11, but resulted from a shift in elite power dynamics, particularly the rise of a faction that had long supported intervention in Iraq, catalyzed by a national crisis.
- Elite-driven decisions: Policy responses were largely shaped by elite beliefs and bureaucratic dynamics, with limited initial public pressure for war; classified intelligence and deference to authority played key roles in building public support.
- Emotional generalization: Fear of WMD-related mass terror drove a response that generalized well beyond the literal attack, despite weak evidence linking Iraq to 9/11, highlighting how novel or extreme threats can distort judgment.
- Execution failures and consequences: The war’s disastrous rollout and false premises had long-term political fallout, especially for leaders who supported it, although accountability was delayed and muted.
- AI implications: A similarly non-existential AI crisis could catalyze overreactions or radical policy shifts by empowering factions with extreme views (e.g. focused on existential risk or AI takeover), even if the triggering event is only loosely related.
- Policy caution: The author implies that future AI governance should anticipate and guard against opportunistic or overbroad responses during crises, especially from elite groups with preexisting agendas.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.