Anthropic: "Statement from Dario Amodei on our discussions with the Department of War"

By Matrice Jacobine🔸🏳️‍⚧️ @ 2026-02-26T23:45 (+69)

This is a linkpost to https://www.anthropic.com/news/statement-department-of-war

I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.

Anthropic has also acted to defend America’s lead in AI, even when it is against the company’s short-term interest. We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party (some of whom have been designated by the Department of War as Chinese Military Companies), shut down CCP-sponsored cyberattacks that attempted to abuse Claude, and have advocated for strong export controls on chips to ensure a democratic advantage.

Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.

However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:

To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date.

The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Regardless, these threats do not change our position: we cannot in good conscience accede to their request.

It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.

We remain ready to continue our work to support the national security of the United States.


PabloAMC 🔸 @ 2026-02-27T07:17 (+13)

Worth noting that the mass surveillance friction point is only about domestic mass surveillance. Thus, does Anthropic believe mass surveillance of non-Americans is just fine?

Pablo @ 2026-02-27T22:03 (+2)

No matter what Anthropic does, it seems folks always find a way to say something negative about them.

PabloAMC 🔸 @ 2026-02-27T22:14 (+5)

I don’t know about other folks, but I think this is my first criticism of them for as long as I can remember, both online and offline. In general I think they have been fairly responsible with AI safety, or as responsible as I would expect a company to be. But even if I did criticise them a lot, I think it would still be a valid criticism. After all, as a non-American I feel quite uneasy about this, even if they are arguably not the main actor. In any case, I think liberal democracies should oppose mass surveillance in general.

Pablo @ 2026-02-27T22:35 (+6)

Sorry, my comment wasn’t addressed to you in particular. It should probably have been a top-level comment; I posted it as a reply only because your comment was an example (among many) of the phenomenon I was describing. I also oppose mass surveillance, and it makes zero difference to me whether or not the people surveilled comprise the tiny fraction of the world population that happens to be American.

I just find it frustrating that the critical comments directed at Anthropic often fail to grapple with the complexity of the situation and the hard tradeoffs they face.

quinn @ 2026-02-27T20:22 (+7)

> Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

Weird that one of their "red lines" is a moral line in the sand based on convictions in political philosophy, while the other is a "not wrong, but early" claim about reliability. I read this as Dario pretty clearly saying that when AIs are reliable enough to run human-out-of-the-loop kill chains, Anthropic will be happy to power them.

And I'm worried this is a nuance that not all Anthropic employees or https://notdivided.org/ signers have noticed, and that some of them would disagree with.

Jens Nordmark @ 2026-02-27T20:40 (+3)

Well, at some point AI was supposed to be our Leviathan. The reason this has turned so weird is that the US is now an autocracy clearly opposed to the concept of a liberal democratic world-state, which has been the unspoken goal of everything since WW2.

Matrice Jacobine🔸🏳️‍⚧️ @ 2026-02-27T22:35 (+4)

Fully autonomous weapons seem to me a clear-cut case of differential acceleration in any case. They give no real legitimate battlefield advantage to law-abiding democratic countries (human reflexes are near the top of the sigmoid; this is one of our main evolutionarily-selected skills, for obvious reasons). But they allow authoritarians to establish a military dictatorship with minimal staff (historically, "the army is ultimately made up of ordinary people who can refuse to shoot their brethren and/or shoot the dictator instead" has been an important pressure valve), or to organize genocidal massacres with automated recognition of targeted civilians (i.e. the FLI Slaughterbots scenario).

derek445 @ 2026-02-28T10:31 (+1)

This reads like the classic tension between capability and control. Governments want maximum flexibility in a crisis, while builders worry about reliability, misuse, and long-term consequences if the tech is pushed beyond what it can safely do. The hard part is that both sides are right in different ways. Advanced tools can help with analysis and planning today, but once you remove human judgment entirely, or scale them into domestic monitoring, the risks change faster than policy can adapt. The real issue is not access to the technology; it is defining clear boundaries before the technology outpaces the rules meant to govern it.