Pentagon's use of Claude during Maduro raid sparks Anthropic feud

By Matrice Jacobine🔸🏳️‍⚧️ @ 2026-02-14T19:50 (+16)

This is a linkpost to https://www.axios.com/2026/02/13/anthropic-claude-maduro-raid-pentagon

The U.S. military used Anthropic's Claude AI model during the operation to capture Venezuela's Nicolás Maduro, two sources with knowledge of the situation told Axios.

The latest: After reports on the use of Claude in the raid, a senior administration official told Axios that the Pentagon would be reevaluating its partnership with Anthropic.

Why it matters: The episode highlights the tension major AI labs face as they enter into business with the military while trying to maintain limits on how their tools are used.

Breaking it down: AI models can quickly process data in real time, a capability prized by the Pentagon given the chaotic environments in which military operations take place.

Friction point: The Pentagon wants the AI giants to allow it to use their models in any scenario, so long as that use complies with the law.

What they're saying: "We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise," an Anthropic spokesperson said.

The big picture: Anthropic is one of several major model-makers that are working with the Pentagon in various capacities.

What to watch: Discussions are ongoing between the Pentagon and OpenAI, Google and xAI about allowing the use of their tools in classified systems. Anthropic and the Pentagon are also in discussions about potentially loosening the restrictions on Claude.

Editor's note: The headline and story were updated based on comments from a senior U.S. official.


huw @ 2026-02-14T23:50 (+2)

I’m sure there’s some good money in it, but Anthropic signed this deal around 8 months ago, when they were making substantially less money. I’m just not sure it’s worth the fight when other frontier labs have comparably performant models and substantially fewer moral qualms. Why risk the walkouts and resignations?

LT🔸 @ 2026-02-15T07:19 (+2)

I haven’t spent a lot of time thinking about this, but I suspect a couple of reasons to continue pursuing this contract beyond the present revenue include (1) retaining relationships and a reputation that provide option value for future (especially defense-related) contracts, and (2) increasing the likelihood that safer models are used in high-stakes settings, especially ones that could carry some non-negligible AI-related risks.

While those are plausible (and plausibly right) lines of reasoning, I’m writing them without taking a stance on specific details that are centrally important to their truth (e.g. are Anthropic’s models “safer” than competitors’? Seems quite likely based on reputation, but I’m not well-informed enough to make that claim confidently).

If you’re right that the alternative is the use of another lab’s models for the same jobs, and if the article is right that Anthropic’s models are the only ones being used on classified networks, then I don’t think there are good reasons for Anthropic to intentionally cede that space to competitors.