Sam Altman says OpenAI shares Anthropic's red lines in Pentagon fight

By Matrice Jacobine🔸🏳️‍⚧️ @ 2026-02-27T15:42 (+27)

This is a linkpost to https://www.axios.com/2026/02/27/altman-openai-anthropic-pentagon

OpenAI CEO Sam Altman wrote in a memo to staff that he will draw the same red lines that sparked a high-stakes fight between rival Anthropic and the Pentagon: no AI for mass surveillance or autonomous lethal weapons.

Why it matters: If other leading firms like Google follow suit, this could massively complicate the Pentagon's efforts to replace Anthropic's Claude, which was the first model integrated into the military's most sensitive work.

The flipside: Altman made clear he still wants to strike a deal with the Pentagon that would allow ChatGPT to be used for sensitive military contexts.

What he's saying: "[R]egardless of how we got here, this is no longer just an issue between Anthropic and the [Pentagon]; this is an issue for the whole industry and it is important to clarify our stance," Altman wrote Thursday evening in a memo obtained by Axios.

The intrigue: ChatGPT is already available in the military's unclassified systems, and talks to move it into the classified space have accelerated amid the Pentagon-Anthropic fight, sources tell Axios.

In his memo, Altman wrote that the military will need AI, and he hopes to "help de-escalate things."

Between the lines: OpenAI's ideas for enforcing its red lines include preserving the company's ability to continuously strengthen its security and monitoring systems as it learns from real-world deployments, a source familiar told Axios.

What to watch: Based on how Pentagon officials have described their position to Axios, those proposals could face the same resistance Anthropic encountered: too much private company influence over critical government work.

State of play: After Anthropic CEO Dario Amodei stood firm by his company's red lines, employees from OpenAI and Google signed onto a letter in solidarity on Thursday, pushing executives at their respective companies to resist "pressure" from the Pentagon.

The other side: Defense officials contend they have no intention of conducting mass surveillance or swiftly deploying autonomous weapons.

What to watch: "We have had some meetings to discuss this over the past couple of days, and will have more tomorrow with our safety teams before we decide what to do. We will also set up an all hands and office hours as soon as we can," Altman said, referring to those negotiations.


Dylan Richardson @ 2026-02-28T13:19 (+1)

I am still confused about what exactly OpenAI is requiring here and how (or if) it diverges substantively from Anthropic's contract. Is this merely a symbolic victory for the DoD? Or is the language about "lawful use" allowing a back door somehow?

Matrice Jacobine🔸🏳️‍⚧️ @ 2026-02-28T13:43 (+3)

https://x.com/austinc3301/status/2027639210874966060

Dylan Richardson @ 2026-02-28T14:01 (+1)

I found this Peter Wildeford piece helpful. My rough understanding now is that it was the (implicit?) rejection of "lawful use", especially within classified contexts, that was the contentious bit all along.

But I'm still uncertain about the extent to which these contracts can be renegotiated in the future as capabilities evolve, and the extent to which black-swan-type future capabilities could be "lawfully" used in secret, under classification. And presumably the nature of classified uses will be kept secret from OpenAI as well?