OpenAI Lands Defense Deal as Anthropic’s Principled Stand Costs It Government Business

There is a painful irony at the center of this week’s AI drama: Anthropic was punished for holding the same ethical principles that OpenAI is now being praised for insisting upon. The difference, apparently, was the willingness to close the deal regardless.
Anthropic had negotiated for months with the Pentagon, seeking a formal agreement that would allow its Claude AI to support military operations while excluding two categories of use — autonomous weapons and mass surveillance. These conditions were not novel; they reflected widely shared norms in the responsible AI community. The Pentagon rejected them anyway.
When Anthropic refused to back down, President Trump took to Truth Social to announce a sweeping ban on all government use of Anthropic technology. He framed the company’s ethical commitments as a political attack on the military, using language that cast any AI ethics policy as inherently suspicious.
OpenAI’s Sam Altman announced a Pentagon deal that same night, insisting his company had secured written commitments on the same two ethical points. He issued a memo to employees calling mass surveillance and autonomous weapons OpenAI’s “main red lines” — an echo of everything Anthropic had been arguing for months.
The AI workforce’s reaction was illuminating. Hundreds of employees across OpenAI and Google signed a letter in solidarity with Anthropic before the deal was announced, warning that the industry was being manipulated into competing against its own values. Anthropic said its stance was final and noted that its ethical restrictions had never, to its knowledge, interfered with a single government mission.