For the first time in American history, the Pentagon has labeled a US technology company a “supply chain risk.” The target? Anthropic — the AI safety lab behind Claude, one of the world’s most popular AI assistants.
The designation, announced Thursday and effective immediately, could bar Anthropic from all US government business. It’s a classification typically reserved for foreign adversaries like Huawei and ZTE. Now it’s being used against a San Francisco startup because it refused to let the military use its AI without guardrails.
What Happened
The confrontation has been building for weeks. As the US prepared for military strikes on Iran, the Pentagon demanded unfettered access to Anthropic’s Claude AI systems. Anthropic pushed back, seeking guarantees that its technology wouldn’t be used for domestic mass surveillance or deployed in autonomous weapons without human oversight.
The Pentagon didn’t appreciate the conditions.
On Friday, Trump posted on Truth Social directing all federal agencies to stop using Anthropic’s products: “We don’t need it, we don’t want it, and will not do business with them again!” He added that he would “never allow a radical left, woke company to dictate how our great military fights and wins wars.”
Defense Secretary Hegseth followed up on X, announcing that the supply chain risk designation would be “immediate,” prohibiting any company that works with the military from conducting commercial activity with Anthropic.
On Thursday evening, Anthropic CEO Dario Amodei confirmed the company had received a formal letter of designation. His response was blunt: “We do not believe this action is legally sound, and we see no choice but to challenge it in court.”
OpenAI Steps Into the Void
Hours after Trump’s Truth Social post, OpenAI CEO Sam Altman announced a new Pentagon contract. The key difference: OpenAI permits “all lawful uses” of its tools, without specifying ethical boundaries.
Altman claimed the deal has “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.” But critics note that “lawful uses” is an expansive category when the government itself decides what is lawful.
The competitive dynamic is unmistakable. Anthropic maintained safety principles and lost the contract. OpenAI dropped the restrictions and won it.
The Industry Reacts
Microsoft, which has deep partnerships with both OpenAI and Anthropic, attempted to thread the needle. The company said it would continue embedding Anthropic technology in its products — except for US Department of Defense clients.
“Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers,” Microsoft told the BBC.
Senator Kirsten Gillibrand called the designation “shortsighted, self-destructive, and a gift to our adversaries.” She added: “The government openly attacking an American company for refusing to compromise its own safety measures is something we expect from China, not the United States.”
Why This Matters Beyond AI
This isn’t just an AI story. It’s a precedent-setting moment for every technology company that builds critical infrastructure.
Safety as competitive disadvantage. The message to every AI lab is clear: maintain ethical guardrails and risk losing government contracts. Drop them and you’re rewarded. This inverts the entire premise of responsible AI development.
Supply chain weaponization. The “supply chain risk” designation was designed to protect national security from foreign threats. Using it against a domestic company for refusing to remove safety features is a novel — and critics say dangerous — expansion of executive power.
Concentration of military AI. With Anthropic frozen out, the Pentagon’s most advanced AI capabilities now flow through a single vendor: OpenAI. That’s not a supply chain improvement. That’s a single point of failure.
The chilling effect. Every AI researcher, every startup founder, every engineer working on safety is watching. If the most well-funded AI safety lab in the world can be designated a national security risk for maintaining guardrails, what happens to smaller companies that try the same?
The Bigger Picture
Just weeks ago, the US was pushing international AI safety frameworks. Now it’s punishing the company most associated with AI safety for actually practicing it.
The irony hasn’t been lost on observers. Anthropic was the first advanced AI company to have its tools deployed in government agencies doing classified work. It wasn’t anti-government. It was pro-guardrails.
Meanwhile, Claude remains the most downloaded AI app in several countries, with more than a million new users signing up every day. The commercial market doesn’t seem bothered by the Pentagon’s designation.
But the precedent is set. In the emerging AI arms race, the government has made its position clear: compliance beats safety, every time.
Anthropic’s legal challenge is expected in the coming weeks. The outcome could define the relationship between AI companies and the state for a generation.
Sources: BBC, The New York Times, Bloomberg, NPR
