AI & Singularity
Anthropic Sues US Government Over “Supply Chain Risk” Designation
In an unprecedented move, the US Department of War (formerly the Department of Defense) has designated Anthropic a “supply chain risk to national security,” a classification typically reserved for foreign adversaries. Anthropic CEO Dario Amodei announced Thursday the company has “no choice but to challenge it in court.”
This marks the first time a US company has received this designation, effectively barring Anthropic from all military contracts.

The AI safety standoff has begun.
What Happened
On March 4, 2026, the Department of War notified Anthropic of its decision via letter. The designation stems from Anthropic’s refusal to remove safety guardrails that would have allowed its AI to be used for:
- Fully autonomous weapons systems
- Mass domestic surveillance
A day after Anthropic stated publicly that it would not allow its technology to be used for these purposes, President Trump posted on Truth Social calling Anthropic a “RADICAL LEFT, WOKE COMPANY” that made a “DISASTROUS MISTAKE” trying to “strong-arm” the government.
Why It Matters
This isn’t about all AI companies. OpenAI struck a deal with the Department of War that included explicit guardrails against:
- Autonomous weapons
- High-stakes automated decisions
- Mass domestic surveillance
- Intelligence agency use (NSA)
OpenAI’s statement noted: “As part of our deal here, we asked that the same terms be made available to all AI labs, and specifically that the government would try to resolve things with Anthropic.”
The Stakes
For AI safety: This is a pivotal moment. Anthropic chose principles over Pentagon contracts. The question now is whether AI companies can maintain safety commitments while working with governments, or whether those commitments become optional when the stakes get high enough.
For precedent: If Anthropic loses, every AI company in the US could face similar pressure. The government is effectively saying: remove your guardrails, or lose access to federal contracts.
For the singularity: This is about who controls AGI development. Military applications? Commercial interests? Or the companies building the technology?
What’s Next
The lawsuit will challenge whether the government can legally designate a US company as a supply chain risk, a designation Congress created for foreign threats, not domestic innovators.
The outcome could shape the future of AI development for decades.
Sources: The Register, NPR, Fortune, The Verge, Anthropic statement