Anthropic Sues Pentagon Over AI Safety Red Lines
Anthropic filed two federal lawsuits after the Pentagon labeled it a national security supply chain risk for refusing to drop AI guardrails on autonomous weapons and mass surveillance.

Anthropic filed two federal lawsuits on March 9 after the Trump administration branded the AI company a "supply chain risk to national security" - a designation historically reserved for Chinese state-linked firms like Huawei - for refusing to allow its Claude model to be used for fully autonomous weapons and mass domestic surveillance of American citizens.
TL;DR
- Pentagon designated Anthropic a supply chain risk under 10 U.S.C. § 3252 on February 27, invoking a statute previously used only against foreign adversaries
- Anthropic refused to remove two contractual restrictions: no autonomous lethal weapons, no mass surveillance of US citizens
- Trump ordered all federal agencies to stop using Anthropic products; defense contractors including Lockheed Martin began cutting ties
- OpenAI announced a Pentagon deal the same day Anthropic was blacklisted; 30+ employees from OpenAI and Google DeepMind filed an amicus brief backing Anthropic
- Legal analysts at Lawfare say the designation won't survive court scrutiny
"We're not going to move on those red lines."
- Dario Amodei, Anthropic CEO, February 26, 2026
That statement cost Anthropic its government contracts, triggered a presidential directive against the company, and put it in federal court against nearly three dozen government defendants. The question now is whether courts agree with Lawfare's assessment that the administration's legal position is "close to untenable."
What Happened
The dispute has been building since Defense Secretary Pete Hegseth's January 2026 AI strategy memorandum directed all Defense Department AI contracts to adopt standard "any lawful use" language. Anthropic's existing Pentagon contract included two explicit restrictions - restrictions the company had insisted on from the start.
The Two Red Lines
Anthropic drew its lines in two areas:
- No fully autonomous weapons - Claude cannot direct lethal autonomous systems without human oversight. The company argued that current AI models aren't reliable enough for life-or-death decisions without a human in the loop.
- No mass domestic surveillance - Claude cannot be used to conduct bulk surveillance of American citizens.
Amodei stated publicly that these restrictions "have not affected a single government mission to date." Despite the guardrails, Anthropic was the first frontier AI lab cleared for use on classified US networks, and Claude was reportedly used in intelligence analysis for ongoing military operations - including operations in Iran.
The Ultimatum
On February 24, Hegseth met with Amodei in what both sides described as a cordial meeting. He delivered a hard deadline: comply by 5:01 p.m. Friday, February 27, or face consequences. Pentagon officials reportedly told CNN that non-compliance would trigger an order compelling performance under the Defense Production Act and a formal supply chain risk designation.
Two days later, Anthropic refused publicly. Amodei said the company "cannot in good conscience accede."
The deadline expired. Within hours, Trump directed all federal agencies to stop using Anthropic's products, and Hegseth formally invoked Section 3252. He called Anthropic "sanctimonious" and its position "arrogance and betrayal." Trump labeled the company a "RADICAL LEFT, WOKE COMPANY" and threatened unspecified civil and criminal consequences.
The Pentagon in Arlington, Virginia - the US Department of Defense headquarters central to the dispute.
The Impact
| Stakeholder | Impact | Timeline |
|---|---|---|
| Anthropic | "Hundreds of millions" in near-term revenue at risk; contractors cutting ties | Immediate |
| Defense contractors (Lockheed, etc.) | Required to phase out Anthropic products | 6-month phaseout |
| US military operations | Six-month continuation period despite ban | Through ~August 2026 |
| OpenAI | Pentagon deal signed same day; now primary AI provider | Immediate |
| Other AI labs | Precedent set for what the government can demand from AI vendors | Ongoing |
Companies
Anthropic's lawsuit cites "hundreds of millions of dollars" in near-term risk from canceled government contracts and newly uncertain private ones. The company projects $14 billion in annual revenue, with more than 500 customers paying at least $1 million annually. Most of that revenue comes from non-defense clients using Claude for coding and other commercial tasks, but the designation's chilling effect on those enterprise customers is the bigger concern.
Lockheed Martin and other defense contractors began cutting ties after the designation. Hegseth announced a six-month phaseout - an internal contradiction that Anthropic's lawyers flagged right away: the government simultaneously declared Anthropic an acute security risk and permitted six months of continued use in active military operations.
Users
The Wall Street Journal reported that US military strikes in the Middle East used Anthropic's technology in the hours after Trump announced the ban. The Washington Post reported Claude was processing intelligence and targeting data for ongoing Iran operations as of early March. Hegseth had to walk back the "immediate" language and allow the phaseout precisely because the military had become operationally dependent on the product he had just blacklisted.
Competitors
Hours after Anthropic's designation was announced, Sam Altman posted that OpenAI had signed its own Pentagon deal. Altman later admitted the timing "looked opportunistic and sloppy" and that the announcement was "definitely rushed." He said OpenAI was "genuinely trying to de-escalate things and avoid a much worse outcome."
OpenAI's contract was subsequently amended to add language barring "intentional" domestic surveillance of US persons and prohibiting "deliberate" use to direct autonomous weapons. Critics immediately noted the qualifiers. Brad Carson, a former Army general counsel, said: "They're trying to blind you with complicated legal terms... But the lawyers know what it means. And the lawyers know that this is no guardrail at all." He compared the language to then-Director of National Intelligence James Clapper's 2013 testimony that the NSA didn't collect data on millions of Americans "wittingly."
The full OpenAI contract hasn't been released publicly.
Anthropic CEO Dario Amodei, who publicly refused the Pentagon's ultimatum on February 26.
The Lawsuits
Anthropic filed simultaneously in two courts: the US District Court for the Northern District of California and the US Court of Appeals for the DC Circuit. Named defendants include nearly three dozen entities - entire government agencies and their heads.
The legal arguments:
- First Amendment retaliation - "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech." Anthropic argues it was blacklisted for publicly stating a viewpoint on AI risk.
- Statutory overreach - 10 U.S.C. § 3252 targets foreign adversaries conducting covert supply chain infiltration. Its operative language - "sabotage," "maliciously introduce unwanted function," "subvert" - presumes hostile intent, not a domestic company disclosing contractual restrictions.
- Due process - No notice, no opportunity to respond before the designation.
- Pretext - Federal Acquisition Regulation § 9.402(b) requires that exclusions be imposed "only in the public interest for the Government's protection and not for purposes of punishment." Hegseth's public statements calling Anthropic "sanctimonious" reveal a punitive motive, the suit argues.
Michael Endrias and Alan Rozenshtein published a detailed analysis on Lawfare concluding that "every layer of the government's position has serious problems, and any one of them could independently be fatal." They noted that courts reviewing earlier DOD designations - against Luokung Technology and Xiaomi - found them arbitrary and capricious for lack of notice and explanation. Those were Chinese companies. Anthropic is American.
Michael Sobolik of the Hudson Institute put it bluntly: "We're treating an American AI company worse than we're treating a Chinese Communist Party-controlled AI company" - noting the Pentagon has applied no similar label to DeepSeek.
Industry Reaction
More than 30 employees from OpenAI and Google DeepMind - all signing in a personal capacity, explicitly not representing their employers - filed an amicus brief in support of Anthropic's lawsuits on March 9.
Jeff Dean, Google DeepMind chief scientist and one of the most prominent signatories of the amicus brief backing Anthropic.
The brief's central argument: "Without public AI governance laws, contractual and technical restrictions developers impose on their systems are a critical safeguard against catastrophic misuse." It warned that allowing the designation to stand "will chill open deliberation in our field about the risks and benefits of today's AI systems."
The brief also noted the simpler path the government chose to ignore: the Pentagon could have "simply canceled the contract and purchased the services of another leading AI company" rather than deploying a national security statute against a domestic competitor.
The stakes extend beyond Anthropic. If the administration prevails, any AI vendor with ethical use policies becomes vulnerable to the same treatment the moment it declines a government use case. As we've documented in our AI safety and alignment explainer, the debate over who controls AI guardrails - companies, regulators, or end users - has been building for years. The FAR pretext argument and the statutory scope problem are strong - courts have rejected similar overreach before. But federal courts move slowly, and Anthropic is burning cash while the litigation plays out.
What Happens Next
Two federal courts will now be asked to rule on emergency relief. Anthropic will almost certainly seek a preliminary injunction to halt the designation's effects while litigation proceeds. The six-month phaseout period gives some breathing room, but the contractor relationships are far easier to sever than to rebuild.
The Financial Times reported on March 4 that Anthropic had reopened talks with the Pentagon - suggesting neither side is fully committed to a court fight. A negotiated exit remains possible, and the legal pressure Anthropic has now applied may be the thing that makes the government more willing to find one.
What's clear is that the "any lawful use" clause fight isn't over. The Trump administration has separately proposed requiring it in all civilian AI procurement contracts, which would put every frontier AI lab with safety policies in the same position Anthropic just found itself in.
Sources:
- Anthropic sues Pentagon over supply chain risk designation - NPR
- OpenAI and Google DeepMind employees file amicus brief for Anthropic - TechCrunch
- Anthropic sues Trump administration over supply chain risk tag - Al Jazeera
- Anthropic says Pentagon declared it a national security risk - NBC News
- Pentagon-Anthropic feud over AI guardrails - CBS News
- Pentagon's Anthropic designation won't survive first contact with legal system - Lawfare
- Timeline of the Anthropic-Pentagon dispute - TechPolicy.Press
