Federal Judge Halts Pentagon's Anthropic Blacklist
A federal judge blocked the Pentagon's Anthropic blacklist on March 26, ruling the government engaged in First Amendment retaliation by punishing the company for refusing to drop AI safety guardrails.

On March 26, U.S. District Judge Rita F. Lin granted Anthropic a preliminary injunction, blocking both the Pentagon's "supply chain risk" designation and President Trump's executive directive ordering every federal agency to stop using Claude. The 43-page ruling is one of the most pointed rebukes of an administration's use of national security powers against a domestic tech company in recent memory.
Key Findings
- Judge Lin found Anthropic is likely to succeed on three separate legal theories: First Amendment retaliation, violation of the Administrative Procedure Act, and Fifth Amendment due process
- The ruling blocks the Pentagon's supply-chain risk designation and Trump's agency-wide ban pending the full merits hearing
- A seven-day stay gives the government time to appeal to the Ninth Circuit before the injunction takes effect
- Case caption: Anthropic v. U.S. Department of War - the Pentagon was renamed under the Trump administration
What the Judge Found
Judge Lin's opinion doesn't pull punches. She traced the sequence of events from Anthropic's refusal to drop its AI safety guardrails, to the Pentagon's escalating pressure, to the supply-chain designation itself - and found the pattern pointed in one direction.
"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government."
- Judge Rita F. Lin, N.D. California, March 26, 2026
The supply-chain risk statute at issue, 10 U.S.C. § 3252, was designed to block foreign adversaries - Chinese state-linked firms like Huawei are the template case. Lin found nothing in the statute's text or legislative history that contemplates using it against an American company for publicly criticizing a government contract position.
Three Legal Theories, All Likely to Succeed
The ruling found Anthropic had shown a "likelihood of success on the merits" on all three of its claims.
On the First Amendment, Lin concluded the government punished Anthropic specifically for going to the press - not for any genuine national security concern. DOD internal records showed the supply-chain label was applied because of Anthropic's "hostile manner through the press." The judge called this "classic illegal First Amendment retaliation."
On the Administrative Procedure Act, she found the designation was "likely both contrary to law and arbitrary and capricious." The government offered no coherent explanation for how Anthropic's refusal to remove ethical guardrails - restrictions on autonomous weapons and domestic mass surveillance - translated into a supply-chain threat.
On due process, Lin found Anthropic received no notice or opportunity to respond before the designation took effect. The company learned it had been branded a national security risk the same day the order hit, with no prior hearing.
"An Attempt to Cripple Anthropic"
At the March 24 preliminary hearing, two days before her ruling, Lin had already signaled her thinking.
"It looks like an attempt to cripple Anthropic. And specifically my concern is whether Anthropic is being punished for criticizing the government's contracting position in the press."
- Judge Rita F. Lin, March 24 hearing
The ruling confirmed that view. Lin wrote that the Pentagon's combined actions - designation plus government-wide ban plus contractor pressure - "would cripple Anthropic," and that this harm was "irreparable" in the legal sense because no damages award could fully compensate for the reputational and operational damage of being labeled a national security risk.
Defense Secretary Pete Hegseth invoked the supply-chain statute against Anthropic in February after the company refused to drop its AI safety guardrails.
Source: wikimedia.org
How the Dispute Got Here
The underlying conflict has been building since late February. Anthropic refused Pentagon demands that it drop two specific contractual restrictions: a prohibition on using Claude for fully autonomous lethal weapons decisions, and a prohibition on using it for mass domestic surveillance of American citizens.
Defense Secretary Pete Hegseth called the restrictions "sanctimonious" and argued they created a potential "kill switch" that could disable military AI operations in a crisis. When Anthropic went public with its position - publicly refusing to comply even after a five-day ultimatum - the retaliation was swift. Trump posted on Truth Social directing all agencies to "immediately cease" using Anthropic products. Hegseth invoked the supply-chain statute the same day.
The result was immediate: defense contractors including Lockheed Martin began cutting Claude out of their systems, and commercial enterprise customers started calling their Anthropic account managers to ask whether their contracts were at risk.
Feb. 24, 2026 - Hegseth delivers ultimatum: remove all Claude usage restrictions by 5:01 PM Feb. 27 or face consequences.
Feb. 26, 2026 - Anthropic publicly refuses, maintaining restrictions on autonomous weapons and domestic surveillance.
Feb. 27, 2026 - Trump orders all federal agencies to stop using Anthropic products. Hegseth formally labels Anthropic a supply-chain risk.
Mar. 9, 2026 - Anthropic files federal lawsuit seeking injunctive relief.
Mar. 17, 2026 - DOJ files its legal response opposing the injunction.
Mar. 24, 2026 - Judge Lin holds oral argument; calls the ban "troubling" and signals skepticism of the government's position.
Mar. 26, 2026 - Preliminary injunction granted. Seven-day stay issued to allow government appeal.
The Northern District of California federal courthouse complex in San Francisco, where Judge Rita F. Lin issued the 43-page injunction on March 26.
Source: wikimedia.org
The Coalition Behind Anthropic
The breadth of the amicus support was striking. Microsoft, Google, and researchers from OpenAI filed briefs backing Anthropic. The ACLU argued on First Amendment grounds. Retired military leaders submitted a brief contesting the claim that Anthropic's guardrails posed a genuine security threat. The American Federation of Government Employees - representing 800,000 federal workers - filed in support.
Former Trump AI advisor Dean Ball, whose statement was included in the amicus filings, put it plainly: "This is simply attempted corporate murder."
That breadth of support from across the political spectrum made the government's position harder to defend. The DOJ's brief largely rested on national security deference - the argument that courts should not second-guess executive branch security determinations. Lin rejected that framing, finding the designation did not rest on a genuine security assessment but on a determination to punish public criticism.
Anthropic's Statement
Anthropic's response was measured.
"We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable technology."
- Anthropic, March 26, 2026
That framing - keep the door open with the government while winning in court - reflects Anthropic's position throughout the dispute. The company has consistently said it wants to work with defense and intelligence agencies, just not without ethical constraints on autonomous killing and mass surveillance.
What It Does Not Tell You
The preliminary injunction isn't a final ruling. Lin was clear that she was deciding whether Anthropic had demonstrated a "likelihood of success" on the merits - not that it had won. The seven-day stay means the government will almost certainly appeal to the Ninth Circuit before the injunction takes effect.
A Ninth Circuit panel could stay the preliminary injunction pending appeal, which would restore the ban while the case works through the court system. Given the current makeup of the federal judiciary and the Trump administration's pattern of appealing adverse rulings, that scenario is live.
The ruling also doesn't resolve the underlying question of whether the Pentagon can impose any conditions on Anthropic's participation in federal contracting - or whether Anthropic can impose usage restrictions that conflict with government demands. Those questions go to the merits, and Lin hasn't answered them.
The deeper issue this case surfaces is one the AI industry has largely avoided confronting directly: whether the government can use procurement power to override a company's own ethical commitments about how its technology is used. Anthropic's safety-focused approach has always been its stated competitive differentiator. This ruling, at minimum, means that differentiator can't be stripped by executive fiat - at least not through the supply-chain statute.
Sources: CNN - NPR - CBS News - WVAS FM / NPR - TechPolicy.Press
