Pentagon Summons Anthropic CEO, Threatens 'Supply Chain Risk' Designation Over Military AI Limits

Defense Secretary Hegseth gives Anthropic CEO Dario Amodei an ultimatum: lift Claude's military restrictions or face blacklisting from the entire US defense supply chain.

Defense Secretary Pete Hegseth has summoned Anthropic CEO Dario Amodei to the Pentagon for a meeting on Tuesday morning that multiple sources describe as an ultimatum over the terms under which the US military can use Claude, Anthropic's frontier AI model.

"Anthropic knows this is not a get-to-know-you meeting. This is not a friendly meeting. This is a sh*t-or-get-off-the-pot meeting," a senior Defense official told Axios.

The stakes are straightforward: if Amodei does not agree to lift Claude's restrictions on military applications, Hegseth is prepared to designate Anthropic a "supply chain risk" - a label typically reserved for foreign adversaries like Huawei - that would effectively banish the company from the entire US defense ecosystem.

TL;DR

  • Defense Secretary Hegseth has summoned Anthropic CEO Dario Amodei to the Pentagon on Tuesday for what officials call an ultimatum over military use of Claude
  • The Pentagon wants all AI labs to allow "all lawful purposes" military use, including weapons development and intelligence collection
  • Anthropic refuses to lift two red lines: mass surveillance of Americans and fully autonomous weapons without human involvement
  • If talks fail, Hegseth may designate Anthropic a "supply chain risk," forcing every Pentagon contractor to certify they do not use Claude
  • Claude is currently the only frontier AI model on the military's classified networks, deployed through Palantir

The Core Dispute

The conflict has been building for months. In January, Hegseth's AI strategy demanded the elimination of "company-specific guardrails" on military AI use. The Pentagon is pushing four leading AI labs - Anthropic, OpenAI, Google, and xAI - to let the military use their tools for "all lawful purposes," including the most sensitive areas of weapons development, intelligence collection, and battlefield operations.

Three of the four are bending. xAI has reportedly agreed to "all lawful use" at any classification level. OpenAI and Google have shown flexibility on unclassified work and continue negotiating classified access terms. Anthropic is the holdout.

What Anthropic Will Not Do

Anthropic maintains two non-negotiable red lines: it will not allow Claude to be used for mass surveillance of Americans, and it will not permit fully autonomous weapons systems that operate without human involvement. The company has signaled willingness to loosen other restrictions, but these two remain firm.

What the Pentagon Wants

Emil Michael, the Pentagon's Undersecretary of Defense for Research and Engineering, has framed the dispute as a question of democratic legitimacy.

"What we're not going to do is let any one company dictate a new set of policies above and beyond what Congress has passed. That is not democratic. Congress writes bills, the president signs them, agencies write regulations, and people comply."

Michael urged Anthropic to "cross the Rubicon" on military applications, arguing that existing Congressional laws and DOD regulations already govern surveillance and autonomous weapons, making vendor-imposed restrictions unnecessary.

Impact Assessment

| Stakeholder | Impact | Timeline |
| --- | --- | --- |
| Anthropic | Loss of $200M defense contract; potential blacklisting from all Pentagon-adjacent business | Weeks |
| Palantir | Forced to cut ties with key AI partner if supply chain risk designation is imposed | Weeks to months |
| Pentagon contractors | Must certify they do not use Claude in any workflows - a problem given Anthropic says 8 of the 10 largest US companies use it | Months |
| Pentagon operations | Loses the only frontier AI model on classified networks | Immediate |
| Other AI labs | Pressure to agree to unrestricted military terms or face similar treatment | Ongoing |

Companies

The $200 million defense contract is a small fraction of Anthropic's $14 billion in annual revenue. But the real threat is the supply chain risk designation. If imposed, every company that does business with the Pentagon would need to certify that it does not use Claude in its operations. Given Claude's penetration across enterprise America, that would be a logistical nightmare - and a serious commercial blow.

Palantir, which provides the secure cloud infrastructure connecting Claude to classified military networks, is caught in the middle. The defense contractor has stayed conspicuously quiet as tensions have escalated. A supply chain risk label on Anthropic would force Palantir to sever one of its most important AI partnerships.

For the other labs, the message is clear: play ball or face consequences. The Pentagon has turned the Anthropic standoff into a precedent-setting negotiation that will define the terms under which Silicon Valley builds for the military.

Users

The immediate irony is that the Pentagon's hardball tactics could leave it worse off. Claude is the only frontier AI model authorized on classified defense networks. If Anthropic walks or gets pushed out, the military loses its most capable tool for sensitive intelligence and defense work - with no ready replacement.

The broader AI safety implications are significant. Anthropic was founded specifically around the principle that frontier AI systems require careful guardrails. If the company that built its entire identity on responsible AI development cannot hold its own safety lines against government pressure, it raises questions about whether any lab can.

Competitors

The dynamics between the four labs tell a story. xAI, Elon Musk's startup, agreed to full military access without apparent friction - consistent with Musk's close relationship with the Trump administration. OpenAI and Google are navigating a middle path. Google previously pulled out of Project Maven in 2018 over military AI concerns, and Michael pointedly referenced that reversal, expressing hope that Anthropic would follow the same trajectory.

The Maduro Raid Trigger

The tensions reached a boiling point after Claude was reportedly used during the January 3 special operations raid that captured Venezuelan President Nicolas Maduro. The exact role Claude played remains classified, but an Anthropic official contacted a senior Palantir executive questioning whether the company's software had been used in the operation. The Palantir executive interpreted this as disapproval and reported it to the Pentagon, escalating what had been a simmering contract dispute into an open confrontation.

Anthropic has declined to confirm or deny Claude's involvement: "We cannot comment on whether Claude was used for any specific operation."

Pentagon spokesman Sean Parnell has been blunt about the military's expectations.

"Our nation requires that our partners be willing to help our warfighters win in any fight."

  • January 3, 2026 - US special operations raid captures Venezuelan President Maduro. Claude reportedly used in the operation through Palantir infrastructure.

  • January 16, 2026 - Hegseth criticizes AI models that "won't allow you to fight wars" while announcing Grok's addition to Pentagon AI providers.

  • February 15, 2026 - Axios reports Pentagon threatens to cut off Anthropic in AI safeguards dispute.

  • February 16, 2026 - Axios reports Pentagon threatens supply chain risk designation; Anthropic described as "close" to being cut off.

  • February 19, 2026 - Pentagon CTO Emil Michael urges Anthropic to "cross the Rubicon" on military use; calls company restrictions "not democratic."

  • February 21, 2026 - Fortune reports Trump team "livid" about Amodei's position.

  • February 23, 2026 - Hegseth summons Amodei to the Pentagon for what officials describe as an ultimatum meeting, scheduled for Tuesday.

What Happens Next

Tuesday's meeting will determine whether Anthropic and the Pentagon can find a compromise - or whether the relationship collapses entirely. Anthropic's position has been remarkably consistent: it will support national security applications but will not greenlight mass domestic surveillance or weapons that kill without a human in the loop. The Pentagon views those conditions as an unacceptable intrusion by a private company into matters of military authority.

The most likely outcome is a partial deal. Anthropic loosens some restrictions, the Pentagon gets expanded access on classified networks, and both sides paper over the remaining disagreements with ambiguous language that defers the hardest questions. That is how these things usually resolve in Washington.

But if talks collapse, the fallout goes far beyond one contract. A supply chain risk designation would send a signal to every technology company considering defense work: there is no room for ethical red lines in the new AI-industrial complex. The labs that built frontier models on promises of safety and responsibility would face a stark choice between their principles and their access to the world's largest customer.

Anthropic has said the conversations are "productive" and "in good faith." Pentagon officials say negotiations have shown no progress and are on the verge of breaking down. Someone is wrong.


The $200 million contract is a rounding error for a company worth $380 billion. The precedent is not.

About the author

Daniel, AI Industry & Policy Reporter, covers the business side of artificial intelligence - funding rounds, corporate strategy, regulatory battles, and the power dynamics between the labs racing to build frontier models.