Anthropic Rejects Pentagon's 'Final Offer' - Says It Won't Build Mass Surveillance or Autonomous Weapons

Anthropic CEO Dario Amodei published a public statement rejecting the Pentagon's final terms, saying the proposed compromise language 'would allow those safeguards to be disregarded at will.' The Friday 5:01 PM deadline still stands.

Anthropic just went public with its answer. In a statement published Thursday evening, CEO Dario Amodei said the Pentagon's "best and final offer" - delivered Wednesday night - "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons." The proposed compromise language, Amodei wrote, "was paired with legalese that would allow those safeguards to be disregarded at will."

"We cannot in good conscience accede to their request."

- Dario Amodei, CEO, Anthropic

The Friday 5:01 PM deadline set by Defense Secretary Pete Hegseth still stands. If Anthropic doesn't accept the Pentagon's terms by then, the company faces contract cancellation, a "supply chain risk" designation normally reserved for adversarial nations like China, and potential Defense Production Act invocation.

Key Facts

  • What happened: Anthropic publicly rejected the Pentagon's final terms for unrestricted military AI use
  • Who: CEO Dario Amodei, in a statement published on Anthropic's website
  • Two red lines: mass domestic surveillance and fully autonomous weapons
  • Deadline: 5:01 PM ET, Friday, February 28, 2026
  • Contract at risk: $200 million DoD prototype contract
  • Threat: supply chain risk designation, plus possible DPA compulsion
  • Company valuation: $380 billion (post-$30B Series G)

What Anthropic Rejected

The Pentagon's "Compromise"

The Pentagon's final offer included three provisions it characterized as concessions:

  1. Written acknowledgment that federal laws restrict surveillance of Americans
  2. Acknowledgment of existing Pentagon policies on autonomous weapons
  3. An invitation for Anthropic to join a military AI ethics board

Anthropic's response: these aren't safeguards. Federal laws haven't caught up with AI capabilities - the government can already purchase Americans' movement and browsing data without warrants. Existing Pentagon policies on autonomous weapons are what Hegseth has been methodically dismantling. And an advisory board with no enforcement power is theater.

The real issue is what the Pentagon demanded alongside these acknowledgments: Claude available for "all lawful purposes" without any company-imposed restrictions. It's the same language xAI accepted when it signed its February 23 deal to deploy Grok in classified systems.

The Two Red Lines

Anthropic drew exactly two lines:

Mass domestic surveillance. Amodei's statement argues that AI can "assemble scattered personal data into comprehensive life profiles automatically and at massive scale." He warned that "a powerful AI examining billions of conversations could gauge public sentiment, detect disloyalty pockets, stamp them out before growth." Anthropic supports foreign intelligence applications but won't build tools for bulk surveillance of Americans.

Fully autonomous weapons. Anthropic accepts partially autonomous weapons as legitimate defense tools but contends that frontier AI systems "are simply not reliable enough to power fully autonomous weapons" that remove human decision-making from targeting. Amodei made the constitutional argument explicit: "The constitutional protections in our military structures depend on the idea that there are humans who would, we hope, disobey illegal orders" - something AI-powered autonomous weapons can't do. Anthropic offered to collaborate with the Pentagon on R&D to improve reliability for autonomous systems. The Pentagon declined.

The Escalation Path

What Happens if Anthropic Doesn't Comply

The Pentagon has three escalation options, each more severe than the last.

  • Contract termination: lose the $200M prototype contract. Precedent: standard - companies lose contracts regularly.
  • Supply chain risk designation: every DoD contractor must certify it doesn't use Claude. Precedent: usually reserved for Huawei-class adversaries.
  • Defense Production Act: the government compels Anthropic to provide Claude without restrictions. Precedent: unprecedented for AI - the closest parallel is Apple-FBI (2016), where courts sided with Apple.

The supply chain risk designation is the nuclear option for Anthropic's business. On Wednesday, the Pentagon requested exposure assessments from Boeing and Lockheed Martin regarding their use of Anthropic products - the first formal step toward blacklisting. If enacted, it wouldn't just kill the DoD contract. It'd force every defense contractor, and potentially their subcontractors, to certify they don't use Claude. For a company valued at $380 billion heading toward an IPO, that's an existential threat to its enterprise business.

Amazon has over $8 billion invested in Anthropic. Nvidia signed a $5 billion strategic partnership in November. A supply chain risk label forces both to choose between their Pentagon relationships and their Anthropic investments.

Where the Other Companies Stand

  • Anthropic: refused. Two red lines on surveillance and autonomous weapons.
  • xAI: signed a deal Feb 23. "All lawful purposes" - no restrictions.
  • OpenAI: agreed. "All lawful purposes" - removed its military ban in early 2024.
  • Google: agreed. "All lawful purposes" - reversed its post-Project Maven prohibitions.

Anthropic is the last holdout. Every other frontier AI lab with Pentagon contracts has accepted "all lawful purposes" without company-imposed restrictions.

What To Watch

The Friday deadline is real. The Pentagon has already taken the first concrete step toward supply chain risk designation by contacting Boeing and Lockheed Martin. This isn't posturing. Hegseth has fired military judge advocates general, removed the Pentagon's Civilian Harm Mitigation office, and gutted the operational test and evaluation office. He's shown he follows through.

Congress is being pulled in. The Alliance for Secure AI, Common Cause, and Young Americans for Liberty jointly urged Congress to probe the dispute. Sen. Mark Warner called the Pentagon's approach an attempt to "bully a leading U.S. company." The groups are calling for Hegseth to testify and for the Pentagon to produce documents on classified AI use. Whether Congress actually acts before Friday is another question.

The RSP v3 timing matters. Anthropic released its updated Responsible Scaling Policy - removing the commitment to pause model training if safety couldn't be ensured - on the same day as the Hegseth meeting. The company insists these are unrelated. Critics see it as Anthropic softening its safety stance everywhere except the two Pentagon red lines. METR director Chris Painter warned Anthropic is entering "triage mode."

The legal question is genuinely unsettled. Lawfare's analysis of the Defense Production Act concluded "neither side's argument is a slam dunk." The Act's Title I compulsion power has never been used to force an AI company to remove safety restrictions. If the Pentagon invokes it, Anthropic would likely comply under protest while immediately seeking a temporary restraining order. The Apple-FBI precedent - where courts rejected compelling Apple to write custom software for the government - cuts in Anthropic's favor, but it's not a clean parallel.

Nvidia's Jensen Huang is playing both sides. He said the rift is "not the end of the world" - while sitting on a $5 billion Anthropic partnership. If the supply chain risk designation goes through, Nvidia faces the same Amazon problem: can it maintain its Pentagon business and its Anthropic investment simultaneously?


Anthropic is betting that the political cost of blacklisting a $380 billion American AI company over two specific safeguards - no mass domestic surveillance, no fully autonomous weapons - is higher than the political cost of letting it slide. The statement's closing line reads like an extended hand with a line in the sand: "It is the Department's prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider." Thirty-six hours to go.

About the author

Sophie, AI Infrastructure & Open Source Reporter, is a journalist and former systems engineer who covers AI infrastructure, open-source models, and the developer tooling ecosystem.