# Pentagon Sends Anthropic a 'Best and Final Offer' - Company Has Until Friday to Comply
The Pentagon sent Anthropic its final terms for unrestricted military AI use Wednesday night, with a 5:01 PM ET Friday deadline that could end in contract cancellation, supply chain blacklisting, or Defense Production Act invocation.

Pentagon officials sent Anthropic their "best and final offer" on Wednesday night for unrestricted military use of Claude, the company's frontier AI model. Anthropic has until 5:01 PM ET on Friday to accept or face consequences that could reshape its entire business.
"Anthropic has until 5:01pm Friday to get on board with the Department of War," a Pentagon official stated.
## TL;DR
- The Pentagon sent Anthropic a "best and final offer" Wednesday night demanding unrestricted military use of Claude
- Deadline is Friday, February 27 at 5:01 PM ET - refusal triggers contract cancellation, supply chain blacklisting, or Defense Production Act invocation
- Anthropic is holding firm on two red lines: no mass surveillance of Americans, no fully autonomous lethal weapons
- xAI's Grok was approved for classified networks on February 23 after accepting the "all lawful use" standard without reservation
- The Pentagon has already contacted Boeing and Lockheed Martin to assess their Claude exposure - a concrete step toward blacklisting
## What the Pentagon Is Demanding
The core demand hasn't changed since this dispute went public in mid-February: Anthropic must amend its terms of service and technical guardrails to allow the U.S. military to use Claude for "all lawful purposes" without any company-imposed restrictions.
Specifically, the Pentagon wants Anthropic to:
- Remove the prohibition on using Claude for mass domestic surveillance of Americans
- Remove the prohibition on using Claude for final targeting decisions in lethal military operations without human involvement
- Match the "all lawful use" standard that xAI, Google, and OpenAI have already accepted
## The Pentagon's Argument
Undersecretary of Defense Emil Michael framed the issue as a sovereignty question. "What we're not going to do is let any one company dictate a new set of policies above and beyond what Congress has passed," he said. Michael raised the scenario of an intercontinental ballistic missile launched at the United States, arguing that Anthropic's guardrails could block the military's response in a moment of crisis.
Defense Secretary Hegseth has been more blunt. At a SpaceX event in January he said: "Department of War AI will not be woke. It will work for us. We're building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge."
## What Anthropic Offered
Anthropic is not refusing all military work. In December 2025 negotiations, the company agreed to allow Claude for missile defense and cyber defense purposes. An Anthropic spokesperson stated: "Every iteration of our proposed contract language would enable our models to support missile defense and similar uses."
The Pentagon rejected this partial concession. It wants unrestricted access, period.
## What's at Stake
| Stakeholder | Impact | Timeline |
|---|---|---|
| Anthropic | $200M contract loss + potential supply chain blacklisting cutting off enterprise business | Friday 5:01 PM ET |
| Pentagon | Loses its only AI provider with classified network access (via Palantir); xAI approved but not yet integrated | Immediate |
| Defense contractors | Boeing and Lockheed already contacted to assess Claude exposure; may be forced to sever Anthropic ties | Weeks |
| Anthropic investors | Amazon and Google both hold defense contracts - blacklisting could force them to distance from Anthropic | Months |
| Other AI companies | Precedent set: either accept "all lawful use" or get locked out of government entirely | Permanent |
## The Blacklist Threat
The supply chain risk designation the Pentagon is threatening is usually reserved for foreign adversaries like Huawei. Applying it to a leading American AI company would be unprecedented. In practice, it would legally bar any federal contractor or agency from doing business with Anthropic - and since most major U.S. corporations hold significant defense contracts, the designation could effectively cut Anthropic off from much of the Western commercial ecosystem.
The Pentagon has already taken concrete steps. On February 25, officials asked Boeing and Lockheed Martin to assess their reliance on Claude. Lockheed Martin confirmed it was contacted about "a potential supply chain risk declaration." The procedural machinery is moving.
On the other side, xAI reached a deal on February 23 to use Grok on classified networks, accepting the "all lawful use" clause without reservation. This ended Anthropic's exclusive classified access and increased the competitive pressure. Google and OpenAI also accepted the standard, though both currently operate only on unclassified networks. Anthropic is the sole holdout among all four Pentagon AI contractors.
## Congressional and Legal Pushback
The deeper question is who decides the rules for military AI. Senator Mark Warner called the situation "deeply disturbing," noting that "most Americans oppose unsupervised autonomous weapon systems and AI-facilitated surveillance." Senator Chris Coons said demanding "complete obedience" to surveil Americans or develop self-firing weapons is a "chilling concept far beyond the bounds" of what the DoD should be doing.
Lawfare's legal analysis found the Defense Production Act's applicability is ambiguous. The DPA can clearly force priority access to existing products, but forcing a company to remove safety guardrails enters contested legal territory. The allocation power has "barely been used since the Korean War." The analysts concluded that "the government doesn't need to win litigation - only change behavior through threat."
## The White House Framing
David Sacks, the White House AI & Crypto Czar, publicly attacked Anthropic as representing "woke AI" and the "doomer industrial complex." He accused the company of pursuing "a sophisticated regulatory capture strategy based on fearmongering" and trying to "backdoor Woke AI and other AI regulations through Blue states like California."
This framing positions any AI safety commitment as a political act rather than an engineering decision - a signal to every AI company that maintaining ethical guardrails may carry political risk under this administration. As we explored in our open-source vs proprietary AI guide, the competitive dynamics between AI companies already create enormous pressure to loosen restrictions. Adding government coercion to that mix changes the calculus completely.
## Behind the Scenes
The dispute traces back to January 3, 2026, and Operation Absolute Resolve - the U.S. special operations raid in Venezuela that captured President Maduro. Claude was used during the operation through Palantir's classified platform. During a routine post-operation check-in, an Anthropic official reportedly made comments about the raid to a Palantir executive that alarmed Palantir, which relayed the exchange to the Pentagon. The episode triggered Anthropic's insistence on formal assurances about how Claude would be used going forward.
Internally, Anthropic is under pressure from multiple directions. On February 9, Mrinank Sharma, head of the Safeguards Research Team, resigned. In his letter he wrote: "Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most."
Then on February 25 - the same day the final offer went out - Anthropic quietly removed its previous commitment to halt development of AI models if they outpaced safety procedures, adopting a "nonbinding safety framework" instead. The company insists this change is unrelated to the Pentagon dispute. The timing invites skepticism.
The company also appointed Chris Liddell, a former Trump administration official, to its board on February 13 - a clear attempt to build political bridges that hasn't, so far, worked.
## What Happens Next
Anthropic CEO Dario Amodei met Hegseth at the Pentagon on Tuesday. Anthropic described it as a "good-faith conversation." Sources familiar with the matter say Anthropic has "no plans to budge" on its two red lines: no mass surveillance of Americans, no fully autonomous lethal weapons.
If Anthropic holds firm past Friday, three escalation paths are available to the Pentagon:
- Contract termination - The $200 million deal dies. Anthropic loses the revenue but survives.
- Supply chain risk designation - The cascading blacklisting that could sever Anthropic from enterprise customers holding defense contracts. This is the existential threat.
- Defense Production Act invocation - Compel Anthropic to provide its technology regardless. Legally untested in this context, but the threat alone carries weight.
The $200 million contract is a small fraction of Anthropic's reported $14 billion in annual revenue. But the supply chain designation would ripple across its entire business. Amazon and Google - Anthropic's two largest investors - both hold major defense contracts. A blacklisting could force them to choose between their Pentagon relationships and their AI investment.
As we covered in our AI safety and alignment guide, the tension between capability and safety has always been the central question in AI development. What's new is that the U.S. government is now explicitly demanding that the answer be "capability, no exceptions."
The clock is running. By Friday evening, either Anthropic crosses its own red lines or the Pentagon begins the process of making an American AI company a pariah in its own country's defense ecosystem. Either outcome sets a precedent that every AI company in the world will have to reckon with.
## Sources
- Pentagon officials sent Anthropic best and final offer - CBS News
- Pentagon takes first step toward blacklisting Anthropic - Axios
- Anthropic offered Pentagon use of AI for missile defense - NBC News
- Hegseth issues ultimatum to 'woke AI' startup Anthropic - Fortune
- Anthropic won't budge as Pentagon escalates - TechCrunch
- Anthropic vs the Pentagon - Al Jazeera
- Palantir partnership at heart of Anthropic-Pentagon rift - Semafor
- Anthropic faces Friday deadline in Defense AI clash - CNBC
- What the DPA can and can't do to Anthropic - Lawfare
- Anthropic ditches core safety promise - CNN
