News

Pentagon Threatens to Blacklist Anthropic Over Military AI Guardrails

Defense Secretary Hegseth is reportedly close to designating Anthropic a 'supply chain risk' after the company refused to allow Claude to be used for mass surveillance and autonomous weapons. A $200 million contract hangs in the balance.

A high-stakes confrontation between the U.S. Department of Defense and AI company Anthropic has erupted into public view. Defense Secretary Pete Hegseth is reportedly close to cutting all business ties with Anthropic and designating the company a "supply chain risk" - a classification normally reserved for foreign adversaries like Huawei - because Anthropic refuses to allow its Claude AI model to be used for mass surveillance of Americans or fully autonomous weapons systems.

The dispute puts a $200 million Pentagon contract at risk and raises a question that will define the AI industry for years: can an AI company maintain ethical red lines when the most powerful military on earth demands compliance?

The Venezuela Catalyst

The crisis traces back to the U.S. military's capture of Venezuelan President Nicolás Maduro in January. Reports from the Wall Street Journal and Axios revealed that Claude was used during the active operation, deployed through Anthropic's partnership with defense contractor Palantir Technologies. The model reportedly assisted with targeting, helping coordinate strikes on multiple sites in Caracas.

When Anthropic learned its model had been used in the raid, the company asked the Pentagon directly whether Claude was involved. That inquiry, according to a senior administration official speaking to Axios, "caused real concerns across the Department of War." The implication was clear: Anthropic was not prepared to accept its role as a defense tool without asking questions.

The "All Lawful Purposes" Demand

The confrontation goes beyond Venezuela. For months, the Pentagon has been pushing four leading AI labs - Anthropic, OpenAI, Google, and xAI - to agree that their models can be used for "all lawful purposes," including the most sensitive areas of weapons development, intelligence collection, and battlefield operations.

Three of the four companies have reportedly agreed. OpenAI, Google, and xAI have accepted the Pentagon's terms for use on unclassified systems, though none of their models have yet been deployed for classified work. A senior administration official told Axios the Pentagon is confident those three will eventually agree to the full "all lawful purposes" standard.

Anthropic is the holdout. The company insists on two non-negotiable red lines: no mass surveillance of Americans, and no fully autonomous weapons - systems that select and engage targets without human involvement. Anthropic's position is that constitutional protections against warrantless surveillance, and the ethical case for keeping humans in the loop on lethal decisions, are too important to waive regardless of the contract's value.

Supply Chain Risk: A Nuclear Option

The Pentagon's response has been to escalate. Hegseth is considering designating Anthropic a "supply chain risk," a classification that carries consequences far beyond losing a single government contract. Under such a designation, every company that does business with the U.S. military would need to certify that it does not use Claude in its own workflows.

That is a serious threat. Anthropic has said that eight of the top ten U.S. companies by revenue use Claude. A supply chain risk designation could force enterprise customers to choose between their Pentagon contracts and their Anthropic subscriptions.

A senior administration official put it bluntly: Anthropic will "pay a price for forcing our hand like this."

The Broader Stakes

The $200 million contract is itself only a small fraction of Anthropic's reported revenue. But the dispute is about something larger than money.

Anthropic CEO Dario Amodei has been vocal about the risks of AI-enabled authoritarianism. In a January 2026 essay titled "The Adolescence of Technology," he warned that autonomous weapons and mass surveillance could make a "mockery" of First and Fourth Amendment protections. The essay argued that AI capabilities are advancing faster than the legal and ethical frameworks meant to govern them.

The Pentagon's counter-argument is practical: restrictions create unworkable "gray areas" in battlefield conditions. When soldiers are under fire, they cannot pause to consult an acceptable use policy.

What Happens Next

The situation remains fluid. Anthropic has not publicly commented on whether it would modify its terms to preserve the Pentagon relationship. The company's board, which includes former government officials, is reportedly divided on how far to push back.

For the broader AI industry, the precedent matters enormously. If the U.S. government can force compliance by threatening commercial consequences, it sends a clear signal: ethical guardrails are optional luxuries, revocable when they become inconvenient. If Anthropic holds its ground, it establishes that AI companies can set boundaries even with the world's most powerful customer.

Either outcome will shape how every AI company negotiates with governments for years to come.


About the author

Elena, Senior AI Editor & Investigative Journalist

Elena is a technology journalist with over eight years of experience covering artificial intelligence, machine learning, and the startup ecosystem.