Pentagon Clears 8 AI Firms for Classified Networks

The Pentagon signed AI agreements with eight tech companies for its most classified military networks, pointedly excluding Anthropic even as courts battle over its blacklist status.

On Friday, the U.S. Department of Defense announced formal agreements with eight technology companies to deploy artificial intelligence systems on its most restricted military networks - those that handle data classified secret and above. The list reads like a roll call of American tech power: SpaceX, OpenAI, Google, Nvidia, Microsoft, Amazon Web Services, the Nvidia-backed startup Reflection AI, and Oracle. One name is conspicuously absent: Anthropic.

"These agreements accelerate the transformation toward establishing the United States military as an AI-first fighting force and will strengthen our warfighters' ability to maintain decision superiority across all domains of warfare." - Pentagon press release, May 1, 2026

TL;DR

  • Pentagon signed AI deployment agreements with eight companies for Impact Level 6 and IL7 classified networks
  • Companies include OpenAI, Google, Microsoft, Nvidia, AWS, SpaceX, Oracle, and Reflection AI
  • Anthropic remains officially blacklisted despite a district court injunction that an appeals court later overturned in April
  • Pentagon CTO Emil Michael said the NSA's separate interest in Anthropic's Mythos model is "a separate national security moment"
  • No contract values were disclosed; 1.3 million DOD personnel already use GenAI.mil for unclassified work

The Scope of the Agreements

The agreements cover the military's Impact Level 6 and Impact Level 7 network environments - the tiers that handle secret and top-secret data respectively. IL7, the higher tier, requires physical security controls and strict access auditing, and is normally reserved for the most sensitive operational intelligence. Getting AI onto those networks is not a routine procurement exercise; it requires extended security vetting and operational trust that most vendors never achieve.

Pentagon CTO Emil Michael, a former Uber executive appointed in May 2025, framed the deal explicitly around vendor diversification. "It's irresponsible to be reliant on any one partner," he told reporters. The language echoes the Pentagon's stated goal of building "architecture that prevents AI vendor lock and ensures long-term flexibility for the Joint Force."

The Pentagon in Arlington, Virginia. More than 1.3 million DoD personnel already access AI tools through the unclassified GenAI.mil platform. Source: commons.wikimedia.org

What the AI Will Actually Do

The Pentagon's stated use cases are narrow: streamline data synthesis, elevate situational understanding, and augment warfighter decision-making. Translation: intelligence analysts using LLMs to process surveillance feeds faster, officers getting AI-assisted briefings, and logistics systems that summarize operational data in real time.

Over 1.3 million DOD personnel already use GenAI.mil - a secure enterprise platform built on large language models - for unclassified tasks like document drafting, research summaries, and data analysis. The new agreements extend that AI-first posture into the classified domain, where the stakes and the secrecy requirements are categorically higher.

Google's Gemini 3.1 Pro was added to GenAI.mil in late April, giving it a head start on the classified rollout. The other approved vendors haven't yet disclosed which specific models will run on IL6/IL7 systems, or when.

The Notable Newcomer: Reflection AI

Seven of the eight companies on the list are household names in enterprise technology. The eighth - Reflection AI - is not. Founded in 2024 by former Google DeepMind researchers Misha Laskin and Ioannis Antonoglou, the New York-based startup is Nvidia-backed and valued at roughly $25 billion in recent fundraising discussions, despite not yet having shipped a publicly available product.

Reflection is positioning itself as a Western open-source counterweight to DeepSeek, building frontier models whose weights will be released publicly while keeping training pipelines proprietary. For the Pentagon, its appeal is probably straightforward: a domestic open-weight model that can be run entirely air-gapped on sovereign government hardware, with no dependency on a foreign API.

| Company | Prior DoD Relationship | Classification Level |
| --- | --- | --- |
| OpenAI | Prior agreements (since 2025) | IL6 + IL7 |
| Google | GenAI.mil (Gemini 3.1 Pro, April 2026) | IL6 + IL7 |
| Microsoft | Azure Government Cloud contracts | IL6 + IL7 |
| AWS | GovCloud, NSA, CIA infrastructure | IL6 + IL7 |
| Nvidia | Hardware supplier | IL6 + IL7 |
| SpaceX | Prior agreements (since 2025) | IL6 + IL7 |
| Reflection AI | None (new entrant) | IL6 + IL7 |
| Oracle | Oracle Cloud Government contracts | IL6 + IL7 |

The Anthropic Exclusion

The company that defined much of the Pentagon's AI ambitions over the past year is not on the list. Anthropic - whose Claude models were used extensively across federal agencies until Secretary Hegseth's dramatic February designation - remains officially classified as a supply chain risk.

The legal fight has been exhausting. Anthropic sued the Trump administration in March after the Pentagon blacklisted it for refusing to drop guardrails against autonomous weapons and domestic mass surveillance. A district court judge granted Anthropic a preliminary injunction in late March, calling the Pentagon's actions "Orwellian" in a 43-page ruling. But the appeals court sided with the government in April, reinstating the blacklist while the case continues.

Server racks in a data center. Impact Level 6 and IL7 environments require physical security controls and strict access auditing beyond standard government cloud infrastructure. Source: commons.wikimedia.org

The Mythos Paradox

There's an awkward wrinkle. The NSA is using Anthropic's Mythos Preview - an unreleased model the company developed specifically for cybersecurity applications - despite the DOD blacklist. Mythos can identify and patch software vulnerabilities at scale, a capability that intelligence agencies have apparently decided they can't walk away from regardless of procurement policy.

Michael's response on Friday was careful: "The Mythos issue that's being dealt with government-wide, not just at the Department of War, is a separate national security moment where we have to make sure that our networks are hardened up, because that model has capabilities that are particular to finding cyber vulnerabilities and patching them."

In other words: Anthropic is a supply chain risk, except when it isn't. The government wants the company's most advanced model for national security purposes while simultaneously trying to cut it off commercially. That contradiction will be difficult to sustain in court.
It is also worth noting that Emil Michael sold his stake in xAI for $24 million shortly after landing his Pentagon role - a potential conflict of interest that critics have flagged, given his department's simultaneous efforts to partner with Elon Musk's companies, including SpaceX.

What Happens Next

The near-term question is deployment speed. The Pentagon agreements do not include timelines, and IL7 environments notoriously require months of security certification work before any new software can operate on them. The symbolic value of Friday's announcement likely outpaces the operational reality by a considerable distance.

On the Anthropic front, the litigation continues. The company is pursuing both the supply-chain-risk designation challenge and a separate track around the constitutional dimensions of the government's retaliation theory. The White House was reportedly drafting plans earlier this spring to permit limited federal Anthropic use, which suggests some officials believe the blacklist is politically unsustainable even if legally defensible.

The broader pattern here is one the technology industry knows well. When a government agency decides it needs a capability, procurement rules tend to bend eventually. The question is whether Anthropic's legal strategy can force that outcome faster than the Pentagon's eight new partners can build equivalent alternatives.


Daniel Okafor
AI Industry & Policy Reporter

Daniel is a tech reporter who covers the business side of artificial intelligence - funding rounds, corporate strategy, regulatory battles, and the power dynamics between the labs racing to build frontier models.