Altman Calls Pentagon Deal 'Sloppy' After 1.5M Boycott

OpenAI's CEO admits the Pentagon deal was rushed and amends it with new surveillance protections - but legal experts say the fixes don't close the real loopholes.

"We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy." - Sam Altman, OpenAI CEO, March 3, 2026

What they said / What we found

  • What OpenAI said: The amended deal now explicitly prohibits domestic surveillance of U.S. persons, references the Fourth Amendment, and blocks the use of commercially bought personal data for tracking
  • What we found: The core "all lawful purposes" clause remains unchanged. Legal experts say the new language is "window dressing" that restates existing law without creating new enforceable restrictions
  • What's missing: The full contract is still not public. The amended text does not address Title 10 military operations that blur the line with intelligence work, and OpenAI is still participating in a Pentagon drone voice-control competition

The Claim

On Monday morning, Sam Altman posted a series of messages on X acknowledging that OpenAI's Friday-night Pentagon deal was poorly timed and poorly communicated. He said the company "shouldn't have rushed" the announcement and was now working with the Department of Defense to "make some additions in our agreement to make our principles very clear."

Hours later, Axios broke the details. The amended contract now includes two new clauses:

"Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals."

And a second provision:

"The Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information."

Altman also stated that the NSA "will not be using GPT models." Pentagon spokesman Sean Parnell echoed that "the Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal)."

The message: we heard you, we fixed it, move on.

The scale of the backlash

The fix didn't come unprompted. Between Friday and Monday, OpenAI faced the most concentrated consumer revolt in AI history:

  • ChatGPT uninstalls (day-over-day surge): +295% (Appfigures)
  • ChatGPT 1-star reviews on Saturday: +775% (App Store data)
  • QuitGPT boycott participants: 1.5M+ claimed (quitgpt.org)
  • Claude daily downloads: surpassed ChatGPT's for the first time (Appfigures)
  • Claude App Store ranking: #1 in the U.S., up from ~#131 in January (Apple App Store)
  • OpenAI employees who signed the pre-deal letter: 96 (internal open letter)
  • Google employees who signed the solidarity letter: 300+ (TechCrunch)

The QuitGPT movement, which started in early February over concerns about OpenAI's political ties - including a $25 million donation to MAGA Inc. by company president Greg Brockman - exploded after the Pentagon announcement. Celebrity endorsements from Mark Ruffalo and Katy Perry pushed it into mainstream visibility.

[Image: a smartphone home screen showing app icons] ChatGPT uninstalls surged 295% day-over-day on Saturday, February 28, dwarfing the app's typical 9% fluctuation rate.

The Evidence

What the amendments actually say

The new language references the Fourth Amendment, the National Security Act of 1947, and FISA. On the surface, this looks like solid constitutional grounding. But here's the problem: every law cited already prohibits what the clause claims to prevent.

Samir Jain of the Center for Democracy & Technology identified the central gap: "Under U.S. law, it's lawful for government authorities to buy up commercially available information from data brokers" and use AI to analyze it at scale. The amended language prohibits "deliberate tracking" - but bulk data analysis doesn't look like deliberate tracking to an AI classifier. An analyst can upload massive spreadsheets of legally purchased data and ask GPT models to run all sorts of analyses without triggering any safety filter.

As we covered in our guide to AI safety and alignment, the gap between stated safety principles and their technical enforcement is one of the oldest problems in the field.

What "all lawful purposes" still means

The core clause - permitting use "for all lawful purposes" - wasn't amended. OpenAI researcher Leo Gao, who signed the internal letter opposing the deal, described the published contract as "all lawful use" followed by "window dressing" that isn't truly operative.

Former Pentagon official Brad Carson told Platformer that Gao's interpretation "appears correct."

This matters because the U.S. government has historically expanded definitions of "technically legal" to encompass sweeping surveillance programs. As a source told The Verge: "Every aspect of it boils down to: if it's technically legal, then the US military can use OpenAI's technology to carry it out."

The Title 10 blind spot

OpenAI's contract reportedly excludes Title 50 intelligence activities - covert operations run by the CIA and NSA. But experts note the boundary between Title 10 (military operations) and Title 50 work is "increasingly blurry." The Defense Intelligence Agency operates under Title 10 and could still use AI on commercial or unclassified datasets without violating the stated restrictions.

Legal expert Jessica Tillipman identified a second structural problem: if a safety classifier blocks a use the Pentagon wants, whose right prevails - OpenAI's claimed "full discretion" or the military's "all lawful use" provision? The contract language governing this relationship remains unpublished.

The drone question nobody answered

Altman's mea culpa didn't address a detail that Platformer reporter Casey Newton surfaced: OpenAI is actively participating in a Pentagon competition to develop voice-control technology for drone swarms.

Researcher Sarah Shoker noted the definitional problem: building "voice-to-digital tools in a kill-chain" may or may not constitute "helping build a weapon" depending on interpretation. OpenAI's stated red line against autonomous weapons doesn't clearly cover AI components used upstream in targeting systems.

[Image: a drone hovering in flight against a dark background] OpenAI is actively participating in a Pentagon competition to develop voice-control technology for drone swarms - a detail Altman's apology didn't address.

Claim vs. reality

  • Claim: "Deal now prohibits domestic surveillance." Reality: the new language restates existing constitutional law without adding enforceable new restrictions.
  • Claim: "NSA will not use GPT models." Reality: a Title 50 exclusion exists, but Title 10 military intelligence agencies are not excluded.
  • Claim: "We added Fourth Amendment protections." Reality: the Fourth Amendment already applies to government action; citing it adds no new legal constraint.
  • Claim: "Technical controls are more reliable than contract clauses." Reality: OpenAI hasn't disclosed what these controls are or how they'd detect surveillance-adjacent queries.
  • Claim: "We share Anthropic's red lines." Reality: Anthropic demanded binding contractual restrictions; OpenAI accepted "all lawful use" with advisory safeguards.

What They Left Out

The most revealing statement came not from the amendments but from Altman's defense of the deal's structure. "I think you should be terrified of a private company deciding on what is and isn't ethical in the most important areas," he wrote on X - a direct shot at Anthropic's approach of embedding hard safety boundaries into its contracts.

This framing inverts the narrative. Anthropic was designated a supply chain risk for insisting on contractual limits. OpenAI is now being praised for deferring to "democratic processes" - but the democratic process it defers to is a Pentagon procurement system operating behind classification walls.

The employee letter that 96 OpenAI staff signed before the deal was announced asked leadership to "continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight." The amended deal's answer to those employees is, effectively: trust the law that already exists.

In parallel, over 300 Google employees signed a solidarity letter backing Anthropic's position, and OpenAI research scientist Aidan McLaughlin posted publicly: "I personally don't think this deal was worth it," noting "overwhelming" internal discussion.

The consumer migration to Claude that began last weekend shows no sign of reversing. An in-person protest was planned at OpenAI's San Francisco headquarters for today, March 3. Anthropic, for its part, has filed suit challenging its supply chain risk designation.

The Bottom Line

Altman called his own deal sloppy. The amendments he offered cite laws that were already on the books. The core "all lawful purposes" clause remains. And the full contract is still classified. The backlash forced OpenAI to the table - but the question is whether what they put on the table actually changes anything, or whether it's a PR patch on a structural problem that 1.5 million people saw clearly before the CEO did.
