OpenAI Staff Revolt Over Pentagon Deal as Users Flee

OpenAI employees are fuming about the company's Pentagon contract, ChatGPT uninstalls surged 295%, and 1-star reviews spiked 775% - while Claude downloads soared and hit #1 on the App Store.

"Many employees 'really respect' Anthropic for standing up to the Pentagon and are frustrated with OpenAI's handling of their own contract."

  • Anonymous OpenAI employee, to CNN

OpenAI's Pentagon deal is triggering a revolt on two fronts. Internally, employees are venting in public forums and private channels about how leadership rushed a military contract that the company's own CEO later called "opportunistic and sloppy." Externally, users are uninstalling ChatGPT at a rate not seen since the app launched.

What they said / What we found

  • Claim: OpenAI's Pentagon deal includes the same red lines Anthropic demanded
  • Reality: Outside observers immediately questioned how the surveillance and weapons restrictions would actually be enforced - the contract language, published Saturday, lacks specifics
  • The numbers: ChatGPT uninstalls up 295%, 1-star reviews up 775%, Claude downloads up 51% - all within 48 hours of the deal announcement
  • The staff: Multiple OpenAI employees told CNN they're frustrated with how the deal was handled, with some saying they respect Anthropic more than their own employer on this issue

The Claim

Sam Altman positioned OpenAI's Pentagon contract as responsible and aligned with Anthropic's own principles. Hours before the Friday deadline, he posted publicly that he agreed with Amodei's red lines on surveillance and autonomous weapons. Then OpenAI announced its own deal, framing it as proof that a company could work with the military while maintaining ethical guardrails.

The timing was the problem. Anthropic had just been designated a supply chain risk for refusing to remove those exact guardrails. OpenAI swooping in with a deal that claimed to include the same protections looked, to many observers, like opportunism dressed up as principle.

Altman acknowledged this on Sunday, telling followers: "In retrospect, it just looked opportunistic and sloppy." He said OpenAI would renegotiate the contract terms to address concerns. That's the admission we covered Monday - but the fallout has continued to escalate since then.

The Evidence

Inside OpenAI

CNN reported Tuesday that OpenAI employees are "fuming" about the Pentagon deal. Multiple current employees spoke anonymously, describing frustration with how the contract was rushed through without enough internal discussion.

The core complaint: a deal of this magnitude - involving classified military networks and the precedent of replacing a competitor who was just blacklisted for having principles - shouldn't have been announced hours after the competitor was punished. Employees felt the sequencing made OpenAI look like it was profiting from Anthropic's stand.

One employee told CNN that many staff "really respect" Anthropic for refusing to back down on surveillance and weapons restrictions. The implication was clear: they wished their own company had done the same. Some OpenAI staff also signed an open letter supporting Anthropic's position.

Not all dissent was anonymous. Research scientist Aidan McLaughlin posted publicly on X Monday morning: "I personally don't think this deal was worth it." The post received nearly 500,000 views. McLaughlin said the internal discussion was "overwhelming" but added he felt "incredibly proud to work somewhere" that allowed open debate. Safety researcher Jasmine Wang called for "independent legal counsel" to analyze the new contract language. By Monday morning, chalk graffiti had appeared on the sidewalks outside OpenAI's San Francisco offices: "Where are your redlines?" and "What are the safeguards?"

This isn't the first time OpenAI's internal culture has clashed with leadership decisions. The safety team exodus earlier this year saw multiple senior researchers leave, several going directly to Anthropic. The Pentagon deal is amplifying an existing tension between OpenAI's original mission and its commercial trajectory.

The Contract Questions

When OpenAI published some of the Pentagon contract terms on Saturday, outside observers immediately flagged gaps. The published terms reference restrictions on autonomous weapons and domestic surveillance, but lack enforcement mechanisms. There's no independent audit provision, no public reporting requirement, and no clear process for what happens if the Pentagon uses the models in ways that violate the stated restrictions.

The Pentagon accepted OpenAI's red lines - the same ones it rejected from Anthropic. That asymmetry raised the obvious question: if the terms are the same, why was one company blacklisted and the other rewarded? The answer, according to Anthropic's internal memo, involves political factors - specifically, the lack of "dictator-style praise" for Trump and the fact that OpenAI co-founder Greg Brockman made a $25 million donation to MAGA Inc.

The User Exodus

  • ChatGPT uninstalls: +295% day-over-day (Saturday, Feb 28) - Sensor Tower / TechCrunch
  • ChatGPT 1-star reviews: +775% day-over-day (Saturday, Feb 28) - app store data
  • ChatGPT 1-star reviews: a further +100% day-over-day (Sunday, Mar 1) - app store data
  • Claude US downloads: +37% day-over-day (Friday, Feb 27) - app analytics
  • Claude US downloads: +51% day-over-day (Saturday, Feb 28) - app analytics
  • Claude App Store ranking: jumped more than 20 ranks to #1 (Saturday, Feb 28) - Apple App Store

The baseline day-over-day change in ChatGPT uninstalls over the prior 30 days was 9%; the 295% spike is roughly 33 times that typical daily fluctuation. TechCrunch reported the numbers first, citing Sensor Tower analytics data.
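For readers who want to check the math, here is a minimal back-of-envelope sketch, assuming both figures are day-over-day percentage changes as reported:

```python
# Back-of-envelope check: size of the uninstall spike relative to the usual daily swing.
baseline_change = 9.0   # typical day-over-day uninstall change over the prior 30 days, in percent
spike_change = 295.0    # reported day-over-day change on Saturday, Feb 28, in percent

ratio = spike_change / baseline_change
print(f"The spike is roughly {ratio:.0f}x the typical daily fluctuation.")
```

This comparison only illustrates scale; it says nothing about absolute uninstall counts, which Sensor Tower did not publish.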

Anthropic's response was measured but pointed. A spokesperson said daily signups have broken the all-time record every day since the ban, free users increased more than 60% since January, and paid subscribers have more than doubled this year. The company did not need to draw the connection explicitly - the Pentagon ban was doing their marketing for them.

What They Left Out

The Renegotiation Question

Altman said OpenAI would renegotiate the contract. Fortune reported that he is in active discussions to amend the terms. But renegotiating a military contract after public announcement is unusual, and it is unclear what leverage OpenAI has to change terms the Pentagon already agreed to. The Pentagon got what it wanted - a frontier AI provider with no enforceable conditions. Agreeing to add conditions now would undermine the entire premise of the Anthropic punishment.

The Structural Problem

The employee frustration and user exodus are symptoms of a deeper issue. OpenAI's rapid commercialization - the record $110 billion funding round, the push to 1.6 million weekly Codex users, the Pentagon contract - is happening faster than the company's internal culture can adapt. The safety team departures, the Congressional lobbying battle with Anthropic, and now the military deal all point in the same direction: OpenAI is becoming a conventional tech giant, and not everyone who works there signed up for that.

What the Numbers Don't Show

A 295% spike in uninstalls is dramatic, but ChatGPT still has hundreds of millions of users. The boycott is loud but may be small relative to the total user base. In the same way, Claude hitting #1 on the App Store reflects a surge in new downloads, not necessarily sustained usage. The question is whether this is a news-cycle protest or a permanent shift in user loyalty.


The Pentagon deal exposed a gap between what OpenAI says it stands for and what it does when there is money on the table. The employees see it. The users see it. Altman sees it - he called it sloppy himself. The question is whether acknowledging the problem is the same as fixing it, and so far, the answer is a renegotiation that hasn't yet produced results.

About the author

Elena, Senior AI Editor & Investigative Journalist, is a technology journalist with over eight years of experience covering artificial intelligence, machine learning, and the startup ecosystem.