AI Labs Are Losing Billions - Here's Who Really Pays
OpenAI burned $2.5B in cash on $4.3B of revenue in the first half of 2025. Anthropic cut its gross margin forecast from 50% to 40%. Here's the compute subsidy math behind every AI subscription, and who's actually paying for it.

In the first six months of 2025, OpenAI took in $4.3 billion in revenue and burned through $2.5 billion in cash - a loss rate of roughly $14 million every day. And OpenAI is the best-selling AI lab in the Western world.
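The daily-burn figure is straightforward arithmetic on the reported half-year numbers (treating H1 as the 181 days from January 1 through June 30 is an assumption):

```python
# Reported OpenAI H1 2025 figures (per The Information, via TechStartups)
cash_burn_h1 = 2.5e9    # dollars of cash burned, Jan-Jun 2025
days_in_h1 = 181        # Jan 1 through Jun 30, 2025 (assumed window)

daily_burn = cash_burn_h1 / days_in_h1
print(f"Daily cash burn: ${daily_burn / 1e6:.1f}M")  # ≈ $13.8M, i.e. "roughly $14 million every day"
```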
TL;DR
- $13.5B+ projected OpenAI operating loss for 2025, rising to ~$74B by 2028 per internal documents leaked to Fortune
- $80B Anthropic's total projected cloud commitment through 2029 against a $9B-$30B ARR run-rate
- $1B per month xAI's reported cash burn rate, with $7.8B spent in the first nine months of 2025
- 70% vs 33% OpenAI's "compute margin" (excluding training) vs its actual GAAP gross margin
- $200 ChatGPT Pro monthly price that CEO Sam Altman confirmed is still losing the company money
The headline number investors keep quoting is revenue growth. That number is real. OpenAI crossed $20 billion in annualized revenue in 2025. Anthropic's annualized run rate hit $30 billion in April 2026, up from $9 billion at the end of December. The problem is that neither figure comes close to covering what the companies are spending.
The Money Picture, Lab by Lab
| Lab | 2025 Revenue | 2025 Loss | Committed Compute | Notes |
|---|---|---|---|---|
| OpenAI | ~$13B (H1: $4.3B) | $13.5B+ operating loss | $300B Oracle + Stargate $500B | ChatGPT Pro confirmed loss-making |
| Anthropic | ~$4.5B | ~$5.2B EBITDA loss | $80B total through 2029 | Margin forecast cut from 50% to 40% |
| xAI | Undisclosed | ~$13B projected | $18B+ Colossus 2 | Roughly $1B/month burn rate |
| Alphabet (Gemini) | Bundled in Cloud P&L | Not broken out | $175-185B 2026 capex guide | Owns TPU stack, lower COGS |
| Meta (Llama) | $0 direct (free model) | N/A | $115-135B 2026 capex | Ad-targeting subsidy, not subscribers |
Sources: TechStartups summary of The Information on OpenAI H1; Fortune on OpenAI's 2028 projection; Techmeme summary of The Information on Anthropic margin cut; PYMNTS on Anthropic's $80B cloud bill; Tom's Hardware on xAI; CNBC on Alphabet; Yahoo Finance on Meta's 2026 capex guide.
OpenAI CEO Sam Altman confirmed on X in January 2025 that ChatGPT Pro at $200 per month is losing the company money because users are consuming far more tokens than he expected.
What the Numbers Say
The Compute Margin Trick
OpenAI has popularized a non-GAAP metric called "compute margin" - revenue minus the cost of running tokens through a trained model. It strips out training costs, research salaries, and data-center capex. By that measure, OpenAI's margin has climbed from roughly 35% in January 2024 to 52% by year-end 2024 and hit around 70% in October 2025, per SaaStr's read of Bloomberg-sourced internal figures.
That number is accurate on its own terms. It is also not what any accountant would call gross margin. Once training amortization and model-maintenance costs are folded back in, an analyst model from FutureSearch puts OpenAI's real GAAP gross margin at about 33%, with free users absorbing most of the gap between the profitable paying tiers and the blended overall result.
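The two margin definitions reconcile with illustrative numbers. The split between inference cost and training amortization below is assumed, chosen only so the outputs land on the reported ~70% compute margin and ~33% GAAP gross margin:

```python
# Illustrative (assumed) cost structure per $100 of revenue
revenue = 100.0
inference_cost = 30.0          # cost of serving tokens through already-trained models
training_amortization = 37.0   # amortized training runs plus model maintenance (assumed)

# "Compute margin": revenue minus inference cost only
compute_margin = (revenue - inference_cost) / revenue

# GAAP-style gross margin: training amortization folded back into COGS
gaap_gross_margin = (revenue - inference_cost - training_amortization) / revenue

print(f"compute margin: {compute_margin:.0%}, GAAP gross margin: {gaap_gross_margin:.0%}")
# compute margin: 70%, GAAP gross margin: 33%
```

The same revenue dollar looks healthy or sickly depending entirely on which costs the metric admits.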
Anthropic travelled the same road from a different starting point. The Information reported in January 2026 that Anthropic cut its 2025 gross margin guidance from about 50% to 40% because inference costs came in roughly 23% higher than budgeted, driven largely by long-context Claude Code workloads. Factor in free Claude.ai users and the blended figure drops closer to 38%.
Which Tiers Actually Lose Money
Not every product tier is a loss. The pattern looks like this:
- Free tiers everywhere: pure acquisition spend, no direct revenue.
- Consumer subscriptions like ChatGPT Plus at $20 or Gemini Advanced at $20: profitable on compute margin, break-even or loss-making on fully-loaded GAAP once training is allocated.
- Premium subscriptions like ChatGPT Pro at $200: Altman publicly stated these lose money.
- Claude Max 20x at $200: community math suggests the heaviest Claude Code users can burn $15,000 or more of equivalent API usage per month on the plan, making it at best near break-even.
- Enterprise API and enterprise subscriptions: every lab says these carry the best margins. Fortune confirms OpenAI sees better unit economics on business sales, and more than 80% of Anthropic's revenue comes from enterprise.
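For the heaviest subscribers, the subsidy ratio is the interesting number. The $15,000 figure is the community estimate cited above; the rest is arithmetic:

```python
plan_price = 200.0         # Claude Max 20x monthly price
api_equivalent = 15_000.0  # community-estimated API value of the heaviest users' monthly usage

subsidy_ratio = api_equivalent / plan_price          # how many dollars of compute per dollar paid
effective_rate = plan_price / api_equivalent         # what fraction of API list price the plan charges

print(f"Heaviest users consume ~{subsidy_ratio:.0f}x the plan price in API-equivalent compute")
print(f"They effectively pay {effective_rate:.1%} of the metered rate")
# ~75x, ~1.3%
```

API list price overstates marginal cost, so the true subsidy is smaller than 75x - but even at a 70% compute margin, the heaviest users cost multiples of what they pay.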
The Training Pit
The cost that dwarfs everything else isn't inference - it's training. OpenAI reportedly spent around $20 billion on training compute in 2025. Anthropic spent roughly $5 billion. xAI purchased 300,000 Nvidia GPUs for Colossus 2 at a reported $18 billion. These figures stay out of compute-margin calculations because training spend is capitalized and amortized rather than expensed against inference revenue, but they're the single biggest reason the all-in P&L is so much worse than the per-token P&L.
The forward commitments are larger still. OpenAI's $300 billion Oracle contract starting 2027 is the largest cloud deal ever disclosed. The Stargate project adds another $500 billion over four years. Anthropic committed more than $100 billion of AWS spend over a decade plus $30 billion of Azure in November 2025. HSBC estimated in late 2025 that OpenAI alone will need to raise an additional $207 billion to cover its plan through 2030.
Training compute, not inference, is the line item that swamps every other cost. OpenAI's projected $74B operating loss in 2028 is mostly next-generation model training plus the datacenters to run it.
Why They Are Willing
No lab is burning cash because it has to. Every major incentive points in this direction.
- Land-grab. Paying users now become the installed base for ads, agents, and enterprise contracts later. Free users are acquisition spend, not waste.
- Data flywheel. Every user interaction feeds the next round of RLHF, evaluations, and product refinement. This data is treated as strategic.
- Distillation defense. Low prices on output tokens let rivals train cheap models by distilling your outputs. Rate limits and tiered pricing slow that.
- Ecosystem lock-in. MCP at Anthropic, the Responses API and Codex at OpenAI, long-context tool-use formats: whoever defines the plumbing wins the switching costs.
- The AGI bet. All three labs publicly believe compute at scale is a probabilistic path to human-expert AI. If they're right, every dollar spent now is a dollar against a first-mover position in the biggest software market in history.
What the Numbers Don't Say
Four things make the "they're all doomed" framing too clean.
Per-token economics improve fast. Epoch AI found that the compute cost to achieve any given capability drops 5 to 10 times per year. A GPT-5.2 run scoring 27% on FrontierMath in December 2025 used roughly 5 million output tokens; an o4-mini run on the same benchmark in April 2025 used 43 million.
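Epoch's 5-to-10x annual decline compounds quickly. A sketch of what a fixed-capability inference bill looks like one and two years out, under those two decay rates:

```python
def projected_cost(cost_today: float, annual_decline_factor: float, years: float) -> float:
    """Cost to deliver the same capability after `years`, if the compute
    cost of a fixed capability falls by `annual_decline_factor` per year."""
    return cost_today / (annual_decline_factor ** years)

cost_today = 1.00  # normalized: $1.00 of inference spend today
for factor in (5, 10):
    after_1yr = projected_cost(cost_today, factor, 1)
    after_2yr = projected_cost(cost_today, factor, 2)
    print(f"{factor}x/yr decline: ${after_1yr:.3f} after 1 year, ${after_2yr:.3f} after 2 years")
# 5x/yr:  $0.200 after 1 year, $0.040 after 2 years
# 10x/yr: $0.100 after 1 year, $0.010 after 2 years
```

At the fast end of Epoch's range, today's loss-making workload costs a penny on the dollar in two years - which is the whole bull case for pricing below cost now.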
Hardware curves compound. Nvidia's GB200 NVL72 is roughly 30 times more efficient than H100 on some inference workloads and about 2x performance-per-dollar, per SemiAnalysis. Add speculative decoding, paged KV caches, and MoE sparsity and the marginal cost per token is falling quickly.
Google plays a different game. Alphabet owns the TPU stack end-to-end via Broadcom. Gemini's inference cost is widely estimated at under 40% of what OpenAI pays Microsoft or Oracle at retail compute prices. That gap doesn't show in segment reporting, but it's a real structural advantage.
Revenue compounding is real. Anthropic went from $1 billion ARR to $30 billion in roughly 15 months. OpenAI projects $280 billion of revenue by 2030. If that rate holds, the gap between revenue and compute closes without any price increase.
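The compounding claim checks out on the stated endpoints. The $1B-to-$30B-in-15-months run is from the article; annualizing it is my arithmetic:

```python
start_arr = 1e9   # Anthropic ARR at the start of the run
end_arr = 30e9    # ARR roughly 15 months later
months = 15

monthly_growth = (end_arr / start_arr) ** (1 / months)  # compound monthly growth factor
annualized_multiple = monthly_growth ** 12              # implied year-over-year multiple

print(f"≈{(monthly_growth - 1) * 100:.0f}% compound monthly growth, ≈{annualized_multiple:.0f}x annualized")
# ≈25% monthly, ≈15x annualized
```

No SaaS business sustains 15x annual growth for long, which is why the "gap closes on its own" argument depends on the deceleration curve, not the current rate.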
The Counter-Story
Meta tells a third story entirely. Its $115-135 billion 2026 capex guide is among the largest in the industry, but Llama ships free and Zuckerberg justifies the spend via ad-targeting lift, not via a $200-a-month subscription. Meta isn't subsidizing inference for subscribers. It's subsidizing the open-source ecosystem that commoditizes its rivals' moats.
Chinese labs tell a fourth. DeepSeek prices R1 at $0.55 per million input tokens and $2.19 per million output - a 20x to 30x discount to OpenAI's flagship reasoning pricing. SemiAnalysis pushed back on the "$6M trained" narrative and estimated DeepSeek has spent over $1.6 billion on infrastructure, including $500 million on Nvidia chips. The gap is not about pricing below marginal cost - it's about not carrying the $500 billion-valuation overhead of a Western lab.
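The 20x-to-30x discount follows from published per-token prices. DeepSeek's are in the article; the OpenAI comparator below assumes o1-class flagship pricing of $15 input / $60 output per million tokens, which the article does not state:

```python
# Dollars per million tokens
deepseek_r1 = {"input": 0.55, "output": 2.19}        # DeepSeek R1 list prices, from the article
openai_flagship = {"input": 15.00, "output": 60.00}  # assumed o1-class reasoning pricing

for kind in ("input", "output"):
    ratio = openai_flagship[kind] / deepseek_r1[kind]
    print(f"{kind}: DeepSeek is {ratio:.0f}x cheaper")
# input: 27x, output: 27x - inside the article's 20x-30x range
```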
Anthropic CEO Dario Amodei told the Dwarkesh podcast in January 2026 that the labs are "near the end of the exponential" - not a scaling wall claim but a claim that AGI-level capability is 1-2 years away.
"We are near the end of the exponential."
- Dario Amodei, Dwarkesh Podcast, January 2026. The implication for Anthropic's investors: the subsidy has a finite runway, and the payoff is on the horizon.
So What?
If you're paying for AI as a user or a business buyer, three things follow from the math.
First, your tokens aren't priced for profit today. They're priced against the AGI bet. Rate hikes are more likely than discounts at the premium end of the market over the next 18 months, especially for long-context reasoning workloads. Anthropic already raised Claude Opus pricing in 2025 once inference ran over budget.
Second, free tiers aren't going away, but they're no longer improving at the same pace as paid ones. Free users remain strategically valuable. The subsidy flowing their way is just shrinking as the labs push to convert them into paying users.
Third, the labs most likely to survive any funding tightening are the least dependent on pure subscription economics: Google (TPU stack, ad revenue), Meta (ad-driven, open weights), and Anthropic (enterprise-heavy with the fastest margin-improvement trajectory). The most exposed is xAI, followed by smaller labs without major cloud partnerships.
None of this predicts collapse. All three headline labs assume continued capital access, and the Series G rounds of 2026 show they still have it. The math just says this: whatever you pay for AI today, someone else is paying more so you can pay it. The subsidy is real. It's just well-camouflaged.
Sources:
- OpenAI H1 2025: $4.3B revenue, $2.5B cash burn - TechStartups
- OpenAI cash burn rate and 2028 forecasts - Fortune
- OpenAI is losing money on ChatGPT Pro - TechCrunch
- Microsoft 10-Q implies $11.5B OpenAI quarterly loss - CNBC
- OpenAI $300B Oracle cloud deal - TechCrunch
- Anthropic's $80B cloud spend through 2029 - PYMNTS
- Anthropic lowers profit margin projection as revenue skyrockets - Techmeme summary
- OpenAI 70% compute margin - SaaStr
- OpenAI GAAP gross margin analysis - FutureSearch
- xAI is losing $1B/month - Tom's Hardware
- Alphabet 2026 capex guide - CNBC
- Meta AI spend and returns - CNBC
- Meta 2026 capex guide - Yahoo Finance
- Stargate Project $500B announcement - CNN
- Epoch AI: how persistent is the inference cost burden
- SemiAnalysis Blackwell perf-TCO analysis
- SemiAnalysis DeepSeek debates
- OpenAI sees better margins on business sales - Fortune
- Anthropic ARR trajectory - SaaStr
