Uber Burned Its Entire 2026 AI Budget by April
Uber's CTO admits the company has exhausted its full-year AI spending allocation in four months, driven by runaway Claude Code adoption across 95% of engineers.

Uber's chief technology officer has conceded that the company's 2026 AI tooling budget is already gone. Four months into the fiscal year, the ride-hailing giant has blown past what it had set aside for Anthropic's Claude Code and other agentic coding tools, and is now rebuilding the spend plan from scratch.
TL;DR
- Uber CTO Praveen Neppalli Naga says the full 2026 AI budget was exhausted by April, roughly four months into the fiscal year.
- Claude Code is the dominant internal tool, with adoption surging since late 2025 while Cursor has plateaued.
- Monthly API cost per engineer ranges from $500 to $2,000; 95% of Uber engineers now use AI tools monthly.
- 11% of live backend code updates and up to 70% of committed code now come from AI.
- Total 2026 R&D spend sits at $3.4 billion, a 9% increase, with further growth now flagged.
The Reckoning in Numbers
The math is unflattering for anyone who priced agentic coding as a tidy SaaS line item. The figures Uber has disclosed so far are below.
| Metric | Figure | Notes |
|---|---|---|
| 2026 R&D spend to date | $3.4B | 9% year-on-year increase |
| Time to budget exhaustion | 4 months | Plan was 12 |
| Engineers using AI tools monthly | 95% | Mostly Claude Code + Cursor |
| Monthly API spend per engineer | $500 - $2,000 | Token-based, not per-seat |
| AI-authored backend code updates | 11% | Live production commits |
| Committed code from AI | Up to 70% | Inside IDE-assisted workflows |
| Dominant tool | Claude Code | Cursor plateaued, Codex in testing |
Sources: Uber disclosures, Reuters, The Information, Benzinga.
"I'm back to the drawing board, because the budget I thought I would need is blown away already."
- Praveen Neppalli Naga, CTO, Uber
Where the Money Went
The Adoption Curve Outran the Forecast
Uber rolled Claude Code out to engineers in late 2025 and stood up internal leaderboards ranking developers by usage. By February 2026, Claude Code usage had nearly doubled against the rollout baseline. By April, 84% of Uber developers were classified as agentic-coding users in internal telemetry. Cursor usage, by contrast, plateaued in the same window.
That's a rare kind of adoption for an enterprise tool. It's also the mechanism by which the budget disappeared: Uber didn't buy licenses, it bought tokens.
Token Pricing Does Not Scale Linearly
Claude Code bills on tokens consumed, not per-seat. An engineer running a single agent against a moderate repo is a $500-a-month problem. The same engineer running multiple parallel agents, long-context refactors, and test-generation pipelines rolls into the $2,000-a-month bracket. When 95% of an engineering org is doing the latter pattern, the aggregate bill curves upward faster than headcount.
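A back-of-envelope model makes the gap between per-seat and token billing concrete. The per-engineer range ($500 to $2,000 a month) and the 95% adoption rate come from the article; the headcount of 5,000 engineers and the $40/month per-seat price are hypothetical assumptions chosen only for illustration, not Uber or Anthropic figures.

```python
# Illustrative cost model: usage-based (token) billing vs. a flat per-seat plan.
# Figures marked "assumption" are NOT from Uber's disclosures.

def annual_token_spend(engineers, adoption_rate, monthly_low, monthly_high):
    """Annual spend range when billing tracks tokens consumed per engineer."""
    active = int(engineers * adoption_rate)
    return active * monthly_low * 12, active * monthly_high * 12

def annual_seat_spend(engineers, adoption_rate, seat_price_monthly):
    """Annual spend under a flat per-seat subscription."""
    active = int(engineers * adoption_rate)
    return active * seat_price_monthly * 12

engineers = 5_000                       # assumption: hypothetical org size
low, high = annual_token_spend(engineers, 0.95, 500, 2_000)   # reported range
seats = annual_seat_spend(engineers, 0.95, 40)                # assumption: $40/seat

print(f"token billing:  ${low:,} - ${high:,} / year")
print(f"per-seat plan:  ${seats:,} / year")
```

Under these assumptions the token bill lands between roughly $28.5M and $114M a year, against about $2.3M for the flat plan: the same tool, priced two ways, differs by one to two orders of magnitude. That spread is why a budget built on per-seat assumptions evaporates when usage-based consumption takes off.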
That cost profile is exactly what per-seat pricing pages for AI coding CLIs don't surface. Most vendors advertise a flat tier; the bill shows up through a different door.
The Productivity Story Is Real
This isn't a story of waste. Eleven percent of Uber's live backend code updates are now AI-authored, and inside IDE-assisted workflows the figure climbs toward 70%. Naga has publicly floated "agent engineers" - autonomous systems handling coding, testing, and deployment with human supervision - as the next step. Engineers found the tools so valuable that capping usage would have felt like a productivity tax. Leadership made the call to keep the taps open and eat the overrun.
Counter-Argument: The Spend Is the Signal
The bull case is simple. Uber spent $3.4 billion on R&D in 2026, up 9% year-on-year. If AI coding tools have shifted even a mid-single-digit percentage of that spend from human engineering hours to compute, the return is enormous on a labor-cost basis. A senior engineer in San Francisco costs the company meaningfully more per year than $2,000 a month in Claude tokens. An over-budget line item that raises throughput on the most expensive resource in the building isn't actually a budget failure; it's misclassified capex.
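The labor-cost comparison is easy to check. The $2,000/month token figure is the article's reported upper bound; the fully loaded cost of a senior San Francisco engineer is an assumption for illustration, not a disclosed Uber number.

```python
# How the top-end token bill compares to one engineer-year of labor cost.
engineer_cost_yearly = 450_000   # assumption: fully loaded senior SF engineer
token_cost_yearly = 2_000 * 12   # article's upper bound: $24,000/year

ratio = token_cost_yearly / engineer_cost_yearly
print(f"tokens cost {ratio:.1%} of one engineer-year")
```

Even at the top of the reported range, tokens run to about 5% of one engineer-year, so a modest throughput gain per engineer covers the overrun many times over.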
There's also a strategic read. Uber is one of the largest non-AI-native enterprises publicly quantifying its agentic coding adoption. The company's disclosure effectively becomes a reference point for every CFO asking what "real" AI productivity looks like. That's worth something on its own.
What the Market Is Missing
The narrative has settled on "Uber blew its budget." The more uncomfortable read is that Uber was the first large public company willing to say out loud what every other CFO is discovering quietly: agentic coding destroys the planning assumptions that enterprises used to build 2026 budgets in late 2025. Per-seat was the wrong unit. Token-consumption workloads with parallel agents and multi-minute reasoning chains are priced for experiments, not for 95%-adopted daily use. Anthropic, and by extension Claude Opus 4.7 and its tooling stack, will either absorb this through pricing redesign or lose ground to cheaper substitutes that emerge as open-source agentic runners mature.
Uber's overrun is the first public data point in a much longer argument about what enterprise AI actually costs when it works. The companies that file similar admissions in the next two quarters will tell us whether this was an outlier or a template.
Sources:
- Uber's Anthropic AI Push Hits a Wall - Benzinga
- Uber Spends Full 2026 AI Budget in 4 Months - Briefs.co
- Uber CTO Shows How Claude Code Can Blow Up AI Budgets - The Information
- Uber Burned Its Entire 2026 AI Budget in Four Months - Humai
- Uber Blows Through AI Budget as Claude Code Adoption Surges - The Agent Times
