Google Backs Anthropic With $40B and 5 Gigawatts

Google commits up to $40 billion to Anthropic alongside five gigawatts of cloud compute, in what may be the largest single infrastructure bet in AI history.

Google just committed five gigawatts and forty billion dollars to a company it technically competes with every day.

On April 24, the search giant announced plans to invest up to $40 billion in Anthropic - an AI lab whose Claude models go head-to-head with Google's own Gemini in coding, writing, and enterprise AI. The deal pairs cash with what may be the single largest compute commitment in AI history: five gigawatts of Google Cloud capacity over five years.

This isn't a financial bet. This is a compute arms deal.

TL;DR

  • $10 billion upfront at $380 billion valuation; $30 billion more contingent on performance milestones
  • Five gigawatts of Google Cloud compute committed over five years
  • Expands the April 7 three-way deal with Broadcom that added 3.5 GW of TPU capacity (operational 2027)
  • Anthropic's total annual revenue run rate: $30 billion, up from $9 billion at end of 2025
  • Follows Amazon's $25 billion infrastructure deal four days earlier

The Deal, Disassembled

The Money

Google is writing a $10 billion check today at Anthropic's current valuation of $380 billion - up from the $350 billion at which Anthropic ran its employee share sale in February. The remaining $30 billion is contingent, tied to unspecified "performance targets" that neither company has disclosed publicly.

The relationship isn't new. Google's first stake in Anthropic was a $300 million investment in 2023 for roughly 10%, followed by another $2 billion top-up. Before this announcement, Google held more than $3 billion in Anthropic - already substantial by any normal measure. This round adds $10 billion on day one, and the contingent tranches could quadruple that to $40 billion depending on how the next few years go.

That "performance targets" clause matters for anyone looking at this as a pure valuation story. The headline number is $40 billion. The number Google is actually obligated to deliver today is $10 billion.

The Compute

The cash is almost secondary. What Anthropic needs right now isn't money - it is the physical infrastructure to run a $30 billion ARR business that's growing faster than any single cloud provider can provision.

# Google Cloud - Anthropic Compute Commitment
# Announced: April 24, 2026

commitment_type  : infrastructure + investment
compute_total    : 5,000 MW (5 GW) over 5 years
chip_type        : Google TPUs (next-generation)
operational      : phased build-out starting 2027
scale_option     : "several additional GW available"
us_share         : majority located in United States

# April 7 Broadcom three-way deal (context)
broadcom_deal    : 3.5 GW from Broadcom TPU supply
google_role_apr7 : customer of Broadcom chip supply
activation       : 2027

That 3.5 GW from the April 7 Broadcom deal is already baked in. Today's 5 GW from Google Cloud stacks on top of it. The total compute pipeline Anthropic now has access to from Google alone is 8.5 gigawatts before you count anything from Amazon.

Google's New Albany data center campus in Ohio - part of the infrastructure backing Anthropic's compute expansion. Source: google.com

The Infrastructure Stack

Anthropic now has a three-provider compute architecture taking shape, secured in less than three weeks. The Amazon deal announced April 20 came with its own five gigawatts - on AWS Trainium chips instead of TPUs - plus a $100 billion AWS spend commitment over 10 years.

Provider      | Investment         | Compute | Chip       | Operational
Amazon (AWS)  | $25B + $100B spend | 5 GW    | Trainium   | Building out
Google Cloud  | $40B (up to)       | 5 GW    | TPU        | Phased from 2027
Broadcom      | N/A (chip supply)  | 3.5 GW  | Custom TPU | 2027
Total         | ~$65B committed    | 13.5 GW | Mixed      | 2027+

For comparison, the US electrical grid delivers around 1,000 GW to the entire country. Anthropic's committed compute pipeline - from just three partners - is 13.5 GW of AI accelerator power. That's not a startup's compute budget. That is national-scale infrastructure.
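The scale comparison above is simple arithmetic; a quick back-of-the-envelope sketch using the article's figures (the ~1,000 GW US grid number is the article's own approximation):

```python
# Anthropic's committed compute pipeline (GW), per the figures in this article
pipeline_gw = {
    "Amazon (AWS, Trainium)": 5.0,
    "Google Cloud (TPU)": 5.0,
    "Broadcom (custom TPU)": 3.5,
}

total_gw = sum(pipeline_gw.values())  # 13.5 GW across three partners
us_grid_gw = 1_000                    # rough US grid delivery, per the article

share_of_grid = total_gw / us_grid_gw
print(f"Total pipeline: {total_gw} GW (~{share_of_grid:.1%} of US grid capacity)")
# → Total pipeline: 13.5 GW (~1.4% of US grid capacity)
```

Note that 8.5 GW of that total comes from the Google side alone (5 GW Google Cloud plus 3.5 GW via the Broadcom deal), which is the stacking described earlier in this section.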

Modern data center infrastructure comparable to what Google Cloud is committing to Anthropic's AI workloads. Source: pexels.com

What Google Gets

The obvious question: why would Google pour $40 billion into a direct competitor?

Strategic Insurance

Google can't afford to let Anthropic fall into someone else's hands or hit a capital wall. A Microsoft-style acquisition of Anthropic - where a rival controls Claude and its distribution - would damage Google in enterprise and developer markets where Claude is already the dominant model for coding tasks. Google acknowledged internal concern about its "lower position in the market for AI coding" relative to Anthropic. This deal is partly buying peace of mind.

Cloud Revenue Recovery

Anthropic's revenue is now $30 billion annually. That spend has to go somewhere - and a significant portion flows right back to cloud providers as infrastructure costs. If Anthropic runs mainly on Google Cloud, Google recovers revenue on the back end. The compute commitment is not charity - it's a customer acquisition play at scale.

The Mythos Angle

Per earlier reporting, Google has rare access to Anthropic's unreleased Mythos model - described as too capable for wide release in its current form. That level of access to frontier research suggests the relationship goes deeper than a financial stake. Whether Mythos access was formalized in this deal isn't confirmed publicly.

Hedging Gemini Risk

Google's own Gemini models are strong performers in benchmarks, but Claude consistently outperforms them in developer surveys and enterprise evaluations for writing and coding work. Backing Anthropic is an implicit acknowledgment that having multiple strong AI labs in its corner - rather than betting everything on Gemini - is the safer structural position.

Where It Falls Short

The $30B Is Conditional

The headline is $40 billion. The obligation is $10 billion. Nobody has explained what "performance targets" means in practice. If Anthropic hits a wall in model development, those targets may prove harder to reach than they look today. The conditional structure is standard venture mechanics, but at this scale it deserves more scrutiny than it has received.

The Valuation Math

At $380 billion and $30 billion ARR, Anthropic is trading at roughly 12.7x revenue. That's an extraordinary multiple for a company that still depends entirely on third-party compute and has no proprietary chip fabrication. For context, NVIDIA - the company that actually builds the hardware making all of this possible - trades at lower revenue multiples in some analyst models. The valuation assumes sustained hypergrowth and expanding margins, neither of which is guaranteed.
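The multiple quoted above is straightforward to verify from the two figures in this article:

```python
valuation_usd_b = 380  # announced valuation, $B
arr_usd_b = 30         # annual revenue run rate, $B

# Revenue multiple = valuation / ARR
revenue_multiple = valuation_usd_b / arr_usd_b
print(f"{revenue_multiple:.1f}x revenue")  # → 12.7x revenue
```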

No Model Convergence

Google and Anthropic remain competitors. The investment doesn't change the fact that Gemini and Claude go to the same enterprise procurement meetings every day. There is no licensing arrangement, no model merger, no coordinated product roadmap. Two strong models and two strong sales teams will keep competing - Google is just also now Anthropic's landlord and biggest backer simultaneously.

Regulatory Surface

The Microsoft-OpenAI partnership attracted significant antitrust scrutiny in the UK, EU, and US. A $40 billion Google investment in the second-most-powerful AI lab in the world will face the same review. Nothing has been blocked yet, but the overlap between Google's cloud market power and Anthropic's AI distribution creates the kind of competitive concentration that regulators have shown interest in checking.


Sophie's read on this: the money is almost beside the point. What Google is actually buying is a seat at the table during the infrastructure buildout that will determine who runs AI workloads at scale for the next decade. The compute commitment - five gigawatts, phased over five years - is the real deliverable. Cash can be raised. Gigawatts can't be conjured overnight. Both Google and Amazon are now structurally aligned with Anthropic's success in a way that's difficult to unwind. That is a different kind of lock-in from anything we've seen in enterprise software - and it runs in both directions.

Sophie Zhang
AI Infrastructure & Open Source Reporter

Sophie is a journalist and former systems engineer who covers AI infrastructure, open-source models, and the developer tooling ecosystem.