Meta Signs Multibillion-Dollar Deal to Rent Google TPUs - Completing a Three-Way Chip Strategy
Meta has agreed to rent Google's Ironwood TPUs through Google Cloud to train next-generation AI models, adding a third major chip supplier alongside Nvidia and AMD within the span of ten days.

TL;DR
- Meta has signed a multibillion-dollar, multiyear deal to rent Google's Ironwood TPUs through Google Cloud
- The chips will be used to train and run next-generation large language models, not just inference
- Meta is also in talks to buy TPUs outright for its own data centers as early as 2027
- Google is forming a joint venture to lease TPUs to other AI customers, directly challenging Nvidia
- This is Meta's third major chip deal in ten days - after Nvidia (tens of billions) and AMD (up to $100 billion)
- Meta's total 2026 AI capex is projected at $115-135 billion
In the span of ten days, Meta has committed to buying or renting chips from every major AI hardware supplier on the planet. The latest move, reported by The Information on Wednesday, is a multibillion-dollar deal to rent Google's tensor processing units through Google Cloud. Meta will use the TPUs to train and run its next-generation large language models - a workload that has historically belonged almost exclusively to Nvidia GPUs.
The deal is structured as a cloud rental arrangement, with Meta accessing Google's most advanced Ironwood TPUs (the seventh-generation chip launched in November) through Google Cloud infrastructure. Separately, Meta is in discussions to purchase TPUs outright for installation in its own data centers, though no agreement on that front has been reached.
Neither company has officially confirmed the deal's exact value or duration. But the timing and structure tell a clear story about where Meta's infrastructure strategy is heading.
The Financial Picture
To appreciate what this deal means, look at what Meta has committed to in February alone:
| Deal | Announced | Estimated Value | Scope |
|---|---|---|---|
| Nvidia partnership | Feb 17 | Tens of billions | Millions of Blackwell/Rubin GPUs, Grace CPUs, Spectrum-X networking |
| AMD partnership | Feb 24 | Up to $100 billion | 6 GW of Instinct GPUs, 160M share warrant (~10% stake) |
| Google TPU rental | Feb 26 | Multibillion (undisclosed) | Ironwood TPUs via Google Cloud, potential direct purchase in 2027 |
Meta told investors in January it expects to spend $115-135 billion on AI infrastructure in 2026 - nearly double the $72 billion it spent last year. The Nvidia and AMD deals alone likely account for a major portion of that figure. The Google arrangement adds another layer of compute capacity on top.
This isn't a company shopping around. This is a company buying everything available.
Who Benefits
Meta gets three things it couldn't get from any single supplier. First, supply security. Nvidia can't manufacture enough chips to satisfy every hyperscaler simultaneously, and the global AI chip supply crunch has made single-vendor dependency a genuine operational risk. Second, negotiating leverage. When your largest supplier knows you have two other suppliers capable of handling frontier workloads, pricing conversations change. OpenAI reportedly negotiated 30% lower prices from Nvidia simply because TPUs existed as a credible alternative. Third, technical flexibility. TPUs are purpose-built for machine learning workloads and offer competitive performance-per-watt - Google's Ironwood chip delivers roughly 2.8x better energy efficiency than Nvidia's H100, and matches or slightly beats the newer Blackwell B200 in raw compute at 4.6 petaFLOPS per chip.
Google benefits even more dramatically. Until now, TPUs were effectively a proprietary advantage - available to Google's internal teams and select Google Cloud customers, but not widely commercialized. Selling TPU access to the company that trains Llama, the world's most widely deployed open-weight model family, is a massive validation of the chip as a general-purpose AI training platform. Google is also forming a joint venture with an unnamed institutional partner to lease TPUs to other customers. Some Google Cloud executives estimate that expanded TPU sales could capture as much as 10% of Nvidia's annual revenue - roughly $20 billion at current run rates.
AMD benefits indirectly. Meta's willingness to diversify across three suppliers reinforces the market logic behind AMD's own $100 billion Meta deal. If Meta is spreading its bets, AMD's seat at the table is more secure, not less.
Who Pays
Nvidia does not lose revenue from this deal directly - Meta is still buying millions of Nvidia chips. But Nvidia loses something harder to quantify: pricing power. The record $68 billion quarter Nvidia just reported was built partly on the fact that hyperscalers had limited alternatives for frontier training. That is no longer true. Meta's three-vendor strategy, combined with its in-house MTIA silicon program, creates a competitive dynamic where Nvidia must compete on price and performance rather than scarcity.
Google's internal teams face a subtler cost. Every Ironwood TPU allocated to Meta through Google Cloud is a TPU that's not available for Google's own model training. Google has historically guarded its TPU supply closely, focusing on internal workloads for Gemini and other frontier models. Commercializing TPUs at scale means Google must either expand production capacity significantly or accept trade-offs in its own compute allocation. The joint venture structure suggests Google is solving this by raising external capital to fund additional TPU production - but scaling semiconductor supply is measured in years, not months.
Shareholders of all three companies are the ultimate backstop. Big Tech's collective $650 billion AI spending spree - of which Meta's commitments are a growing share - is premised on the assumption that AI compute will produce returns proportional to the investment. Bridgewater Associates warned just last week that the AI capex cycle has entered a "dangerous phase" where spending is outpacing proven revenue models. Meta's three-deal February does nothing to address that concern.
What Happens Next
The deal's immediate practical impact depends on which workloads Meta moves to TPUs. Training next-generation Llama models on Google hardware would be a genuine technical milestone - and a signal that TPUs have crossed the threshold from cloud inference tool to frontier training platform.
The longer-term question is whether Meta follows through on purchasing TPUs outright. Renting through Google Cloud gives Meta flexibility but keeps Google in control of the hardware. Buying chips and installing them in Meta's own data centers - its roughly 30 facilities, including the 5-gigawatt Hyperion campus in Louisiana - would represent a deeper structural commitment and a more direct threat to Nvidia's installed base.
Meta's compute strategy now runs on four tracks: Nvidia for the incumbent GPU stack, AMD for next-generation inference at scale, Google TPUs for training flexibility, and MTIA custom silicon for recommendation workloads. No other company in the industry is pursuing all four simultaneously at this scale.
Whether that's visionary diversification or an expensive hedge against uncertainty depends entirely on whether "personal superintelligence for 3.3 billion users" turns out to be a product people will pay for - or just the most expensive insurance policy in corporate history.
Sources
- Google Strikes Multibillion-Dollar AI Chip Deal With Meta - The Information
- Google and Meta reportedly strike new, multibillion-dollar AI chip deal - SiliconANGLE
- Meta Signs Multibillion-dollar Deal To Rent Google TPUs - Dataconomy
- Meta signs multi-billion dollar deal to rent Google's TPUs - The Decoder
- Meta expands Nvidia deal to use millions of AI chips - CNBC
- Meta estimates 2026 capex to be between $115-135bn - Data Center Dynamics
