Meta Commits $21B More to CoreWeave, Total Hits $35B

Meta expands its CoreWeave partnership by $21 billion through December 2032, bringing total commitments to $35 billion and locking in early NVIDIA Vera Rubin deployments.

Meta has written another massive check for GPU capacity. On April 9, 2026, CoreWeave announced an expanded agreement with Meta worth approximately $21 billion, covering new AI cloud capacity through December 2032 and including some of the earliest commercial deployments of NVIDIA's Vera Rubin platform. The two companies had already signed a roughly $14.2 billion deal in September 2025. Combined, Meta has now committed around $35 billion to CoreWeave - by far the most significant commercial relationship in the GPU cloud provider's history.

TL;DR

  • Meta commits ~$21B to CoreWeave for AI cloud through December 2032
  • Previous deal was ~$14.2B; total relationship now ~$35B
  • Deal covers inference workloads, not training - primarily for serving Llama models at scale
  • Includes early NVIDIA Vera Rubin platform deployments (H2 2026)
  • CoreWeave issued $4.25B in new debt alongside the announcement

The September 2025 deal established CoreWeave as a credible alternative to the hyperscale cloud providers. This expansion turns that relationship into something structural. Through 2032, Meta will rely on CoreWeave's infrastructure for a meaningful share of its AI serving capacity - a commitment that outlasts most of the industry's current GPU roadmap.

                       September 2025 Deal    April 2026 Expansion
  Committed value      ~$14.2B                ~$21B
  Coverage period      Through ~2031          Through December 2032
  Hardware generation  NVIDIA Blackwell       NVIDIA Vera Rubin
  Primary workload     Mixed                  Inference

[Image: CoreWeave data center with liquid-cooled server infrastructure] CoreWeave's data centers are purpose-built for high-density AI compute, with liquid cooling designed for GPU racks consuming up to 130kW each. Source: coreweave.com

Who Benefits

CoreWeave's Revised Revenue Picture

The deal reshapes CoreWeave's business case in a direct way. Before this announcement, Meta was already CoreWeave's largest customer. Now, with $35 billion committed across two agreements, the numbers are substantial enough that CoreWeave CEO Michael Intrator said no single customer would account for more than 35% of total sales - a threshold that implies CoreWeave's total contracted revenue has grown notably since the first Meta deal was signed.

CoreWeave ended 2025 with 850MW of active power across 43 data centers and is targeting over 1.7GW by year-end 2026. The Meta commitment gives that expansion plan a guaranteed revenue base. The company's $30-35 billion capex plan for 2026 is largely tied to already-signed customer contracts, and the Meta deal is the biggest anchor in that structure.

"Leading companies are choosing CoreWeave's AI cloud to run their most demanding workloads," Intrator said in the announcement.

NVIDIA Gets Early Validation

The Vera Rubin platform is set for commercial availability in the second half of 2026. Meta's deployment through CoreWeave - one of NVIDIA's designated cloud partners for the platform - gives the new architecture a large-scale inference workload to prove itself against before it reaches general availability. Vera Rubin NVL72 racks pack 72 Rubin GPUs and 36 Vera CPUs into a single liquid-cooled system with 260 terabytes per second of interconnect bandwidth, and NVIDIA claims roughly 10x lower cost per token than the prior Blackwell generation for mixture-of-experts inference.

That benchmark matters because Meta's Llama models are predominantly MoE-based, and the deal is explicitly structured around serving those models to users, not training new ones.

Who Pays

Meta's Infrastructure Math

Meta guided for $115 billion to $135 billion in capital expenditures for 2026 - its largest annual infrastructure commitment and roughly 50% above what analysts had projected heading into the year. The CoreWeave relationship accounts for a portion of that total, though Meta is simultaneously building or expanding its own data centers in Indiana, Texas, and Louisiana, and has signed infrastructure agreements with Nebius ($27 billion), AMD, and others.

The company's own description of the situation is plain: it's "capacity constrained." Demand for compute resources across Meta's advertising systems, AI features, and Llama research has consistently outpaced supply. Locking in CoreWeave's capacity through 2032 is less a strategic preference for outsourced GPU rental and more a practical response to not being able to build owned facilities fast enough.

As we covered in our reporting on Meta's $27 billion Nebius deal, the company has been building a diversified external GPU base with its own infrastructure - a hedge against supply bottlenecks and a way to flex capacity without waiting for construction timelines.

[Image: NVIDIA Vera Rubin NVL72 rack system displayed at GTC 2026, showing liquid-cooled GPU architecture] The NVIDIA Vera Rubin NVL72 integrates 72 Rubin GPUs and 36 Vera CPUs in a single liquid-cooled rack. CoreWeave will deploy these systems for Meta's inference workloads in the second half of 2026. Source: servethehome.com

The Debt Question

One detail that didn't make the headline: CoreWeave issued $4.25 billion in new convertible notes and high-yield debt on the same day the Meta deal was announced. That's not unusual for a capital-intensive infrastructure company expanding quickly, but it connects to a pattern we flagged in our coverage of CoreWeave's Q4 2025 earnings - the company's ability to convert contracted revenue into real cash flow remains the central question for investors. A $35 billion commitment from Meta is a large number on paper. Whether CoreWeave can build, staff, and operate the infrastructure that fulfills it, while managing an 894% debt-to-equity ratio, is a separate question.

The Inference Calculus

The deal is specifically framed around inference, not training, and that framing is worth taking seriously. Meta's Llama model family is open-weight - anyone can download the weights. The capital-intensive training phase is largely complete before any cloud contract is signed. What costs money at scale is serving the models to billions of users in real time.

CoreWeave's architecture is designed for exactly that workload: GPU-dense clusters optimized for latency-sensitive inference rather than the long-horizon, high-throughput jobs that dominate training. For Meta, the math is that outsourcing a portion of that inference capacity to CoreWeave - locked in at today's contracted rates through 2032 - is cheaper than building enough owned capacity to absorb peak demand. For CoreWeave, Llama inference volume is one of the most predictable workloads in the industry. The Vera Rubin platform's arrival in H2 2026 will test whether the hardware lives up to the cost-per-token claims that made this contract worth signing at $21 billion.

The deal is a vote of confidence in CoreWeave's operational track record and a bet that NVIDIA's next GPU generation will deliver on its cost-reduction claims at production scale - both of which remain to be confirmed by real workloads later this year.

About the author
Daniel, AI Industry & Policy Reporter

Daniel is a tech reporter who covers the business side of artificial intelligence - funding rounds, corporate strategy, regulatory battles, and the power dynamics between the labs racing to build frontier models.