Meta Goes All-In on Nvidia: Millions of Chips, Tens of Billions of Dollars, and a 5-Gigawatt Data Center

Meta and Nvidia announce a multiyear deal spanning millions of GPUs and CPUs, with Meta becoming the first to deploy Nvidia's Grace CPUs standalone at scale.

Meta and Nvidia just made the single largest AI infrastructure commitment the industry has ever seen. The two companies announced a multiyear, multigenerational partnership on Monday that will see Meta deploy "millions" of Nvidia chips - spanning GPUs, standalone CPUs, and networking hardware - across its global data center fleet.

The deal, estimated by analysts at tens of billions of dollars, is part of Meta's broader $600 billion pledge to invest in U.S. data center infrastructure by 2028.

What the Deal Covers

The partnership spans nearly every product in Nvidia's data center lineup:

  • Millions of Blackwell and Rubin GPUs for training and inference workloads
  • Large-scale deployment of Nvidia Grace CPUs as standalone processors - a first in the industry
  • Spectrum-X Ethernet switches integrated into Meta's Facebook Open Switching System (FBOSS) platform
  • Nvidia Confidential Computing for WhatsApp's private processing features
  • Vera CPUs with potential large-scale deployment beginning in 2027

The Grace CPU bullet - the second on that list - is the technical headline here. Until now, Nvidia's ARM-based Grace processors have shipped only paired with GPUs in Grace Hopper or Grace Blackwell configurations. Meta is the first hyperscaler to deploy them independently in production data centers, a move Nvidia says delivers significant performance-per-watt improvements for non-GPU workloads.

The Numbers Behind Meta's AI Buildout

The scale of Meta's infrastructure ambitions is staggering. Mark Zuckerberg told investors the company plans to spend $135 billion on AI infrastructure in 2026 alone. The Nvidia deal represents a massive chunk of that figure.

Two data centers anchor the buildout:

  • Prometheus in New Albany, Ohio - a 1-gigawatt facility currently under construction
  • Hyperion in Richland Parish, Louisiana - a 5-gigawatt facility that would be one of the largest data centers ever built
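
To put the 5-gigawatt figure in perspective, here is a rough back-of-envelope sizing sketch in Python. Every constant below is an illustrative assumption, not a figure from the announcement; Hyperion's real efficiency and rack density will differ:

    # Back-of-envelope sizing for a 5 GW facility. All constants are
    # illustrative assumptions, not numbers from the Meta/Nvidia deal.
    FACILITY_POWER_W = 5e9    # Hyperion's reported 5-gigawatt capacity
    PUE = 1.3                 # assumed power usage effectiveness (cooling overhead)
    RACK_POWER_W = 120e3      # assumed ~120 kW per liquid-cooled GPU rack
    GPUS_PER_RACK = 72        # assumed NVL72-style rack density

    it_power_w = FACILITY_POWER_W / PUE   # power left for IT equipment
    racks = it_power_w / RACK_POWER_W
    gpus = racks * GPUS_PER_RACK

    print(f"IT power budget: {it_power_w / 1e9:.2f} GW")   # ~3.85 GW
    print(f"Racks supported: {racks:,.0f}")                # ~32,000
    print(f"GPUs supported:  {gpus:,.0f}")                 # ~2.3 million

Under those admittedly loose assumptions, a single 5-gigawatt site could host on the order of two million GPUs - which is exactly why the "millions of chips" framing is plausible.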

Meta also broke ground on a $10 billion, 4-million-square-foot facility in Lebanon, Indiana, expected to go online in late 2027 or early 2028. In total, the company has plans for 30 data centers, 26 of them in the United States.

This comes on the heels of an industry-wide chip supply crunch that has sent memory prices soaring and starved consumer electronics of components. Meta's aggressive purchasing only intensifies that pressure.

What About Meta's Own Chips?

Here is the awkward subtext of the deal: Meta has been developing its own AI accelerator, the MTIA (Meta Training and Inference Accelerator), for years. The MTIA-2 is reportedly entering broad deployment, and the MTIA-3 is slated for the second half of 2026, built on TSMC's 3nm process.

But the sheer scale of this Nvidia commitment raises questions about how central MTIA will actually be to Meta's infrastructure. The Financial Times has reported "technical challenges" with Meta's custom chips, and the company has historically scrapped at least one internal processor that failed to meet performance targets.

Ben Bajarin, an analyst at Creative Strategies, sees it differently. He told SiliconAngle that Meta's approach is "affirmation of the soup-to-nuts strategy" Nvidia pursues. Meta is not betting on a single vendor - it also buys chips from AMD and has explored Google's TPUs - but the Nvidia commitment dwarfs everything else.

"Personal Superintelligence"

Zuckerberg framed the partnership in characteristically ambitious terms: "We're excited to expand our partnership with Nvidia to build leading-edge clusters using their Vera Rubin platform to deliver personal superintelligence to everyone in the world."

Nvidia CEO Jensen Huang matched the tone: "No one deploys AI at Meta's scale - integrating frontier research with industrial-scale infrastructure to power the world's largest personalization and recommendation systems for billions of users."

The "personal superintelligence" language is notable. Meta has been pushing the concept of personalized AI agents that understand individual users deeply enough to act on their behalf. That requires enormous compute - both for training foundation models and for running inference at scale across billions of users on platforms like WhatsApp, Instagram, and Facebook.

The WhatsApp angle is particularly interesting. Nvidia's Confidential Computing technology will power "private processing" features that let AI analyze messages without exposing their content to Meta or Nvidia. The companies say they are already collaborating on expanding these privacy-preserving capabilities beyond WhatsApp to other Meta products.
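
In broad strokes, confidential computing rests on remote attestation: the hardware proves exactly what code is running in a trusted enclave before any sensitive data is released to it. The toy Python sketch below illustrates that handshake; the measurement function and the XOR "encryption" are simplified, hypothetical stand-ins for hardware-backed primitives, not Nvidia's actual interface:

    # Toy simulation of a remote-attestation handshake, the building block
    # behind confidential computing. enclave_measurement() and the key
    # handling are hypothetical stand-ins for hardware-backed primitives.
    import hashlib
    import hmac
    import secrets

    # The measurement (hash) of the enclave code the client agrees to trust.
    TRUSTED_MEASUREMENT = hashlib.sha256(b"approved-inference-code-v1").hexdigest()

    def enclave_measurement(loaded_code: bytes) -> str:
        # Real hardware hashes the code actually loaded into the enclave.
        return hashlib.sha256(loaded_code).hexdigest()

    def release_message(measurement: str, session_key: bytes, message: bytes) -> bytes:
        # The client hands over data only if the enclave proves it is
        # running approved code; otherwise the message never leaves the device.
        if not hmac.compare_digest(measurement, TRUSTED_MEASUREMENT):
            raise RuntimeError("attestation failed: enclave runs unapproved code")
        # XOR with a SHA-256 keystream: a toy stand-in for real
        # authenticated encryption such as AES-GCM.
        keystream = hashlib.sha256(session_key).digest()
        return bytes(m ^ k for m, k in zip(message, keystream))

    measurement = enclave_measurement(b"approved-inference-code-v1")
    ciphertext = release_message(measurement, secrets.token_bytes(32), b"hello, AI")
    print("released ciphertext:", ciphertext.hex())

In a real deployment, each toy piece is replaced by hardware guarantees such as signed attestation reports and encrypted memory, but the sequencing is the point: verify what is running first, then release data only into the enclave.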

What This Means for the Market

This deal cements several trends that have been building for months.

First, Nvidia's dominance in AI infrastructure is not weakening - it is deepening. Despite well-funded challenges from AMD, Intel, Google, and startups like Cerebras and Groq, the biggest buyers keep doubling down on Nvidia's full stack. When a company with Meta's engineering resources still needs millions of Nvidia chips, that tells you something about the competitive landscape of AI compute.

Second, the ARM CPU transition in data centers just got its biggest endorsement. Standalone Grace CPUs in Meta's fleet signal that ARM is no longer just a mobile architecture. If this deployment succeeds, expect other hyperscalers to follow.

Third, the gap between open-source and proprietary AI may narrow further. Meta remains the industry's most prominent backer of open-source models through its Llama family. More compute means bigger, better open-weight models that compete with proprietary offerings from OpenAI and Anthropic.

The Bottom Line

Meta is building the physical layer for a future where AI agents are embedded in every product the company offers. The Nvidia partnership gives it the GPUs to train frontier models, the CPUs to serve them efficiently, and the networking to tie it all together at a scale no other company except possibly Google can match.

Whether the $600 billion bet pays off depends on whether "personal superintelligence" turns out to be something people actually want - and whether Meta can ship it before the competition does.

About the author

Elena, Senior AI Editor & Investigative Journalist, has over eight years of experience covering artificial intelligence, machine learning, and the startup ecosystem.