
Meta Strikes $100 Billion AI Chip Deal With AMD, Takes Path to 10% Stake

Meta will deploy up to 6 gigawatts of AMD Instinct GPUs across multiple generations in a deal worth up to $100 billion. AMD has issued Meta a warrant for 160 million shares - roughly 10% of the company - at a penny per share, tied to delivery milestones and AMD's stock hitting $600 per share.


TL;DR

Total deal value: Up to $100 billion
Scope: 6 gigawatts of AMD Instinct GPUs
Duration: Multi-year (five-year stretch)
AMD stake: Warrant for 160 million shares (~10%) at $0.01/share
Full vesting trigger: AMD stock reaching $600/share + delivery milestones
Technology: Custom MI450 (CDNA 5, TSMC 2nm, 432GB HBM4)
First shipments: H2 2026
AMD stock reaction: +14% pre-market

The Deal

Meta and AMD announced an expanded strategic partnership on Tuesday that will see Meta deploy up to 6 gigawatts of AMD Instinct GPUs across multiple hardware generations. AMD CEO Lisa Su described the revenue per gigawatt as "double-digit billions," putting the total deal value in the range of $60 billion to over $100 billion depending on how infrastructure costs are factored in.

The headline number is large. The structural detail is larger: AMD has issued Meta a performance-based warrant for up to 160 million shares of AMD common stock - roughly 10% of outstanding shares - at an exercise price of $0.01 per share. The warrant vests in tranches tied to three conditions: specific Instinct GPU shipment milestones, AMD's stock reaching progressively higher targets up to $600 per share, and Meta achieving key technical and commercial milestones with the hardware.

The first tranche vests with the initial 1-gigawatt shipment. First deliveries are expected in the second half of 2026.

At AMD's pre-announcement price of ~$197, those 160 million shares are worth roughly $31 billion. At the full $600 vesting target, they would be worth $96 billion - effectively giving Meta a massive financial stake in AMD's success while giving AMD the largest guaranteed customer commitment in the company's history.
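The warrant arithmetic above checks out in a few lines (a quick sketch using only the figures cited in this article - the ~$197 pre-announcement price, the $600 vesting target, and the $0.01 exercise price):

```python
# Sanity-check the warrant valuation figures cited above.
SHARES = 160_000_000      # warrant shares (~10% of AMD)
EXERCISE_PRICE = 0.01     # $0.01 per share

def warrant_value(stock_price: float) -> float:
    """Intrinsic value of the warrant at a given AMD share price."""
    return SHARES * (stock_price - EXERCISE_PRICE)

print(f"At $197: ${warrant_value(197) / 1e9:.1f}B")   # ~$31.5B
print(f"At $600: ${warrant_value(600) / 1e9:.1f}B")   # ~$96.0B
```

The penny exercise price makes the warrant behave almost exactly like an outright grant of stock, which is why its value tracks AMD's share price nearly one-for-one.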

"We're excited to form a long-term partnership with AMD to deploy efficient inference compute and deliver personal superintelligence," Mark Zuckerberg said in the announcement. "This is an important step for Meta as we diversify our compute."

Lisa Su was more direct about what the deal means competitively: "Meta has a lot of choices. I want to make sure that we always have a clear seat at the table."

The Technology: MI450 on 2nm

The partnership is built around a custom AMD Instinct GPU based on the MI450 architecture - CDNA 5, manufactured on TSMC's 2nm process node. The specs are significant:

Architecture: CDNA 5
Process: TSMC 2nm (N2)
Memory: 432GB HBM4
Memory bandwidth: 19.6 TB/s
Peak performance: Up to 40 PFLOPS (FP4)
Rack performance (MI455X Helios): 2.9 exaFLOPS FP4, 31TB HBM4, 72 GPUs
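The rack-level Helios figures are consistent with the per-GPU specs - a quick cross-check (a sketch assuming straightforward multiplication across 72 GPUs, with no derating, and decimal TB):

```python
# Cross-check Helios rack figures against per-GPU MI450 specs.
GPUS_PER_RACK = 72
HBM4_PER_GPU_GB = 432        # HBM4 capacity per GPU
FP4_PER_GPU_PFLOPS = 40      # peak FP4 throughput per GPU

rack_hbm_tb = GPUS_PER_RACK * HBM4_PER_GPU_GB / 1000          # GB -> TB
rack_fp4_eflops = GPUS_PER_RACK * FP4_PER_GPU_PFLOPS / 1000   # PF -> EF

print(f"Rack HBM4: {rack_hbm_tb:.1f} TB")      # ~31 TB, as cited
print(f"Rack FP4:  {rack_fp4_eflops:.2f} EF")  # ~2.9 EF, as cited
```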

Meta played an active role in defining the MI450's specifications, with the processor emphasizing inference capabilities - consistent with Zuckerberg's stated goal of "personal superintelligence" delivered to Meta's 3.3 billion users. The GPUs will be paired with 6th Gen AMD EPYC "Venice" CPUs and deployed on AMD's Helios rack-scale architecture running the ROCm software stack.

The TSMC 2nm process gives AMD a manufacturing generation advantage over NVIDIA's upcoming Vera Rubin platform, which is expected on TSMC's 3nm node. Whether that process advantage translates into a performance advantage depends on architecture efficiency - but AMD is, for the first time, ahead of NVIDIA on the foundry roadmap in the AI accelerator space.

One Week After NVIDIA

The timing is impossible to ignore. One week ago, Meta announced a multiyear deal with NVIDIA for "millions" of Blackwell and Rubin GPUs, Grace and Vera CPUs, and Spectrum-X networking - estimated at roughly $50 billion. Meta was the first hyperscaler to deploy NVIDIA's Grace CPUs standalone.

Now Meta has committed potentially double that amount to AMD.

This is not a replacement. It is a deliberate diversification strategy. Meta is building toward $115-135 billion in capital expenditure this year - nearly double the $72 billion spent in 2025 - with 30 data centers planned globally, including the 5-gigawatt Hyperion facility in Louisiana. At that scale, single-vendor dependency is not just a risk management concern. It is a supply constraint. NVIDIA cannot manufacture enough chips for everyone.

Meta's compute strategy now spans three vectors: NVIDIA (dominant incumbent), AMD (growing second source), and its own MTIA custom silicon (still developing, with the Financial Times reporting "technical challenges" on MTIA-3). The AMD deal suggests Meta is hedging its bets on in-house silicon by locking in a proven external alternative.

The OpenAI Template

The deal structure is nearly identical to the AMD-OpenAI partnership announced in October 2025: same 6-gigawatt scope, same 160 million share warrant, same MI450 architecture. Meta is AMD's second mega-deal customer in this format.

This is not a coincidence. AMD has created a repeatable deal template for hyperscale AI customers: massive volume commitments in exchange for equity upside. The warrant structure aligns AMD's financial incentives with the customer's deployment success - AMD only gets the full stock price benefit if the hardware actually ships and performs.

For AMD, the math is transformative. Two 6-gigawatt deals at "double-digit billions per gigawatt" represent a potential revenue pipeline that dwarfs AMD's current data center business. In Q4 2025, AMD's entire data center segment generated approximately $1.4 billion in quarterly revenue. NVIDIA's data center revenue in the same period was $51.2 billion - roughly 6x larger than Intel and AMD combined.
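The scale of that pipeline is easy to quantify (a rough sketch; "double-digit billions" is read here as $10-17B per gigawatt, an assumed range chosen to match the $60 billion to $100+ billion per-deal figure quoted earlier):

```python
# Rough pipeline math: two 6-GW mega-deals vs. AMD's current run rate.
deals = 2                    # Meta and OpenAI
gw_per_deal = 6
rev_per_gw_low = 10e9        # assumed low end of "double-digit billions"
rev_per_gw_high = 17e9       # assumed high end

pipeline_low = deals * gw_per_deal * rev_per_gw_low     # $120B
pipeline_high = deals * gw_per_deal * rev_per_gw_high   # $204B

amd_dc_quarterly = 1.4e9     # AMD data center revenue, Q4 2025 (per article)
print(f"Pipeline: ${pipeline_low/1e9:.0f}B-${pipeline_high/1e9:.0f}B")
print(f"Low end is {pipeline_low / (amd_dc_quarterly * 4):.0f}x "
      f"AMD's annualized data center revenue")
```

Even spread over the multi-year delivery window, the low end of that range implies a step change, not incremental growth, in AMD's data center business.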

The Meta and OpenAI deals are AMD's clearest path to closing that gap.

What It Means for the GPU Market

AMD currently holds approximately 9% of the AI accelerator market versus NVIDIA's 86%. This deal does not flip that ratio overnight - the MI450 ships in H2 2026, and NVIDIA's installed base and software ecosystem remain massive advantages. But it marks a structural shift in how hyperscalers think about AI compute procurement.

The pattern emerging across Big Tech's $650 billion AI spending spree is clear: no one wants to be dependent on a single chip supplier. Microsoft has its own custom silicon plus AMD and NVIDIA. Google has TPUs plus NVIDIA. Amazon has Trainium and Inferentia plus NVIDIA. And now Meta has MTIA, NVIDIA, and AMD - with the AMD commitment potentially exceeding the NVIDIA one in dollar terms.

NVIDIA's dominance is not under immediate threat. But its pricing power is. When your largest customers are signing $100 billion deals with your primary competitor while simultaneously investing in custom silicon, the negotiating leverage shifts - even if NVIDIA's products remain technically superior.

Bank of America maintains a Buy rating on AMD with a $300 price target, projecting the company will capture "double-digit share of the trillion-dollar AI market by 2030." After today, that target looks conservative.

The Capex Arms Race, Continued

Meta's combined NVIDIA and AMD commitments now potentially exceed $150 billion. Add the MTIA program and the data center construction costs for 30 facilities including a 5-gigawatt campus, and Meta's total AI infrastructure spend through the end of the decade could approach the $600 billion scale that OpenAI is targeting.

All of this to deliver what Zuckerberg calls "personal superintelligence" - an AI assistant for each of Meta's 3.3 billion users. Whether that requires 6 gigawatts of AMD compute and millions of NVIDIA GPUs and custom MTIA silicon is a question that will be answered by the revenue those users generate.

Bridgewater Associates warned last week that the AI infrastructure boom is entering a "dangerous phase" where spending is outpacing proven revenue models. Meta's response, apparently, is to spend faster.

AMD stock was up 14% in pre-market trading.

About the author

AI Industry & Policy Reporter

Daniel is a tech reporter who covers the business side of artificial intelligence - funding rounds, corporate strategy, regulatory battles, and the power dynamics between the labs racing to build frontier models.