Amazon Bets $25B on Anthropic and 5GW of Trainium

Amazon will invest up to $25 billion more in Anthropic, with Anthropic committing to spend over $100 billion on AWS over the next decade, cementing Trainium as Claude's primary compute platform.

Amazon announced today it'll invest up to $25 billion in Anthropic, the biggest single bet the company has placed on any AI lab. The deal pairs $5 billion in immediate cash with up to $20 billion more tied to commercial milestones - and in exchange, Anthropic has committed to spend more than $100 billion on Amazon Web Services over the next ten years.

That $100 billion is the headline that matters more than the investment itself.

TL;DR

  • Amazon invests up to $25B in Anthropic - $5B now, $20B milestone-gated
  • Anthropic commits $100B+ spend on AWS over 10 years, up to 5 gigawatts of Trainium capacity
  • Trainium2, 3, 4 and future chip generations locked in as Claude's compute backbone
  • AWS customers get direct Claude Platform access through existing accounts, no separate credentials
  • Previous Amazon investment totaled $8B; new deal could bring Amazon's total stake to $33B

What $25 Billion Actually Buys

The Investment Structure

Amazon's previous $8 billion in Anthropic came in three tranches: an initial $1.25 billion, another $2.75 billion, and an additional $4 billion last year. The new $25 billion is structured differently. The immediate $5 billion is unconditional. The remaining $20 billion unlocks as Anthropic hits specific commercial targets, which Amazon hasn't disclosed publicly.

Andy Jassy, Amazon's CEO, put it plainly: "Anthropic's commitment to run its large language models on AWS Trainium for the next decade reflects the progress we've made together on custom silicon."

That's a notable framing. Jassy isn't talking about Claude's intelligence. He's talking about chips.

The AWS Commitment

The $100 billion spend pledge is almost certainly the real anchor of this deal. At that scale, Anthropic isn't a customer - it's a co-investor in AWS infrastructure. The agreement secures up to 5 gigawatts of Trainium capacity, spanning Trainium2 (already launched at scale), Trainium3 (shipping since early 2026), Trainium4 (in development), and whatever comes after that.

To put 5 gigawatts in context: a modern hyperscale data center runs between 100 and 200 megawatts. This commitment covers capacity equivalent to 25 to 50 of them dedicated to Claude training and inference alone.
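The data-center equivalence above is simple arithmetic on the figures quoted in this piece, and can be checked directly:

```python
# Back-of-envelope check of the 5 GW commitment against typical
# hyperscale data center power draw (figures from the article).
commitment_gw = 5.0
dc_low_mw, dc_high_mw = 100, 200  # typical hyperscale facility range

equiv_high = commitment_gw * 1000 / dc_low_mw   # smaller facilities -> more of them
equiv_low = commitment_gw * 1000 / dc_high_mw

print(f"{equiv_low:.0f} to {equiv_high:.0f} data-center equivalents")
# prints "25 to 50 data-center equivalents"
```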

Dario Amodei, Anthropic's CEO, described the arrangement as enabling the company "to continue advancing AI research while delivering Claude to our customers, including the more than 100,000 building on AWS."

The Trainium Stack

AI accelerator hardware central to Amazon's Trainium chip line, designed by Annapurna Labs. Source: pexels.com

What Trainium3 Brings

Trainium3, built on TSMC's 3nm process, delivers roughly 4.4 times the compute performance and 4 times the energy efficiency of Trainium2. Amazon claims Trn3 UltraServers - 144-chip rack-scale systems - cost up to 50% less to run than "classic cloud servers" at comparable performance.

The raw numbers against NVIDIA's GB200 are less flattering. SemiAnalysis published a detailed comparison showing Trainium2 trails the GB200 by roughly 3.75x on FP16 FLOPs (667 vs. 2,500 TFLOP/s) and 2.75x on memory bandwidth. Trainium3 closes some of that gap, but NVIDIA still leads on pure throughput.
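The FP16 gap can be recomputed directly from the quoted per-chip figures:

```python
# Recomputing the FP16 throughput gap from the figures quoted
# in the SemiAnalysis comparison.
gb200_fp16_tflops = 2500
trn2_fp16_tflops = 667

gap = gb200_fp16_tflops / trn2_fp16_tflops
print(f"GB200 leads Trainium2 by {gap:.2f}x on FP16 FLOPs")
# prints "GB200 leads Trainium2 by 3.75x on FP16 FLOPs"
```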

Where Trainium wins is cost-per-token. Anthropic's reinforcement learning workloads are memory-bandwidth-bound more than compute-bound, which makes Trainium's economics competitive even when raw FLOPs aren't. That's not a coincidence - Anthropic's engineers have been writing low-level kernels directly against the Trainium silicon since the Project Rainier partnership began.
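A roofline-style sketch makes the bandwidth-bound argument concrete. The chip numbers below are illustrative assumptions chosen to roughly match the ratios quoted above (a 3.75x FLOPs lead but only a ~2.75x bandwidth lead), not published specs:

```python
# Roofline-style sketch: why a memory-bandwidth-bound workload blunts
# a raw-FLOPs advantage. All chip numbers are illustrative assumptions.

def attainable_tflops(peak_tflops, bandwidth_tbs, intensity_flops_per_byte):
    """Attainable throughput = min(peak compute, bandwidth * arithmetic intensity)."""
    return min(peak_tflops, bandwidth_tbs * intensity_flops_per_byte)

# Assumed chips: A leads B by 3.75x on FLOPs but only ~2.75x on bandwidth.
chip_a = {"peak": 2500, "bw": 8.0}  # TFLOP/s, TB/s
chip_b = {"peak": 667, "bw": 2.9}

for intensity in (50, 1000):  # low intensity ~ RL/decode; high ~ dense training
    a = attainable_tflops(chip_a["peak"], chip_a["bw"], intensity)
    b = attainable_tflops(chip_b["peak"], chip_b["bw"], intensity)
    print(f"intensity {intensity}: chip A {a:.0f} vs chip B {b:.0f} TFLOP/s "
          f"({a / b:.2f}x gap)")
```

In the low-intensity (bandwidth-bound) regime both chips are throttled by memory, so the effective gap collapses toward the bandwidth ratio rather than the FLOPs ratio, which is where per-token economics can swing toward the cheaper silicon.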

The Infrastructure Scale

Project Rainier, the existing Anthropic cluster, launched in late 2025 with 500,000 Trainium2 chips and over 1 million total chips now running Claude on Amazon Bedrock. AWS is constructing more than 1.3 gigawatts of IT capacity across three campuses for Anthropic's training needs, with additional multi-gigawatt facilities breaking ground.

Trainium4 is expected in late 2026 or early 2027 and will support NVIDIA's NVLink Fusion interconnect, meaning Trainium4 systems can interoperate with NVIDIA GPUs in the same cluster. That's a significant engineering shift - it gives Anthropic flexibility to mix hardware depending on workload type.

Customer Integration

Amazon Web Services data center infrastructure hosting Claude via Amazon Bedrock. Source: pexels.com

The deal changes how enterprise customers access Claude. Under the new structure, AWS customers can provision Claude Platform directly through their existing AWS accounts - no separate Anthropic credentials, no separate billing relationship.

| Access path | Before | After |
| --- | --- | --- |
| Claude on Bedrock | Available (model API only) | Available |
| Claude Platform | Separate Anthropic account | Integrated with AWS account |
| Access controls | AWS IAM + Anthropic-side | Unified AWS IAM |
| GovCloud support | Limited | Full GovCloud, Secret/TS regions |
| Billing | Split invoices | Single AWS invoice |
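For a sense of what "unified AWS IAM" means in practice, here is an illustrative policy expressed as a Python dict. The `bedrock:InvokeModel` actions and the foundation-model ARN pattern are real IAM constructs for Claude on Bedrock today; Amazon hasn't published the integrated Claude Platform API surface, so treat the scope shown as a sketch rather than the actual integration:

```python
# Illustrative IAM policy for Claude access via Bedrock, as a Python dict.
# The actions and ARN pattern below exist today for Bedrock; the integrated
# Claude Platform permissions are not yet published.
import json

claude_access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowClaudeOnBedrock",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            # Foundation-model ARNs omit the account ID field.
            "Resource": "arn:aws:bedrock:*::foundation-model/anthropic.*",
        }
    ],
}

print(json.dumps(claude_access_policy, indent=2))
```

The point of the "After" column is that a policy like this, attached to an existing AWS role, would be the whole access story: no second credential system on the Anthropic side.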

For enterprises already running workloads on AWS, this removes the last reason to manage a separate Anthropic vendor relationship. That's useful. For Anthropic, it routes more revenue through AWS, which helps Anthropic hit the commercial milestones that unlock the remaining $20 billion.

The Anthropic revenue picture has been strong: the company's run-rate surpassed $30 billion earlier this year, up from $9 billion at end-2025. The AWS deal aims to keep that arc steep.

Where It Falls Short

Three things this deal doesn't resolve.

First, Trainium still isn't NVIDIA. The raw compute gap is real, and it matters most for the largest training runs. Anthropic may be locked into Trainium for most workloads while still needing NVIDIA for frontier model training at the edge of what's possible. The deal commits to Trainium capacity; it doesn't prohibit using NVIDIA elsewhere.

Second, the milestone-gated $20 billion creates dependency in both directions. If Anthropic's commercial growth slows, Amazon may not deploy the full commitment. Anthropic is now structurally incentivized to grow its AWS-routed revenue, which shapes product decisions in ways that may not align with what's best for researchers or for customers using Claude through other cloud providers.

Third, the competitive picture is complicated. Amazon already committed $50 billion to OpenAI in March, making AWS the exclusive third-party distributor for OpenAI Frontier. Now it's Anthropic's largest infrastructure partner. Betting $75 billion across two rival AI labs isn't exactly a principled technology strategy - it's an infrastructure land-grab that ensures AWS wins regardless of which model family emerges as the leader.


For now, the deal locks in Trainium3 as the workhorse for Claude inference and training through 2027 at minimum, with Trainium4's NVLink Fusion support opening a hybrid path after that. The Anthropic valuation crossed $350 billion earlier this year. Today's announcement suggests Amazon sees that figure as a floor, not a ceiling.

About the author

Sophie, AI Infrastructure & Open Source Reporter, is a journalist and former systems engineer who covers AI infrastructure, open-source models, and the developer tooling ecosystem.