Thinking Machines Lands Gigawatt Compute from Nvidia
Nvidia commits a gigawatt of Vera Rubin chips to Mira Murati's startup, a supply the FT values at tens of billions of dollars, alongside an undisclosed cash investment.

Nvidia has made a "significant" cash investment in Thinking Machines Lab and committed at least 1 gigawatt of its next-generation Vera Rubin systems to the company - a compute supply the Financial Times estimates is worth tens of billions of dollars. Jensen Huang has publicly said that 1 gigawatt of AI data center capacity costs up to $50 billion.
The deal, announced March 10, positions Mira Murati's 14-month-old startup alongside the hyperscalers for raw compute access. Deployment begins in early 2027.
TL;DR
- Nvidia makes undisclosed "significant" cash investment in Thinking Machines Lab
- At least 1 gigawatt of Vera Rubin chips committed, deployment early 2027 - valued by FT at "tens of billions"
- Thinking Machines has now raised over $2 billion total, valued at $12 billion after its July 2025 seed round
- Deal includes technical collaboration to optimize Thinking Machines' products for Nvidia architectures
- Murati previously turned down a Meta acquisition offer
The Numbers
Thinking Machines Lab raised its $2 billion seed round in July 2025 at a $12 billion valuation - one of the largest first rounds in Silicon Valley history. At the time of this announcement, the company employed roughly 120 people. Reports from late 2025 suggested Murati held talks to raise a new round at a $50 billion valuation, though no such round has closed publicly.
| Metric | Value |
|---|---|
| Total funding raised | $2B+ |
| Seed round valuation (July 2025) | $12B |
| Compute committed by Nvidia | 1 gigawatt (Vera Rubin) |
| Estimated chip supply value (FT) | Tens of billions of dollars |
| Deployment timeline | Early 2027 |
| Current headcount | ~120 employees |
| Cash investment amount | Not disclosed |
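The FT's "tens of billions" estimate and Huang's public cost figure can be cross-checked with simple arithmetic. A minimal sketch, using only the two numbers reported in the article (the $50 billion-per-gigawatt ceiling and the 1-gigawatt minimum commitment):

```python
# Back-of-envelope check: does the FT's "tens of billions" estimate square
# with Huang's public figure of up to ~$50B per gigawatt of AI data center
# capacity? Both inputs are from the article; nothing else is assumed.

COST_PER_GW_CEILING = 50e9  # Huang: up to $50B per gigawatt
COMMITTED_GW = 1.0          # Nvidia's minimum commitment

implied_ceiling = COMMITTED_GW * COST_PER_GW_CEILING
print(f"Implied ceiling: ${implied_ceiling / 1e9:.0f}B")  # → Implied ceiling: $50B
```

At the committed minimum, the implied ceiling lands squarely in the "tens of billions" range the FT cites; an "at least" commitment above 1 gigawatt would push it higher.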
The investor list now includes Andreessen Horowitz, Accel, ServiceNow, Cisco, Jane Street, and Nvidia - with the unusual addition of AMD Ventures, the VC arm of Nvidia's direct chip competitor.
Who Benefits
Thinking Machines Lab
A gigawatt of compute is a threshold that only hyperscalers - Google, Microsoft, Amazon, Meta - have approached. Securing it at 14 months old lets Thinking Machines position itself as a genuine frontier model builder rather than another fine-tuning wrapper.
The deal also locks in preferential access to Vera Rubin, Nvidia's next-generation architecture, before the broader market can buy in. For a company whose stated mission is building AI that's "more widely understood, customizable and generally capable," the ability to train from scratch changes the product roadmap completely.
Jensen Huang (Nvidia) and Mira Murati (Thinking Machines Lab) at the announcement of their partnership. The deal gives Murati's 14-month-old startup access to compute at hyperscaler scale.
Source: blogs.nvidia.com
Nvidia
For Nvidia, the investment is a hedge against customer concentration. Its current revenue is overwhelmingly driven by a handful of hyperscalers. Backing emerging frontier labs - as it did with the $100 billion OpenAI deal that later collapsed - is how Nvidia ensures that wherever frontier training concentrates, its chips are the substrate.
The technical collaboration embedded in this deal goes further than a straight hardware sale. Thinking Machines engineers will optimize training and serving systems directly for Nvidia architectures, producing optimizations Nvidia can fold back into its own software stack. It's a pattern Nvidia has refined across dozens of partnerships: customers who build on your hardware make your hardware better.
This deal also follows Meta's separate multibillion-dollar chip commitment to Nvidia and its own 6-gigawatt deal with AMD, signaling that compute procurement at this scale is now a standard move for anyone serious about frontier AI.
Who Pays
The rest of the queue
Vera Rubin supply is finite. A gigawatt commitment to Thinking Machines is capacity that can't simultaneously go to someone else. Enterprise buyers, research institutions, and smaller startups who expected to access Vera Rubin systems in 2027 will face tighter allocation. Nvidia manages this through its cloud service provider partners and direct enterprise agreements, but priority deals like this one absorb supply before it reaches the general market.
Nvidia's own investors absorb some risk here too. A strategic investment and compute commitment at these terms assumes Thinking Machines reaches the scale needed to deploy 1 gigawatt of compute profitably. If the company can't attract enough enterprise customers to fill that capacity, Nvidia carries exposure on both the cash and the chip sides.
Murati founded Thinking Machines Lab in February 2025 after leaving OpenAI, where she had served as Chief Technology Officer.
Source: techcrunch.com
The competitive field
The deal raises the bar for every other frontier lab. Yann LeCun's AMI Labs raised $1 billion in March 2026 to pursue a different architectural path. With Thinking Machines now holding gigawatt-scale compute, the gap between well-funded frontier labs and everyone else is wider than it was last week.
OpenAI remains the benchmark. Its relationship with Microsoft provides compute access measured in gigawatts across Azure. Thinking Machines now operates at a comparable scale - on its own terms, without a cloud giant holding the contract.
What the Market Is Missing
The structure of this deal rewards a closer look. Nvidia isn't simply selling chips. It is pairing a cash investment with the compute commitment, which means it participates directly in Thinking Machines' upside. That's a different posture from a straight supply agreement.
Murati turned down a Meta acquisition offer before founding the company. She now has Nvidia's cash and chips without ceding control to a larger acquirer. The AMD Ventures investment sitting alongside Nvidia's is another detail worth noting - it suggests Thinking Machines is playing chip suppliers against each other for leverage, a strategy that gets harder to run at scale but evidently worked at seed stage.
The company's current product, Tinker, is a cloud service for fine-tuning open-source models using LoRA. A gigawatt of compute doesn't serve a fine-tuning product. It serves training from scratch. The compute commitment reveals the actual roadmap more clearly than any press release.
"NVIDIA's technology is the foundation on which the entire field is built. This partnership accelerates our capacity to build AI that people can shape and make their own," said Murati in a statement.
Jensen Huang framed the deal in grander terms: "AI is the most powerful knowledge discovery instrument in human history."
Nvidia has effectively bet that Thinking Machines Lab will be one of the three or four AI companies that matter in 2027 - and locked in the infrastructure relationship before anyone else could.
Sources: NVIDIA Blog · TechCrunch · SiliconAngle
