AI Chip Startups Raise $1.1 Billion in a Single Week - The Nvidia Challengers Are Here
Three AI chip startups - MatX, SambaNova, and Axelera - raised a combined $1.1 billion in one week, signaling an acceleration in the race to break Nvidia's GPU dominance.

Something shifted in the AI chip market this week. Not one, not two, but three startups closed funding rounds totaling $1.1 billion - all within days of each other. MatX pulled in $500 million, SambaNova secured $350 million with Intel at its side, and Dutch chipmaker Axelera AI raised $250 million with BlackRock writing checks.
The timing is not coincidental. As Big Tech companies barrel toward $650 billion in AI infrastructure spending this year, a new class of silicon challengers is betting that Nvidia's dominance has a shelf life.
TL;DR
- Three AI chip startups raised $1.1 billion combined in a single week: MatX ($500M), SambaNova ($350M), Axelera ($250M)
- Each targets a different weakness in Nvidia's GPU empire: LLM-specific silicon, dataflow inference, and edge deployment
- Leopold Aschenbrenner's Situational Awareness fund co-led MatX's round alongside Jane Street
- Intel is pivoting from failed acquisition talks to a partnership strategy with SambaNova
- The new silicon is mostly unproven: only Axelera's current Metis chip is shipping today, and the real test comes in 2027
The $500 Million Bet on LLM-Specific Silicon
MatX: Rethinking the Chip From Scratch
The largest round of the week belongs to MatX, a startup founded in 2022 by Reiner Pope and Mike Gunter - two former engineers who helped build Google's Tensor Processing Units. Their thesis is simple and audacious: GPUs are general-purpose tools being forced into a specialized job, and a chip designed exclusively for large language models can deliver 10x better performance.
The $500 million Series B was co-led by quantitative trading giant Jane Street and Situational Awareness, the investment fund launched by former OpenAI researcher Leopold Aschenbrenner. Marvell Technology, Spark Capital, and Stripe co-founders Patrick and John Collison also participated.
MatX's founders describe the bar for any challenger bluntly: "You need to match what is in the market on all of maybe five different important aspects" while being "far ahead on at least one."
MatX's answer is the MatX One, an accelerator built around a hybrid memory architecture that combines high-bandwidth memory (HBM) for key-value caches with static random access memory (SRAM) for model weights. The company claims this design can deliver over 2,000 output tokens per second for large 100-layer mixture-of-experts models - the architecture behind most frontier LLMs today.
| Spec | MatX One |
|---|---|
| Architecture | Hybrid HBM + SRAM |
| Target workloads | Pre-training, RL, inference (prefill + decode) |
| Claimed throughput | 2,000+ tokens/sec (100-layer MoE) |
| Manufacturing | TSMC |
| Design completion | 2026 |
| Production start | 2027 |
| Employees | ~100 |
Pope's background is telling. As Efficiency Lead for Google PaLM, he designed and implemented what was at the time the world's fastest LLM inference software and helped conceive the TPU v5e. Gunter was a lead hardware designer on Google's TPU line. They are not guessing about what LLM workloads need - they built the chips that trained some of the most capable models in existence.
Internal testing reportedly shows the MatX One outperforming Nvidia's upcoming Rubin Ultra on efficiency metrics, specifically performance per square millimeter of silicon. The company plans to sell directly to leading AI labs like OpenAI and Anthropic rather than building a broad sales organization.
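The logic behind the hybrid memory design can be sketched with a roofline-style calculation. At batch size 1, LLM decode is bandwidth-bound: every generated token must stream all active weights from memory once, so tokens per second is capped by memory bandwidth divided by bytes read per token. The sketch below is purely illustrative - the parameter count, weight precision, and bandwidth figures are assumptions, not MatX specs:

```python
def decode_tokens_per_sec(active_params_b: float, bytes_per_param: float,
                          mem_bandwidth_tb_s: float) -> float:
    """Roofline upper bound for batch-1 decode: each generated token
    streams every active parameter from memory once, so throughput is
    capped by memory bandwidth, not by FLOPS."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return mem_bandwidth_tb_s * 1e12 / bytes_per_token

# Hypothetical MoE with 50B active parameters at 8-bit weights
hbm_bound = decode_tokens_per_sec(50, 1, 8)     # assumed ~8 TB/s HBM stack
sram_bound = decode_tokens_per_sec(50, 1, 100)  # assumed on-chip SRAM aggregate
print(f"HBM-bound:  {hbm_bound:,.0f} tok/s")    # 160 tok/s
print(f"SRAM-bound: {sram_bound:,.0f} tok/s")   # 2,000 tok/s
```

With these made-up inputs the SRAM-resident figure happens to land near MatX's 2,000 tokens/sec claim, which illustrates why weights-in-SRAM is the lever the architecture pulls - but none of the numbers above come from MatX.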
Why Aschenbrenner's Money Matters
Leopold Aschenbrenner is not just any investor. His Situational Awareness essay became the defining document of AI acceleration in 2024, arguing that artificial superintelligence could arrive by 2027 and that compute would be the primary bottleneck. His fund now puts capital behind that conviction. Co-leading MatX's round is a signal that the AI safety and capabilities community sees custom silicon as critical infrastructure for what comes next.
SambaNova: The Dataflow Alternative Gets Intel's Blessing
$350 Million and a Strategic Pivot
SambaNova's $350 million Series E, led by Vista Equity Partners and Cambium Capital, comes with a partner that matters more than the dollars: Intel.
The backstory makes the partnership more interesting. Intel had previously discussed acquiring SambaNova for roughly $1.6 billion, including debt. Those talks stalled. Instead of walking away, Intel pivoted to a "multi-year collaboration" that integrates its Xeon processors with SambaNova's AI accelerators. Intel Capital also invested in the round.
"The real race is about who can light up entire data centers with AI agents that answer instantly, never stall, and do it at a cost that turns AI from an experiment into the most profitable engine in the cloud," said SambaNova CEO Rodrigo Liang.
The SN50 Chip
SambaNova's approach is fundamentally different from both Nvidia and MatX. Its chips use a Reconfigurable Dataflow Unit (RDU) architecture - closer in concept to Google's TPUs and AWS Trainium than to GPUs. The new SN50 accelerator claims 2.5x higher 16-bit floating-point performance and 5x higher performance at FP8 over its predecessor, translating to 1.6 and 3.2 petaFLOPS respectively.
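The claimed multipliers are internally consistent, and back-solving them hints at where the gain comes from. A quick check (the predecessor figures below are inferred from the claims, not published by SambaNova):

```python
# SN50 claimed throughput (petaFLOPS) and claimed generational gains
fp16_new, fp8_new = 1.6, 3.2
fp16_gain, fp8_gain = 2.5, 5.0

# Implied predecessor throughput at each precision
fp16_prev = fp16_new / fp16_gain  # 0.64 PFLOPS
fp8_prev = fp8_new / fp8_gain     # 0.64 PFLOPS

# Both precisions imply the same baseline: the prior part apparently got
# no throughput benefit from FP8, while the SN50 doubles FP8 over FP16.
assert abs(fp16_prev - fp8_prev) < 1e-9
assert abs(fp8_new / fp16_new - 2.0) < 1e-9
```

In other words, much of the headline 5x at FP8 looks like the addition of native low-precision throughput rather than a uniform 5x speedup across the board.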
| Spec | SN50 |
|---|---|
| Architecture | Reconfigurable Dataflow Unit (RDU) |
| FP16 performance | 1.6 petaFLOPS |
| FP8 performance | 3.2 petaFLOPS |
| Max linked accelerators | 256 |
| Interconnect | Multi-terabit/sec |
| Max model size | 10 trillion parameters |
| Max context | 10 million tokens |
| First customer | SoftBank (Japan DCs) |
The three-tier memory architecture, supporting models up to 10 trillion parameters and 10-million-token contexts, is designed for agentic AI systems - the kind of always-on, multi-step reasoning workloads fast becoming the standard deployment pattern. SoftBank has already signed up to deploy SN50 accelerators in its Japanese datacenters later this year.
Axelera: Europe's Edge AI Play
$250 Million for the Anti-Nvidia Approach
While MatX and SambaNova are chasing datacenter-scale training and inference, Axelera AI is going after a different market entirely: edge deployment. The Dutch startup raised $250 million led by Innovation Industries, with BlackRock and Samsung Catalyst Fund participating. Total funding now exceeds $450 million.
Axelera's current Metis chip delivers 214 trillion operations per second while consuming just 10 watts - power efficiency that GPU-based solutions struggle to match. The chip uses digital in-memory computing, co-locating storage and processing in SRAM to slash the energy-hungry data movement that makes GPUs inefficient for edge inference.
The upcoming Europa chip nearly triples performance to 629 trillion operations per second with 8 AI-optimized cores and 16 CPU cores. In testing, it processes over 13,000 frames per second on computer vision workloads - the kind of throughput industrial edge AI demands.
| Spec | Metis (Current) | Europa (2026) |
|---|---|---|
| Performance | 214 TOPS INT8 | 629 TOPS INT8 |
| Power | ~10W | ~45W |
| Architecture | Digital in-memory computing | Digital in-memory computing + RISC-V |
| Cores | - | 8 AI + 16 CPU |
| Target | Edge inference | Edge-to-datacenter |
| Ship date | Shipping | Before June 2026 |
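The efficiency trade-off between the two generations falls out of the table's figures directly (treating the claimed TOPS and the table's approximate wattages as given):

```python
# Claimed INT8 throughput (TOPS) and approximate power draw (watts)
chips = {"Metis": (214, 10), "Europa": (629, 45)}

for name, (tops, watts) in chips.items():
    print(f"{name}: {tops / watts:.1f} TOPS/W")
# Metis: 21.4 TOPS/W
# Europa: 14.0 TOPS/W
```

On these numbers Europa gives up roughly a third of Metis's efficiency per watt in exchange for nearly 3x the raw throughput - a sensible trade for the edge-to-datacenter positioning, though both figures remain vendor claims.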
As one of the few European companies making specialized AI chips, Axelera also carries a strategic dimension. The EU's push for semiconductor sovereignty through programs like DARE gives the company a policy tailwind that its American competitors lack. This is similar to the dynamic we saw when Taalas raised $169 million to challenge Nvidia from a European base.
What This Does Not Tell You
These Chips Do Not Exist Yet (Mostly)
The most important caveat: MatX's chip is still in design. SambaNova's SN50 has not shipped. Only Axelera's current-gen Metis is in customer hands, and Europa is months away. The AI chip graveyard is full of startups that raised hundreds of millions on impressive specs and then ran into the brutal physics of actually manufacturing and scaling custom silicon.
Graphcore raised over $700 million before SoftBank acquired it at a fraction of its peak valuation. Habana Labs, Cerebras, and others have struggled to convert technical differentiation into sustained market share against Nvidia's CUDA ecosystem - the software moat that arguably matters more than the hardware itself.
The CUDA Problem
None of these startups has solved the elephant in the room: Nvidia's CUDA software stack. Every major AI framework, every training pipeline, every inference library has been optimized for CUDA over more than a decade. MatX, SambaNova, and Axelera each require developers to learn new toolchains. History suggests this is where hardware challengers go to die.
The Timing Question
Nvidia is not standing still. Its Blackwell architecture is already shipping, and Rubin is on the horizon. As we noted when covering DeepSeek's use of banned Blackwell chips, even companies under export controls will move mountains to get Nvidia hardware. That is the competitive reality these startups must overcome.
$1.1 billion in a single week says something about the market's conviction that Nvidia's monopoly is unsustainable. But conviction and execution are different things. The AI chip market has never lacked ambitious challengers - what it has lacked is challengers who ship, scale, and survive. MatX, SambaNova, and Axelera each have a credible technical thesis. In 2027, we will find out if any of them have a credible business.
Sources:
- AI chip startups soak up $1.1B in VC funding this week - The Register
- Nvidia challenger AI chip startup MatX raised $500M - TechCrunch
- AI Chip Startup MatX Raises $500 Million to Compete With Nvidia - Bloomberg
- Ex-Google chip engineers raise $500M to take on Nvidia with LLM-specific silicon - TFN
- SambaNova steps up its challenge to Nvidia with new chip, $350M funding and a powerful ally in Intel - SiliconANGLE
- Intel partners with AI chip startup SambaNova after acquisition talks reportedly failed - CNBC
- SambaNova raises $350M with Intel backing - The Register
- Edge AI chip startup Axelera AI raises $250M+ funding round - SiliconANGLE
- BlackRock Backs Dutch Chipmaker Axelera AI in $250 Million Round - Bloomberg
