Broadcom AI Revenue Doubles to $8.4B, Eyes $100B in 2027
Broadcom's Q1 2026 earnings show AI chip revenue up 106% year-over-year, with Anthropic, OpenAI, and Meta driving demand toward a $100 billion custom silicon forecast for 2027.

Broadcom's CEO Hock Tan said on Wednesday night that the company has "line of sight to achieve AI revenue from chips, just chips, in excess of $100 billion in 2027" - a number that would have seemed absurd two years ago and still stops you mid-sentence today.
TL;DR
- $8.4 billion in Q1 AI semiconductor revenue, up 106% year-over-year
- $10.7 billion in AI revenue forecast for Q2 2026, matching the highest analyst estimate
- $100 billion+ in AI chip revenue projected for all of 2027
- $21 billion committed by Anthropic for Broadcom-built TPU racks through 2027
- 10 gigawatts of custom accelerators being co-developed with OpenAI for deployment by end of 2029
The company reported fiscal Q1 2026 total revenue of $19.31 billion, up 29% year-over-year, beating estimates of $19.18 billion. Adjusted EPS came in at $2.05, above the $2.03 consensus. But the headline numbers were a sideshow. The real news was the customer-by-customer demand picture Tan painted on the earnings call - one that names names, cites gigawatts, and makes Nvidia's dominance in AI infrastructure look, for the first time, truly contested on the custom silicon front.
The Core Dataset
| Metric | Q1 FY2026 | Q1 FY2025 | Change |
|---|---|---|---|
| Total revenue | $19.31B | $14.92B | +29% |
| AI semiconductor revenue | $8.4B | ~$4.1B | +106% |
| Total semiconductor revenue | $12.5B | $8.2B | +53% |
| Adjusted EPS | $2.05 | $1.60 | +28% |
| Operating cash flow | $8.26B | - | - |
| Free cash flow | $8.01B | - | - |
| Q2 AI revenue guidance | $10.7B | - | - |
| Q2 total revenue guidance | ~$22.0B | - | - |
| 2027 AI chip forecast | $100B+ | - | - |
The board also approved a $10 billion share buyback program through December 2026.
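The year-over-year figures in the table can be sanity-checked directly. A minimal sketch, using only the dollar values reported above (the prior-year AI figure is approximate, so the computed growth lands within a point or two of the reported +106%):

```python
# Sanity-check the year-over-year growth figures from the Q1 FY2026 table.
# All dollar values are in billions, taken directly from the article;
# the Q1 FY2025 AI revenue (~$4.1B) is an approximation.

def yoy_growth(current, prior):
    """Percentage change from the prior-year quarter to the current quarter."""
    return (current - prior) / prior * 100

figures = [
    ("Total revenue",       19.31, 14.92),  # reported as +29%
    ("AI semiconductor",     8.40,  4.10),  # reported as +106% (prior ~approx)
    ("Total semiconductor", 12.50,  8.20),  # reported as +53%
    ("Adjusted EPS",         2.05,  1.60),  # reported as +28%
]

for name, current, prior in figures:
    print(f"{name}: {yoy_growth(current, prior):+.1f}%")
```

The small gaps between computed and reported values (e.g. +104.9% vs. the stated +106%) are consistent with the prior-year figures being rounded in the release.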
What the Numbers Say
Custom Silicon Is No Longer a Niche
For the past three years, custom AI accelerators - chips designed specifically for a single customer's workload rather than sold off the shelf - were described as a coming threat to Nvidia. The Q1 numbers suggest "coming" has arrived.
Broadcom's AI revenue is almost completely driven by custom application-specific integrated circuits (ASICs) and the networking silicon that connects them. The 106% growth rate isn't a one-quarter outlier. It's the compounding result of multi-year contracts with hyperscalers that have been placing increasingly large orders as they migrate workloads from general-purpose GPUs to purpose-built silicon.
Google's Ironwood (TPU v7), photographed at SC25. Broadcom builds and assembles the Ironwood racks Anthropic is buying at scale.
The Customer List Is Now Public
Tan has historically been coy about who, exactly, is driving Broadcom's AI business. Not this quarter. He named the customers and put gigawatt commitments next to each name:
- Anthropic: Broadcom is currently deploying one gigawatt of Google TPU Ironwood compute for the AI lab in 2026. That demand is expected to surge to "in excess of 3 gigawatts" in 2027. Total committed spend from Anthropic across two orders has reached $21 billion. This follows the December disclosure in which Broadcom confirmed Anthropic as its previously unnamed $10 billion customer.
- OpenAI: A formal strategic collaboration announced jointly with Broadcom will deliver 10 gigawatts of OpenAI-designed custom accelerators by end of 2029, with volume deployment of the first-generation chip beginning in 2027 at over one gigawatt. Sam Altman called the partnership part of "the broader ecosystem of partners all building the capacity required to push the frontier of AI."
- Meta: MTIA custom accelerators are "alive and well" and shipping now. Multiple gigawatts expected in 2027 and beyond - a parallel track to the company's much-discussed deal with AMD for $100 billion in chip procurement.
- Google: Continuing strong demand for next-generation TPUs.
That is four of the five or six largest AI compute buyers in the world committing to custom silicon at scale, routed through one supplier.
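A rough back-of-envelope check helps connect the gigawatt figures to the revenue forecast. The sketch below uses only numbers quoted on the call; the 2-GW placeholder for Meta's "multiple gigawatts" and the assumption that Anthropic's $21 billion maps onto its ~4 GW of commitments are illustrative guesses, not disclosed figures:

```python
# Back-of-envelope: what does a gigawatt of custom-silicon compute imply
# in revenue terms, using only figures quoted in the article?

# Anthropic: $21B committed across two orders, covering roughly 1 GW in 2026
# plus "in excess of 3 gigawatts" in 2027 (assumption: the $21B spans ~4 GW).
anthropic_spend_b = 21.0
anthropic_gw = 1.0 + 3.0

dollars_per_gw = anthropic_spend_b / anthropic_gw  # ~$5.25B per GW

# Gigawatts explicitly named for 2027: >3 GW Anthropic, >1 GW OpenAI
# first-generation volume, and "multiple gigawatts" for Meta (2 GW used
# here as a placeholder). Google TPU demand was not quantified.
named_2027_gw = 3.0 + 1.0 + 2.0

implied_2027_revenue_b = named_2027_gw * dollars_per_gw
print(f"~${dollars_per_gw:.2f}B per GW; named 2027 GW imply ~${implied_2027_revenue_b:.0f}B")
```

At this naive rate, the explicitly named 2027 gigawatts account for only a fraction of the $100 billion forecast, which squares with the article's later point that the figure also rests on unquantified Google demand and contracts still in late-stage negotiation.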
Supply Chain Is Locked
One question that has followed Tan's big forecasts for the past year is whether Broadcom can actually deliver the volume needed to hit $100 billion. His answer Wednesday was unusually specific: "We have fully secured capacity of these components for 2026 through 2028."
That matters because custom AI silicon involves more than chip design. It requires advanced packaging (CoWoS or SoIC), high-bandwidth memory (HBM3E), optical networking, and the ability to assemble complete rack-level systems at scale. Broadcom has positioned itself as the integrator of all of those layers. Tan's supply chain comment was targeted directly at investors who remember the Nvidia allocation bottlenecks of 2023 and 2024.
Gigawatt-scale AI deployments require not just chips but fully integrated rack systems - which Broadcom now delivers end-to-end.
What the Numbers Don't Say
The Competitive Threat Is Self-Serving
Tan's argument that "we will not see competition in COT [customer-owned tooling, the model in which the customer owns the chip design] for many years to come" is also the argument of a CEO whose stock went up 5% during his earnings call. The barriers to entry he cited - yield management, supply chain scale, silicon design talent - are real, but the framing glosses over a few things.
Nvidia isn't standing still. Its Grace Blackwell architecture and the forthcoming Rubin platform are specifically designed to retain customers who might otherwise migrate to custom silicon. The economics of moving to custom ASICs only favor a customer once their workloads are sufficiently standardized and their volumes are high enough to justify the upfront engineering costs. Not every AI company will reach that threshold.
The $21 billion Anthropic commitment also reflects Google's role as the chip designer (Ironwood is a Google TPU), with Broadcom serving primarily as the manufacturing and integration partner. The underlying intellectual property for those TPUs sits in Mountain View, not San Jose.
There's also no independent verification of the 2027 forecast. The $100 billion figure comes from Broadcom's own modeling of contracts it has signed or is in late-stage negotiations to sign. Bernstein analyst Stacy Rasgon raised his price target to $525 and said EPS could approach $20 next year, but that's a sell-side endorsement, not a third-party audit.
"Today, in fact, we have line of sight to achieve AI revenue from chips, just chips, in excess of $100 billion in 2027," CEO Hock Tan said on the earnings call Wednesday. "We have also secured the supply chain required to achieve this."
So What?
Broadcom's Q1 report is the clearest evidence yet that the AI infrastructure boom has a second act beyond Nvidia. Custom silicon was always theoretically possible at scale; Broadcom is now making it real with named customers and confirmed supply chains. For anyone tracking where the money in AI actually goes - and how quickly the infrastructure layer is being locked up by a handful of incumbents - the gigawatt commitments disclosed this week are the most consequential numbers in the space since Nvidia's $100 billion OpenAI deal collapsed. That collapse, and what replaced it, is precisely the context Tan was addressing. The customers did not stop spending; they redirected. The question for 2026 is whether that redirection is permanent or a hedge.
Sources:
- Broadcom Inc. Announces First Quarter Fiscal Year 2026 Financial Results
- OpenAI and Broadcom Announce Strategic Collaboration to Deploy 10 Gigawatts of AI Accelerators
- Broadcom Q1 2026 Earnings Call Transcript
- Broadcom Says AI Companies Can't Make Their Own Silicon
- Broadcom Beats Q1 Earnings As AI Revenue More Than Doubles
- Broadcom Posts Modest Q1 Sales and Earnings Beats
