Google Turns to SpaceX for Orbital AI Data Centers

Bloomberg reports Google is in talks with SpaceX to launch its Project Suncatcher satellites - TPU-equipped spacecraft designed to run ML workloads in low Earth orbit.

Bloomberg reported this morning that Google is in talks with SpaceX about using Starship to launch the satellite clusters at the heart of Project Suncatcher - a year-long research effort to move AI compute into low Earth orbit. Neither company has confirmed details publicly, but the report signals that Google is moving from paper architecture to actual launch provider selection.

TL;DR

  • Bloomberg: Google is in talks with SpaceX and other launch providers about orbital AI data centers
  • Project Suncatcher proposes 81-satellite clusters at 650 km altitude carrying Trillium v6e TPUs
  • Bench tests show 1.6 Tbps inter-satellite optical links; TPUs tolerated nearly 3x the expected five-year mission radiation dose
  • Two prototype satellites launching with Planet Labs by early 2027 - the real test starts then
  • Starcloud and Aetherflux are already building competing orbital compute hardware

What Project Suncatcher Actually Is

Google unveiled Project Suncatcher in November 2025, publishing a preprint that laid out the hardware architecture for a constellation of solar-powered satellites carrying Trillium v6e TPUs and connected by free-space optical links. The reference design isn't a concept sketch. It models specific orbital configurations, reports radiation tests on specific hardware, and names Planet Labs as an early mission partner.

The reference architecture calls for 81-satellite clusters flying in tight formation at roughly 650 km altitude in a dawn-dusk sun-synchronous orbit. That orbital choice is deliberate: satellites on that path sit in near-constant sunlight, sidestepping the day-night cycle that ground-based solar has to bridge with storage.

Satellite Bus and Power System

At 650 km in a sun-synchronous orbit, a satellite receives roughly eight times more solar energy than an equivalent panel array on Earth's surface. Ground-based solar farms lose capacity at night and in bad weather. An orbiting satellite doesn't have either problem.

The power budget scales directly with array size. No combustion, no grid dependency, no brownout risk during heat waves when data center demand peaks.
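The ~8x figure can be sanity-checked with rough numbers. A minimal sketch, using assumed values for the solar constant, orbital sunlit fraction, and a typical ground-site capacity factor (none of these specific inputs are from Google's paper):

```python
# Back-of-envelope check on the ~8x solar advantage claim.
# Assumed figures: above-atmosphere solar constant, a near-continuous
# sunlit fraction for a dawn-dusk SSO, and a typical utility-scale
# ground capacity factor covering night, weather, and sun angle.
SOLAR_CONSTANT_W_M2 = 1361        # irradiance above the atmosphere
ORBIT_SUNLIT_FRACTION = 0.99      # dawn-dusk SSO: near-constant sun
GROUND_PEAK_W_M2 = 1000           # standard test-condition irradiance
GROUND_CAPACITY_FACTOR = 0.17     # typical for a good ground site

orbital_avg = SOLAR_CONSTANT_W_M2 * ORBIT_SUNLIT_FRACTION
ground_avg = GROUND_PEAK_W_M2 * GROUND_CAPACITY_FACTOR
ratio = orbital_avg / ground_avg

print(f"orbital average: {orbital_avg:.0f} W/m^2")
print(f"ground average:  {ground_avg:.0f} W/m^2")
print(f"advantage:       ~{ratio:.1f}x")
```

With these inputs the ratio lands right around 8x; the exact multiple shifts with the assumed ground capacity factor, which is the most site-dependent term.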

The Compute Layer: Trillium TPUs in Orbit

[Image: Radiation-hardened computer chips under an electron microscope. Radiation testing of processor hardware is a prerequisite for any orbital compute mission. Source: unsplash.com]

Google's Trillium v6e is the sixth generation of its custom TPU line, designed for large-scale transformer training and inference. To qualify it for space use, the team ran it through a 67 MeV proton beam test. The High Bandwidth Memory (HBM) subsystems showed their first irregularities at a cumulative dose of 2 krad(Si) - nearly three times the expected five-year radiation dose for a mission at that altitude. No hard failures appeared up to 15 krad(Si) on a single chip.

That margin matters. Space hardware needs safety factors. Passing at 3x the mission dose means the team has room to revise the shielding design without going back to the tape-out stage.
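The margins follow directly from the article's figures, taking the expected five-year shielded dose as 750 rad(Si) (the threshold listed in the viability numbers below):

```python
# Safety margins implied by the proton-beam results.
HBM_ANOMALY_KRAD = 2.0        # first HBM irregularities, krad(Si)
HARD_FAILURE_KRAD = 15.0      # no hard failures up to this dose
MISSION_5YR_KRAD = 0.75       # expected shielded five-year dose

soft_margin = HBM_ANOMALY_KRAD / MISSION_5YR_KRAD
hard_margin = HARD_FAILURE_KRAD / MISSION_5YR_KRAD

print(f"anomaly margin:      {soft_margin:.1f}x mission dose")
print(f"hard-failure margin: {hard_margin:.1f}x mission dose")
```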

The Interconnect: Free-Space Optical Links

This is the piece that makes the architecture viable. Traditional RF communication between satellites is bandwidth-limited. Google's bench tests demonstrated 1.6 Tbps of bidirectional throughput on a single free-space optical transceiver pair, achieved through dense wavelength-division multiplexing (DWDM).
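The 1.6 Tbps figure is an aggregate across many wavelengths sharing one optical path. A minimal sketch of the arithmetic, with a hypothetical channel plan - Google has published only the aggregate bench result, not per-channel rates:

```python
# How DWDM reaches Tbps-class throughput on one transceiver pair.
# The channel plan below is a hypothetical example, not Google's.
CHANNELS = 16                 # assumed number of wavelengths
PER_CHANNEL_GBPS = 100        # assumed per-wavelength line rate

aggregate_tbps = CHANNELS * PER_CHANNEL_GBPS / 1000
print(f"{CHANNELS} wavelengths x {PER_CHANNEL_GBPS} Gbps "
      f"= {aggregate_tbps} Tbps per direction")
```

Any combination of channel count and line rate with the same product works; the design trade-off is between laser/modulator complexity per channel and the number of channels the optics must multiplex.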

In formation-flying clusters at distances of hundreds of meters, satellites can maintain stable optical locks. Google used Hill-Clohessy-Wiltshire equations to model cluster orbital dynamics and showed that 81 satellites spaced within a 1 km radius cluster can hold formation with limited station-keeping burns. That's the theory - the Planet Labs mission in 2027 will be the first real test of optical lock stability in actual orbit.
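The HCW equations have a closed-form solution, which makes the bounded-formation claim easy to illustrate. A minimal sketch for a circular 650 km reference orbit, with illustrative initial conditions (not taken from Google's paper): choosing the along-track velocity as -2·n·x0 removes the secular drift term and yields a bounded 2:1 relative ellipse.

```python
import math

# Hill-Clohessy-Wiltshire relative motion about a circular 650 km orbit.
MU = 3.986004418e14            # Earth's GM, m^3/s^2
A = (6371 + 650) * 1000.0      # reference orbit radius, m
n = math.sqrt(MU / A**3)       # mean motion, rad/s

def hcw(t, x0, y0, z0, vx0, vy0, vz0):
    """Closed-form HCW state at time t (x radial, y along-track, z cross-track)."""
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4 - 3 * c) * x0 + s / n * vx0 + 2 / n * (1 - c) * vy0
    y = 6 * (s - n * t) * x0 + y0 + 2 / n * (c - 1) * vx0 + (4 * s - 3 * n * t) / n * vy0
    z = z0 * c + vz0 / n * s
    return x, y, z

# Drift-free condition vy0 = -2*n*x0: the deputy traces a closed ellipse
# around the reference point instead of drifting along-track.
x0 = 100.0                     # 100 m radial offset (illustrative)
vy0 = -2 * n * x0
period = 2 * math.pi / n
pts = [hcw(k * period / 360, x0, 0.0, 0.0, 0.0, vy0, 0.0) for k in range(361)]

print(f"orbital period:      {period / 60:.1f} min")
print(f"max radial excursion:     {max(abs(p[0]) for p in pts):.1f} m")
print(f"max along-track excursion: {max(abs(p[1]) for p in pts):.1f} m")
```

The bounded ellipse is why modest station-keeping suffices in the idealized model; real perturbations (differential drag, J2) reintroduce drift that the 2027 flight will have to measure.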

Thermal Management

Space is cold, but it's also a vacuum: there's no air to carry heat away from chips, so waste heat has to leave as radiation. Google's architecture uses heat pipes to move heat from the TPUs to radiator panels that emit it into space. The detailed thermal design isn't fully public, but the underlying engineering is mature - the ISS and generations of communications satellites use the same approach.
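A rough radiator-sizing sketch via the Stefan-Boltzmann law shows the scale involved. The panel temperature and heat load below are illustrative assumptions, not figures from Google's thermal design, and the calculation ignores absorbed sunlight and Earth infrared, which would push the required area higher:

```python
# Radiator area needed to reject a given heat load, single-sided,
# ignoring absorbed environmental heat (optimistic lower bound).
SIGMA = 5.670374419e-8         # Stefan-Boltzmann constant, W/m^2/K^4
EMISSIVITY = 0.9               # typical radiator coating
T_RADIATOR_K = 320.0           # ~47 C panel temperature (assumed)
HEAT_LOAD_W = 10_000.0         # assumed dissipation for a TPU payload

flux = EMISSIVITY * SIGMA * T_RADIATOR_K**4   # emitted W per m^2
area = HEAT_LOAD_W / flux

print(f"radiative flux: {flux:.0f} W/m^2")
print(f"radiator area for {HEAT_LOAD_W / 1000:.0f} kW: {area:.1f} m^2")
```

Even under these generous assumptions, rejecting tens of kilowatts takes tens of square meters of radiator - a major driver of satellite mass and geometry.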

The reference design at a glance:

Project Suncatcher reference cluster:
  Satellites: 81
  Formation radius: ~1 km
  Orbit: Sun-synchronous, 650 km altitude
  Inter-satellite links: Free-space optical (FSO)
  Link throughput: 1.6 Tbps per transceiver pair (bench tested)
  TPU: Trillium v6e (radiation tested)
  HBM irregularity threshold: 2 krad(Si) (~3x five-year mission dose)
  Power: Continuous solar, ~8x ground efficiency

The Viability Table

| Requirement | Threshold | Current State | Status |
| --- | --- | --- | --- |
| Launch cost to LEO | ~$200/kg | ~$1,000-2,000/kg | Gap: 5-10x |
| TPU radiation tolerance | 750 rad(Si) (5-yr) | 2,000 rad(Si) HBM threshold | Passed |
| Inter-satellite bandwidth | Tbps-class | 1.6 Tbps bench tested | Unverified in orbit |
| Formation-flying stability | Station-keeping viable | Modeled via HCW equations | Unverified in flight |
| Thermal management | Continuous operation | Standard radiative cooling | Achievable |

The chip and the optical link clear the technical bar on paper. Launch economics are the wall.


The Competitive Stack

Google isn't alone.

Starcloud, a Y Combinator-backed startup from Seattle, launched its first satellite carrying an NVIDIA H100 GPU in November 2025 and claims it was the first AI model trained in orbit. In March 2026, the company raised a $170 million Series A at a $1.1 billion valuation - the fastest YC company to reach unicorn status - led by Benchmark and EQT Ventures. Starcloud's roadmap includes a 200 kW, three-ton Starcloud-3 spacecraft designed to fit SpaceX's Starship "PEZ dispenser" satellite deployment system.

[Image: A satellite with deployed solar panels in orbit above Earth. Starcloud's Starcloud-1, carrying an NVIDIA H100 GPU, was the first orbital satellite to run an AI training workload. Source: unsplash.com]

Aetherflux and Aethero are also active in the space. SpaceX itself filed with the FCC for authorization to deploy up to one million data center satellites - a plan that folded into the SpaceX-xAI merger and billion-satellite FCC filing announced earlier this year.

The Bloomberg report notes Google is in talks with multiple launch providers, not just SpaceX. That's standard practice for a hardware program at this stage: you don't sign launch contracts before the payload design is finalized.


Where It Falls Short

The launch cost math doesn't work yet. Google's own research puts the economic break-even at roughly $200 per kilogram to LEO. SpaceX Starship's current operational cost sits significantly higher than that, and the $10-50/kg target Musk has cited for a fully reusable Starship doesn't have a confirmed timeline. Google's paper projects that $200/kg is achievable "in the mid-2030s" if Starship reaches 180 launches per year. Neither number is locked.
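The gap between the break-even target and current pricing is stark when multiplied across a cluster. A quick sketch, assuming a 1-ton satellite mass (the actual mass is not public):

```python
# Launch-economics gap, per the figures in the article.
BREAK_EVEN_USD_PER_KG = 200              # Google's stated threshold
CURRENT_LOW, CURRENT_HIGH = 1000, 2000   # rough current LEO pricing, $/kg

gap_low = CURRENT_LOW / BREAK_EVEN_USD_PER_KG
gap_high = CURRENT_HIGH / BREAK_EVEN_USD_PER_KG
print(f"pricing gap: {gap_low:.0f}x - {gap_high:.0f}x above break-even")

SAT_MASS_KG = 1000                        # assumed satellite mass
cluster_cost = BREAK_EVEN_USD_PER_KG * SAT_MASS_KG * 81
print(f"81-sat cluster launch cost at target: ${cluster_cost:,}")
```

At today's prices the same cluster launch would cost 5-10x more, which is why the whole program hinges on Starship's cost curve rather than on any of the electronics.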

Latency is the second constraint. LEO satellites add 5-20 ms of round-trip latency compared to ground infrastructure. For batch ML training workloads sharded across an orbital cluster without real-time ground feedback, that's acceptable. For inference pipelines where a user is waiting for a response, it's a harder engineering problem.
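The lower bound of that latency band is physics, not engineering. A minimal check of the light-travel floor for a satellite directly overhead:

```python
# Minimum round-trip time to a 650 km satellite: pure light travel,
# before processing delay, slant-range geometry, or intra-cluster hops.
C_KM_S = 299_792.458           # speed of light in vacuum, km/s
ALTITUDE_KM = 650

min_rtt_ms = 2 * ALTITUDE_KM / C_KM_S * 1000
print(f"physical floor (directly overhead): {min_rtt_ms:.1f} ms")
```

Real paths are longer - slant range to a satellite away from zenith plus hops inside the cluster - which is how the practical figure lands in the 5-20 ms band the article cites.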

Orbital debris risk is the third constraint and the one the industry doesn't talk about enough. Adding thousands of compute satellites to already-crowded LEO orbits raises collision probability across every operator. Google published a debris mitigation analysis in January 2026 that addressed deorbit planning, but the broader regulatory and insurance frameworks for high-density orbital data center constellations don't exist yet.

The Planet Labs partnership remains a learning mission. Two prototype satellites launching in early 2027 don't constitute a compute cluster. They verify orbital dynamics, optical link stability, and TPU performance under real radiation - all prerequisites before any production design gets locked.


The SpaceX talks move Project Suncatcher from research paper to procurement conversation. That's a different class of signal than a preprint.

Sophie Zhang
About the author AI Infrastructure & Open Source Reporter

Sophie is a journalist and former systems engineer who covers AI infrastructure, open-source models, and the developer tooling ecosystem.