US AI Labs Share Intel to Stop Chinese Model Theft

OpenAI, Anthropic, Google, and Microsoft are now sharing attack detection data through the Frontier Model Forum to collectively block Chinese adversarial distillation campaigns.

Three of America's biggest AI labs - plus Microsoft - are now sharing threat data with each other. OpenAI, Anthropic, and Google announced on April 6 that they're pooling attack detection intelligence through the Frontier Model Forum, the industry nonprofit all four companies co-founded in 2023. Their target is adversarial distillation - the systematic extraction of proprietary model capabilities by Chinese AI firms querying U.S. APIs at scale.

The cooperation follows Anthropic's February disclosure that DeepSeek, Moonshot AI, and MiniMax collectively created around 24,000 fraudulent accounts and ran over 16 million exchanges with Claude to copy its reasoning and tool-use behavior. Bloomberg reported the new Frontier Model Forum coordination on April 6, citing sources familiar with the arrangement.

"Massive volume concentrated in a few areas, highly repetitive structures, and content that maps directly onto what is most valuable for training an AI model."

  • Anthropic Threat Intelligence Team, describing the behavioral fingerprint of adversarial distillation
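
Those three traits - concentrated volume, repetitive structure, and training-shaped content - are straightforward to operationalize as heuristics. A minimal sketch of scoring an account's traffic against them (the fields, thresholds, and equal weighting are invented for illustration; this is not Anthropic's actual pipeline):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AccountTraffic:
    """Aggregated per-account API usage (illustrative fields)."""
    daily_requests: int
    prompt_templates: list[str]           # normalized prompt skeletons
    topic_distribution: dict[str, float]  # topic -> share of traffic

def distillation_score(t: AccountTraffic) -> float:
    """Score traffic on the three traits in the quote above:
    massive volume, repetitive structure, topical concentration."""
    # 1. Massive volume: saturate against an invented baseline.
    volume = min(t.daily_requests / 10_000, 1.0)

    # 2. Repetitive structure: share of traffic using the single
    #    most common prompt skeleton.
    counts = Counter(t.prompt_templates)
    repetition = counts.most_common(1)[0][1] / len(t.prompt_templates)

    # 3. Concentration on training-valuable topics (e.g. agentic
    #    coding, tool use) rather than a natural spread.
    concentration = max(t.topic_distribution.values())

    return (volume + repetition + concentration) / 3

traffic = AccountTraffic(
    daily_requests=50_000,
    prompt_templates=["solve_code_task"] * 90 + ["chat"] * 10,
    topic_distribution={"agentic_coding": 0.95, "other": 0.05},
)
print(f"distillation score: {distillation_score(traffic):.2f}")  # 0.95
```

A real system would fold in many more signals - account provenance, payment metadata, proxy reputation - but the shape of the heuristic follows the quoted fingerprint directly.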

U.S. officials estimate the unauthorized copying costs American AI companies billions of dollars annually in compute and research value. OpenAI and Anthropic together have spent over $18 billion on R&D compute since 2024. Chinese models built on distilled U.S. capabilities reportedly run at roughly one-fourteenth the cost of their U.S. counterparts - a price gap that has driven adoption in markets where cost is the deciding factor.

Stakeholder Impact

| Stakeholder | Impact | Timeline |
| --- | --- | --- |
| OpenAI, Anthropic, Google | Shared attack signatures; faster joint detection | Now |
| DeepSeek, Moonshot, MiniMax | API bans, tighter verification, possible Entity List designation | Ongoing |
| Enterprise API customers | Stricter new-account verification; no disruption to established users | Q2-Q3 2026 |
| U.S. government | ISAC proposal in AI Action Plan; PAIP Act sanctions path now active | Q3 2026+ |

Companies

The Frontier Model Forum coordination means the four companies are sharing specific attack signatures - behavioral patterns, account characteristics, and prompt structures associated with distillation campaigns. Previously each lab acted on its own: Anthropic blocked accounts and modified outputs to reduce distillation value; OpenAI submitted evidence to U.S. lawmakers in February 2025. Now they're connecting those independent datasets.

The practical gain is faster detection. When one lab identifies a new evasion technique - such as the "Hydra cluster" architecture, which uses tens of thousands of simultaneously active fraudulent accounts routed through commercial proxy services - that fingerprint can reach the others before the same technique targets them.
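
The Forum hasn't said what these shared fingerprints look like on the wire. As a rough sketch, loosely modeled on how security ISACs exchange indicators of compromise, a record for something like the Hydra cluster pattern might carry fields like these (every field name and value here is hypothetical):

```python
import json
from datetime import datetime, timezone

# Hypothetical shared-signature record - the Frontier Model Forum
# has not published its actual schema; all fields are illustrative.
signature = {
    "id": "fmf-sig-2026-0412",
    "reported_by": "lab-a",            # pseudonymized source lab
    "technique": "hydra-cluster",      # evasion technique label
    "indicators": {
        "proxy_asns": [64512, 64513],  # example private-use ASNs
        "account_age_days_max": 3,     # freshly created accounts
        "prompt_skeleton_hashes": ["9f2c...", "a41b..."],
        "requests_per_account_per_day": {"min": 400},
    },
    "first_seen": datetime(2026, 4, 10, tzinfo=timezone.utc).isoformat(),
    "confidence": "high",
}
print(json.dumps(signature, indent=2))
```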

Anthropic has already put in place a complete ban on access by Chinese-controlled companies. OpenAI and Google have confirmed stricter rate limiting and account verification protocols, but have stopped short of blanket bans.

Users

For the vast majority of API users, the changes are invisible. Detection focuses on automated, scripted patterns at massive volume - not individual developers or researchers running occasional large queries. Rate limit adjustments and tighter verification will affect new account creation more than established users.
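
None of the labs have published their verification tiers, but the logic of "new accounts feel it, established users don't" is easy to picture. A toy sketch with invented thresholds, not any lab's actual policy:

```python
def daily_request_limit(account_age_days: int, verified: bool) -> int:
    """Illustrative tiered rate limit. New, unverified accounts get
    a tight cap; established accounts are effectively unaffected."""
    if not verified and account_age_days < 7:
        return 100       # new + unverified: tight cap
    if account_age_days < 30:
        return 5_000     # verified but recent
    return 100_000       # established: effectively unchanged

print(daily_request_limit(2, False))   # 100
print(daily_request_limit(400, True))  # 100000
```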

The risk is different for smaller AI providers that lack the resources to run behavioral monitoring at scale. The Institute for AI Policy and Strategy noted in a concurrent policy memo that adversarial distillation campaigns "target the weakest points in the ecosystem first, not the largest ones."

Competitors

This is also competitive positioning. The three U.S. labs cooperating on defense are the same three competing on products and customers. Sharing threat data is not sharing model weights or commercial strategy - but it creates a new category of industry coordination with no clear precedent. The companies have acknowledged "antitrust uncertainties" limit how much competitive information they can legally share. The Frontier Model Forum provides an institutional buffer, but the cooperation sits in a legal gray area until regulators weigh in.

Sam Altman, Sundar Pichai, and Dario Amodei with Prime Minister Narendra Modi at the India AI Impact Summit in New Delhi, February 2026. The three CEOs are now sharing threat data through the Frontier Model Forum. Source: commons.wikimedia.org

What the Attacks Actually Look Like

Model distillation is a legitimate training technique - frontier labs use it internally to build cheaper, faster versions of their own systems. Adversarial distillation differs in authorization and method: outside parties query proprietary APIs with engineered prompts, collect the outputs as training data, then train a new model to copy the behavior.
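
Mechanically, the adversarial variant is the standard distillation recipe pointed at someone else's API. A schematic sketch of the two steps - the teacher client and the training step are placeholders, not any real lab's endpoint:

```python
# Schematic of adversarial distillation: harvest teacher outputs via
# API queries, then fine-tune a student to imitate them.

def query_teacher(prompt: str) -> str:
    """Stand-in for an authenticated API call to a proprietary model."""
    raise NotImplementedError("placeholder for a real API client")

def build_distillation_set(prompts: list[str]) -> list[dict]:
    """Step 1: engineered prompts aimed at valuable behavior
    (agentic coding, tool use, multi-step reasoning) become
    (prompt, completion) training pairs."""
    return [{"prompt": p, "completion": query_teacher(p)} for p in prompts]

def fine_tune_student(dataset: list[dict]) -> None:
    """Step 2: standard supervised fine-tuning - next-token
    cross-entropy on the teacher's completions - teaches a cheaper
    model to reproduce the behavior."""
    for example in dataset:
        ...  # forward pass + loss on example["completion"]
```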

The campaigns Anthropic documented were not clumsy. MiniMax's 13 million exchanges targeted agentic coding and tool use. Moonshot AI's 3.4 million focused on agentic reasoning and computer vision. According to IAPS analysis, within 24 hours of a new Claude release, attackers pivoted to capture its new capabilities. The accounts used commercial proxy services and third-party API routers to mask their country of origin.

DeepSeek, which released its R1 reasoning model in January 2025 to widespread praise for its performance-to-cost ratio, had more than 150,000 Claude exchanges attributed to its associated accounts, with documented attempts to circumvent safety guardrails via third-party routers. The debate over whether its efficiency reflects genuine engineering or distillation of U.S. capabilities hasn't been resolved, but Anthropic's evidence is specific. DeepSeek has since been blocked from accessing Nvidia's latest chips on national security grounds.

The campaign structure - thousands of automated accounts routing queries through commercial proxies - follows patterns documented in foreign economic espionage cases. Illustration: U.S. National Counterintelligence Executive. Source: commons.wikimedia.org

What Happens Next

The Trump administration's AI Action Plan includes a proposal for a government-run Information Sharing and Analysis Center (ISAC), partly designed to formalize the type of coordination the Frontier Model Forum is now providing informally. Whether it becomes a funded federal body or stays industry-run is unresolved.

On sanctions, the IAPS memo recommends adding DeepSeek, Moonshot AI, and MiniMax to the Bureau of Industry and Security's Entity List under a presumption of denial - a designation that extends to all entities they own at 50% or more. A parallel path under the Protecting American Intellectual Property (PAIP) Act could trigger asset blocking. The first PAIP Act designations came in February 2026, establishing that the mechanism is usable.
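
For illustration only, the ownership test behind that designation can be sketched as a simple aggregation check. Whether shares held by multiple listed parents are combined depends on the specific rule, so treat this as a toy model with invented names, not legal guidance:

```python
def covered_by_entity_list(ownership: dict[str, float],
                           listed: set[str]) -> bool:
    """Toy 50%-rule check: an entity is covered if listed parents
    hold 50% or more of it (aggregated here for illustration)."""
    listed_share = sum(share for parent, share in ownership.items()
                       if parent in listed)
    return listed_share >= 0.50

# Hypothetical subsidiary owned 30% + 25% by two listed firms.
print(covered_by_entity_list(
    {"listed_firm_a": 0.30, "listed_firm_b": 0.25, "other": 0.45},
    listed={"listed_firm_a", "listed_firm_b"},
))  # True: 55% aggregate
```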

The harder problem is economic. Export controls on advanced chips restrict hardware access, but distillation partially routes around that constraint - a lab can acquire frontier AI reasoning capabilities through API queries rather than raw compute. Blocking accounts addresses the tactic. It doesn't change the fact that copying is cheaper than building, or that the gap between U.S. frontier performance and Chinese alternatives is the economic engine driving the attacks in the first place.

Sources:

  • Anthropic - Detecting and Preventing Distillation Attacks
  • The Decoder - OpenAI, Anthropic, and Google team up against unauthorized Chinese model copying
  • IAPS - AI Distillation Attacks: The Case for Targeted Government Intervention
  • BanklessTimes - OpenAI, Google, Anthropic team up to block Chinese scraping

About the author: Daniel, AI Industry & Policy Reporter

Daniel is a tech reporter who covers the business side of artificial intelligence - funding rounds, corporate strategy, regulatory battles, and the power dynamics between the labs racing to build frontier models.