Anthropic Says DeepSeek, Moonshot, and MiniMax Ran 24,000 Fake Accounts to Steal Claude's Capabilities
Anthropic accuses three Chinese AI labs of industrial-scale distillation attacks using 24,000 fraudulent accounts and 16 million exchanges with Claude. MiniMax ran the largest operation at 13 million exchanges. None of the three companies have responded.

Anthropic published a detailed report on February 23 accusing three Chinese AI laboratories -- DeepSeek, Moonshot AI, and MiniMax -- of conducting industrial-scale campaigns to extract Claude's capabilities through model distillation. The three labs collectively created approximately 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, harvesting its reasoning patterns, tool use behavior, and chain-of-thought processes to train their own models.
We've identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax—three Chinese AI labs that used ~24,000 fake accounts and over 16 million exchanges with Claude to extract its capabilities for training their own systems.
— Anthropic (@AnthropicAI) February 23, 2026
Jacob Klein, Anthropic's Head of Threat Intelligence, told Fox News: "We have high confidence these labs were conducting distillation attacks at scale." He added that the capability gains were "meaningful" and "substantial," but acknowledged: "There isn't an immediate silver bullet to stop all of these."
None of the three accused companies have issued a public response.
TL;DR
| Lab | Exchanges | Focus |
|---|---|---|
| DeepSeek | 150,000+ | Reasoning, chain-of-thought extraction, censorship-safe alternatives |
| Moonshot AI (Kimi) | 3.4 million+ | Agentic reasoning, tool use, computer vision |
| MiniMax (Hailuo AI) | 13 million+ | Agentic coding, tool use |
| **Total** | 16 million+ | across ~24,000 fraudulent accounts |
What happened
Model distillation is a technique where a weaker model is trained on the outputs of a stronger one, extracting its capabilities "in a fraction of the time, and at a fraction of the cost" compared to independent development. It is a widely used and legitimate training method -- frontier labs use it internally to create cheaper versions of their own systems. What makes this case different is the scale, the coordination, and the fact that Anthropic does not offer commercial access to Claude in China.
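The mechanism itself is simple. A minimal sketch of the loop, with all names illustrative (a real pipeline would run supervised fine-tuning of a neural network on the harvested pairs, not a lookup table):

```python
# Toy sketch of model distillation: a "student" is trained purely on
# (prompt, response) pairs harvested from a "teacher". The teacher here
# is a stub standing in for a frontier model API.

def teacher(prompt: str) -> str:
    """Stand-in for the stronger model. Emits a canned reasoning trace."""
    return f"Step 1: parse {prompt!r}. Step 2: answer."

def harvest(prompts: list[str]) -> list[tuple[str, str]]:
    """The extraction loop: send prompts, collect responses."""
    return [(p, teacher(p)) for p in prompts]

def train_student(pairs: list[tuple[str, str]]) -> dict[str, str]:
    """Toy 'training': the student memorizes the teacher's outputs.
    Real distillation fine-tunes model weights on these pairs instead."""
    return dict(pairs)

pairs = harvest(["2+2", "capital of France"])
student = train_student(pairs)
# On seen prompts, the student now reproduces the teacher's behavior.
```

Scale the prompt list from two entries to 16 million and you have, in outline, what Anthropic alleges.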
DeepSeek: 150,000+ exchanges
DeepSeek generated over 150,000 exchanges focused on reasoning capabilities and rubric-based grading tasks -- the kind of data used to train reward models. Prompts asked Claude to "imagine and articulate the internal reasoning behind a completed response and write it out step by step," effectively generating chain-of-thought training data at scale.
A notable detail: some prompts directed Claude to generate alternatives to politically sensitive queries about "dissidents, party leaders, or authoritarianism" -- using an American model to produce censorship-safe responses for content that Chinese models are required to refuse.
DeepSeek distributed traffic across accounts using load balancing; the accounts showed identical request patterns, shared payment methods, and coordinated timing.
Moonshot AI: 3.4 million+ exchanges
Moonshot AI, the company behind the Kimi series of chatbots, conducted over 3.4 million exchanges targeting agentic reasoning, tool use, coding, data analysis, and computer vision. Later phases specifically attempted to extract and reconstruct Claude's reasoning traces.
Moonshot employed hundreds of fraudulent accounts spanning multiple access pathways. Anthropic attributed the campaign through request metadata that matched the public profiles of senior Moonshot staff.
MiniMax: 13 million+ exchanges
MiniMax, the company behind Hailuo AI, ran the largest operation by far, generating over 13 million exchanges focused on agentic coding and tool use.
The most striking detail: when Anthropic released a new model during MiniMax's active campaign, MiniMax pivoted within 24 hours, redirecting nearly half their traffic to capture capabilities from the latest system. Anthropic detected this campaign while it was still active -- before MiniMax released the model it was training -- giving Anthropic what it described as "unprecedented visibility into the distillation attack lifecycle."
How 24,000 fake accounts work
The accounts were not created individually. The operations relied on commercial proxy services -- also called "mirror sites" -- that resell access to Claude and other frontier APIs at scale. These services procure access using phone numbers and payment methods from supported regions, then resell to users in China.
One proxy network managed 20,000+ fraudulent accounts simultaneously in what Anthropic calls a "hydra cluster" architecture, distributing traffic across Anthropic's API and third-party cloud platforms. Additional accounts exploited educational account pathways, security research programs, and startup organization registrations.
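The point of a "hydra cluster" is that no single account carries enough traffic to look anomalous on its own. A hypothetical sketch of the distribution logic (the class, stub, and key names are invented for illustration):

```python
# Illustrative sketch of distributing requests across a large account pool
# so per-account volume stays low. AccountPool and send() are hypothetical.

import itertools

class AccountPool:
    """Round-robin rotation over a pool of API keys."""
    def __init__(self, api_keys: list[str]):
        self._cycle = itertools.cycle(api_keys)

    def next_key(self) -> str:
        return next(self._cycle)

def send(prompt: str, key: str) -> str:
    """Stub for an API call authenticated with one account's key."""
    return f"response to {prompt!r} via {key}"

pool = AccountPool([f"key-{i}" for i in range(20_000)])
# Spread 16 million exchanges over 20,000 accounts and each key
# averages only ~800 requests -- well inside normal-looking usage.
responses = [send(p, pool.next_key()) for p in ("q1", "q2", "q3")]
```

This is why per-account rate limits alone do not stop the pattern, and why Anthropic's detection leans on cross-account correlation instead.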
Anthropic detected the campaigns through:
- IP address correlation and infrastructure indicators
- Request metadata analysis (in Moonshot's case, matching senior staff profiles)
- Behavioral fingerprinting -- the volume, structure, and focus of prompts were distinct from normal usage
- Chain-of-thought elicitation detection
- Coordinated activity patterns across large numbers of accounts
- Corroboration from industry partners
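One of those signals, behavioral fingerprinting, can be sketched as simple per-account feature extraction. The thresholds, field names, and marker phrases below are illustrative assumptions, not Anthropic's actual system:

```python
# Hedged sketch of behavioral fingerprinting: compute per-account request
# features and flag accounts whose volume and prompt structure deviate
# from normal usage. All thresholds and markers are invented examples.

from collections import defaultdict

COT_MARKERS = ("step by step", "internal reasoning", "chain of thought")

def fingerprint(requests: list[tuple[str, str]]) -> dict:
    """Group (account, prompt) pairs and count chain-of-thought elicitation."""
    stats = defaultdict(lambda: {"count": 0, "cot": 0})
    for account, prompt in requests:
        s = stats[account]
        s["count"] += 1
        if any(m in prompt.lower() for m in COT_MARKERS):
            s["cot"] += 1
    return stats

def flag_suspicious(stats: dict, min_volume=1000, cot_ratio=0.8) -> list[str]:
    """Flag accounts with high volume AND mostly CoT-elicitation prompts --
    a combination distinct from ordinary API usage."""
    return [a for a, s in stats.items()
            if s["count"] >= min_volume and s["cot"] / s["count"] >= cot_ratio]
```

A production system would combine many more signals (IP correlation, payment metadata, timing), but the principle is the same: the attack's structure is its fingerprint.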
This is not just Anthropic's problem
Anthropic's disclosure follows closely similar accusations from the other two major U.S. frontier labs:
OpenAI sent a memo to the House Select Committee on the CCP on February 12 alleging that DeepSeek systematically "stole" its intellectual property through large-scale distillation. OpenAI observed "accounts associated with DeepSeek employees developing methods to circumvent OpenAI's access restrictions and access models through obfuscated third-party routers." The memo stated China's distillation methods have moved beyond basic chain-of-thought extraction to multi-stage operations including synthetic data generation and large-scale data cleaning.
Google's Threat Intelligence Group reported distillation attacks on Gemini on the same day. One campaign prompted Gemini more than 100,000 times before Google identified what was happening. The prompts attempted to coerce Gemini into outputting full reasoning processes across a wide variety of tasks in non-English languages. Google attributed some activity to both private-sector and state-aligned actors.
As The Register noted, "public-facing AI models are widely accessible, and enforcement against abusive accounts can turn into a game of whack-a-mole." John Hultquist of Google stated: "Your model is really valuable IP, and if you can distill the logic behind it, there's very real potential that you can replicate that technology -- which is not inexpensive."
The national security framing
Anthropic explicitly frames distillation as a national security issue, arguing that foreign labs that distill American models can strip safety guardrails and "feed these unprotected capabilities into military, intelligence, and surveillance systems -- enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance."
Representative John Moolenaar, chair of the House Select Committee on the CCP, stated: "This is part of the CCP's playbook: steal, copy, and kill."
The House Select Committee previously published a report finding that DeepSeek funnels Americans' data to the PRC through backend infrastructure connected to a U.S. government-designated Chinese military company.
Dario Amodei has argued that export controls are "one of our most powerful tools" and called selling advanced AI chips to China a "major mistake." But current export controls focus on limiting chip access and direct model weight transfers. Distillation exploits a third vector that existing policy does not adequately address: extracting capabilities through API access, one prompt at a time.
The enforcement problem
The fundamental tension is that model distillation is trivially accessible. A group of college students recently replicated the basic technique for $52. The difference between that and what DeepSeek, Moonshot, and MiniMax did is scale -- 24,000 accounts and 16 million exchanges versus a handful of API calls -- but the underlying mechanism is identical: send prompts, collect responses, fine-tune a student model on the outputs.
Anthropic has built classifiers and behavioral fingerprinting systems, strengthened verification for commonly exploited account pathways, and is sharing technical indicators with other labs and cloud providers. But as Klein acknowledged, there is no silver bullet. Proxy networks can always create new accounts. API access, by design, means giving the model's outputs to whoever asks.
The Diffusion Rule published in January 2025 established a three-tier export control framework for AI, with China in the most restricted tier. Anthropic updated its terms in September 2025 to block entities more than 50% owned by companies headquartered in unsupported regions. In January 2026, it deployed technical safeguards blocking third-party tools from spoofing the official Claude Code client.
None of it stopped 16 million exchanges.
The three accused companies -- DeepSeek, Moonshot AI, and MiniMax -- have not responded. Fox News Digital reached out to all three and received no reply. Whether this disclosure triggers policy changes, legal action, or simply another round of whack-a-mole depends on whether governments treat API-based distillation as seriously as chip exports and model weight transfers. So far, they have not.
Sources:
- Detecting and preventing distillation attacks - Anthropic
- Top AI firm alleges Chinese labs used 24K fake accounts - Fox News
- Anthropic accuses Chinese AI labs of stealing data from Claude - Investing.com
- Anthropic says DeepSeek, Moonshot, MiniMax created 24,000 fake accounts - OfficeChai
- OpenAI alleges China's DeepSeek stole its IP - FDD
- Google Gemini hit with 100,000 prompt cloning attempt - NBC News
- Distillation, experimentation, and integration of AI - Google Cloud Blog
- AI risk: distillation attacks - The Register
- OpenAI-DeepSeek distillation dispute - Rest of World
- DeepSeek Unmasked - House Select Committee on the CCP
- On DeepSeek and export controls - Dario Amodei
- Anthropic regional restrictions
- Anthropic position on the Diffusion Rule
- Anthropic clarifies third-party client ban - The Register
