
US AI Labs Share Intel to Stop Chinese Model Theft
OpenAI, Anthropic, Google, and Microsoft are now sharing attack detection data through the Frontier Model Forum to collectively block Chinese adversarial distillation campaigns.

A community fine-tune distills Claude Opus 4.6 chain-of-thought reasoning into Qwen3.5-27B via LoRA, racking up 4,000+ downloads in days. No benchmarks yet - but the approach raises familiar questions.

A community fine-tune that distills Claude Opus 4.6 reasoning into Qwen3.5-27B via LoRA. 27B parameters, Apache 2.0, no published benchmarks.

Comparing the Claude Opus reasoning-distilled Qwen3.5-27B against the base model - what chain-of-thought distillation adds, and what it costs in context length, multimodal capability, and reliability.

Claude Sonnet 4.6 identifies itself as DeepSeek when prompted in Chinese, just one day after Anthropic accused DeepSeek of industrial-scale distillation attacks. The cause is training data contamination, not an identity crisis - but the timing is remarkable.

Anthropic accuses three Chinese AI labs of industrial-scale distillation attacks using 24,000 fraudulent accounts and 16 million exchanges with Claude. MiniMax ran the largest operation at 13 million exchanges. None of the three companies have responded.