OpenAI, Anthropic, and Google Join Forces Against Chinese Model Copying
In a twist that would have seemed impossible a year ago, the three fiercest rivals in AI — OpenAI, Anthropic, and Google — are teaming up. The enemy isn’t each other. It’s Chinese labs copying their models.
Bloomberg reported on April 6 that the three companies have agreed to share intelligence about “distillation attacks” — where rival labs feed prompts to powerful AI models, collect the outputs, and use them to train cheaper knockoffs. They’re coordinating through the Frontier Model Forum, a nonprofit they founded with Microsoft in 2023.
🪤 What’s Distillation?
Distillation isn’t new — it’s a standard AI training technique where you use a larger model’s outputs to train a smaller one. Apple does it openly, reportedly paying Google about $1 billion a year to use Gemini outputs to train a smarter Siri.
The problem is when it’s done without permission. Chinese labs are systematically querying US frontier models through APIs and collecting millions of outputs to build competing models at a fraction of the cost. It’s the AI equivalent of photocopying someone else’s textbook and selling it as your own.
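Mechanically, classic distillation trains a small “student” model to match a large “teacher” model’s output distribution. The sketch below shows the standard temperature-scaled distillation loss in NumPy; it is an illustration of the general technique, not any specific lab’s pipeline (API-based copying typically sees only sampled text, not the teacher’s logits).

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: higher T yields a softer distribution,
    exposing more of the teacher's relative preferences between tokens."""
    scaled = logits / temperature
    exps = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exps / exps.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the student's distribution to the teacher's.
    Training minimizes this, pulling the student toward the teacher."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(np.sum(p_teacher * np.log(p_teacher / p_student)))

# Toy example: the teacher confidently prefers token 2. A student that
# agrees incurs a lower loss than one that prefers a different token.
teacher    = np.array([1.0, 2.0, 5.0])
aligned    = np.array([0.5, 1.0, 3.0])   # also prefers token 2
misaligned = np.array([3.0, 1.0, 0.5])   # prefers token 0

assert distillation_loss(teacher, aligned) < distillation_loss(teacher, misaligned)
```

Unauthorized API distillation is the crude cousin of this: instead of matching logits, you fine-tune directly on millions of sampled completions, which is why high-volume querying is the telltale signature.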
💰 The Financial Damage
US officials estimate that unauthorized distillation costs Silicon Valley labs billions of dollars in annual profit, per Bloomberg.
The damage goes beyond direct revenue loss. When DeepSeek released a major new reasoning model in January 2025 that rivaled US frontier models on key benchmarks — a model likely built on distilled outputs — the shock wiped nearly $1 trillion off US and European tech stocks in a single day.
If a Chinese model is a distilled Claude, why would anyone pay a premium subscription for the original?
🚨 The Safety Angle
The companies argue that distilled models strip out safety guardrails — the protections that prevent anyone, including foreign adversaries, from using AI to create pathogens or launch disinformation campaigns.
This is both a genuine concern and a convenient talking point. A distilled model presumably inherits some of the teacher’s safety training along with its capabilities — but open weights can be fine-tuned to strip those protections. An open-weight model without robust safety guardrails, available for download by anyone, is a different risk profile than a gated API with content filters.
But the financial threat likely looms larger for the companies themselves. Open-source models are commoditizing the very capabilities that justify premium pricing.
📋 The Receipts
The evidence is accumulating:
- January 2025: Microsoft claimed that DeepSeek was extracting large amounts of data through OpenAI’s API
- February 2026: OpenAI told Congress that DeepSeek was trying to “free-ride on the capabilities developed by OpenAI and other US frontier labs”
- February 2026: Anthropic accused three Chinese AI companies of using over 24,000 fake accounts to generate 16 million exchanges with Claude, and traced some of those accounts to senior staff at the labs in question
Anthropic’s disclosure was particularly striking — 24,000 fake accounts generating 16 million exchanges is industrial-scale distillation, not casual use.
⚖️ The Legal Problem
Here’s the catch: AI outputs can’t be copyrighted under US law. The companies are treating distillation as a terms-of-service violation, but that’s a contract issue, not a property right.
The strongest path to recourse is political. The Trump administration’s AI Action Plan already called for an industry information-sharing center to fight distillation. But the labs face a bind: when some of the biggest AI firms start trading intelligence about competitors, it can look like collusion, even if the point is to stop someone else from copying their homework.
🔍 The Bottom Line
Three companies that spend billions competing against each other are now cooperating because the threat from Chinese distillation is bigger than their rivalry. The economics are stark: if anyone can copy your model for a fraction of the training cost, your $100M+ training investment becomes a subsidy for your competitors. The legal framework to stop it doesn’t exist yet. The political will is building but complicated by antitrust concerns. And the Chinese labs keep getting better at it. This collaboration is less about safety and more about survival — but the safety angle gives it a narrative that resonates with regulators. Expect this to become a major policy fight in 2026.
Sources
- Bloomberg: “OpenAI, Anthropic, Google Unite to Combat Model Copying in China” (April 6, 2026)
- Tech Brew: “OpenAI, Anthropic, Google join forces against China” (April 7, 2026)
- The Information: “OpenAI, Anthropic, Google Agree to Develop Agent Standards Together”
- Frontier Model Forum: Industry collaboration nonprofit (founded 2023)