How three Chinese AI firms took an industrial-scale free ride on Claude

by Girls Rock Investing
Anthropic alleges Chinese AI firms used fake accounts to distill Claude at scale, raising security concerns.

Anthropic says it has uncovered what amounts to an industrial-scale “free ride” on one of America’s best AI models.

In a new disclosure, the Claude-maker said three prominent Chinese AI firms, namely DeepSeek, Moonshot AI, and MiniMax, used about 24,000 fake accounts to generate more than 16 million interactions with Claude.

The goal, Anthropic says, was to copy Claude's capabilities without authorization. It wasn't described as a hack or a data breach.

Instead, Anthropic framed it as something harder to spot: competitors learning from Claude’s answers at a massive scale.

What “AI distillation” is and how this was done

The technique at the center of Anthropic’s allegation is “distillation,” a common AI training method where a smaller model is trained on the outputs of a more capable one.

Done legitimately, it’s a way to make cheaper, lighter models; done illicitly, it can become a shortcut that transfers performance without paying the original developer or building comparable research pipelines.
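For readers unfamiliar with the mechanics, the sketch below shows the textbook form of distillation: a small "student" model is trained to match a larger "teacher" model's softened output probabilities. It is a generic, hypothetical illustration in PyTorch, not a reconstruction of any firm's pipeline, and the model sizes, temperature, and data are invented for the example; distillation against a chat API typically relies on the model's generated text rather than its raw logits, but the principle of learning from another model's outputs is the same.

```python
# Toy, hypothetical illustration of knowledge distillation in PyTorch.
# A small "student" model learns to imitate a larger "teacher" model's
# softened output distribution instead of learning from labeled data.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in models: the teacher is larger and "more capable", the student smaller.
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's probabilities so the student sees richer signal

for step in range(200):
    x = torch.randn(64, 32)  # unlabeled inputs; only the teacher's answers are used

    with torch.no_grad():  # the teacher is only queried, never updated
        teacher_probs = F.softmax(teacher(x) / temperature, dim=-1)

    student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)

    # KL divergence pulls the student's output distribution toward the teacher's.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature**2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if step % 50 == 0:
        print(f"step {step}: distillation loss = {loss.item():.4f}")
```

The same idea scales up when the teacher is a frontier model behind an API: prompts go out, responses come back, and those responses become training data for the smaller model, which is why Anthropic's complaint centers on who was sending the prompts and under what terms.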

Anthropic said the three firms’ activity violated its terms of service and regional access restrictions, and it characterized the campaign as coordinated and “industrial-scale.”

The company said it has “high confidence” these actors were conducting distillation attacks, and that the traffic disproportionately targeted Claude’s more advanced skills, such as reasoning and coding.

In total, Anthropic attributed more than 16 million interactions across roughly 24,000 fake accounts to the three firms.

A key enabler, Anthropic said, was the use of proxy services that resell access to frontier models and can mask who is behind the requests.

It described “hydra cluster” architectures: large networks of fraudulent accounts that spread requests across Anthropic’s API and third-party cloud platforms, making takedowns a game of whack-a-mole rather than a clean shutdown.

In one case, a single proxy network managed more than 20,000 fraudulent accounts simultaneously, mixing distillation traffic with unrelated customer activity to make detection harder.


Beyond Anthropic: Broader implications

Anthropic’s warning arrives at a uniquely sensitive moment, because Claude is not just a consumer chatbot.

The company says the US Department of Defense awarded it a two-year prototype “other transaction agreement” with a $200 million ceiling to build frontier AI capabilities for national security use.

Claude is being integrated into defense workflows on classified networks via partners such as Palantir.

That backdrop is part of why Anthropic frames distillation as more than a commercial issue: it argues illicitly distilled models may not preserve the safeguards US labs build into their systems, creating national security risks.

Anthropic also tied the issue directly to policy.

“The window to act is narrow,” the company wrote, urging faster, coordinated action from industry and policymakers as distillation campaigns grow in “intensity and sophistication.”

It argued that distillation can undermine US export controls by letting foreign labs “close the competitive advantage” that those controls aim to protect.

