Anthropic Accuses Chinese AI Labs of "Stealing" Data - ForkLog: Cryptocurrency, AI, Singularity, Future

# Anthropic Accuses Chinese AI Labs of “Data Theft”

Anthropic has accused three Chinese AI startups — DeepSeek, Moonshot, and MiniMax — of conducting a large-scale campaign using Claude to improve their own models.

According to Anthropic, the Chinese labs generated more than 16 million interactions with the chatbot through approximately 24,000 fraudulent accounts, violating its terms of use and regional restrictions.

“We have high confidence that each campaign is linked to a specific company based on correlations of IP addresses, query metadata, infrastructure features, and confirmations from industry partners. They targeted the most unique capabilities of Claude: agent reasoning, tool use, and programming,” Anthropic said.

The companies used distillation — training a less powerful neural network based on the outputs of a stronger one.

This is a widely used and legitimate method. Leading AI labs regularly distill their own models to create compact and inexpensive versions for clients.
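To make the mechanism concrete, here is a minimal, self-contained sketch of distillation on a toy problem. All numbers and names are illustrative assumptions, not details from Anthropic's report: the "teacher" is just a fixed probability distribution standing in for a large model's soft predictions, and the "student" is a small set of learnable logits fit to match it.

```python
import math

# Hypothetical "teacher" output: a soft distribution over 4 tokens,
# standing in for a large model's predictions (illustrative numbers).
teacher = [0.70, 0.15, 0.10, 0.05]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(p, q):
    # H(p, q): the standard distillation loss between teacher p and student q
    return -sum(pi * math.log(qi + 1e-12) for pi, qi in zip(p, q))

# "Student": learnable logits; in practice a far smaller network.
logits = [0.0] * 4
lr = 0.5
for _ in range(500):
    q = softmax(logits)
    # The gradient of cross-entropy w.r.t. the logits is simply (q - p).
    logits = [z - lr * (qi - pi) for z, qi, pi in zip(logits, q, teacher)]

student = softmax(logits)
```

After training, the student's output distribution closely matches the teacher's — which is exactly why large-scale access to a stronger model's outputs is valuable to a competitor.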

“However, it can also be used illegally: competitors improve capabilities by leveraging others’ LLMs in a short time and at lower costs compared to developing their own,” Anthropic stated in its blog.

The company emphasized that the window for responding to such “theft” is narrow, and the threat extends beyond a single company or region. Addressing it will require quick, coordinated action from the industry, regulators, and the global AI community.

## Why This Is Dangerous

Anthropic explained the risks of this approach: illegally distilled models do not retain the necessary safety mechanisms, which poses a threat to national security.

American companies are implementing systems to prevent the use of AI in developing biological weapons, malicious cyberattacks, and other dangerous activities. Models created through illegal distillation do not have these restrictions.

Foreign labs could integrate unprotected capabilities into military and intelligence systems, enabling authoritarian governments to use advanced AI for cyberattacks, disinformation, and mass surveillance, the company added.

## Ways to Fight Back

Anthropic experts support export restrictions to maintain U.S. leadership in AI. According to them, distillation attacks undermine these measures by allowing foreign labs to close the technological gap.

“Without transparency, the rapid progress of Chinese labs is mistakenly seen as evidence of export restrictions’ ineffectiveness. In reality, their achievements largely depend on extracting capabilities from American models, and scaling such approaches requires access to advanced chips,” the company’s blog stated.

Anthropic outlined its own methods of combating this:

  • Improving pattern detection systems for distillation;
  • Sharing technical indicators with other labs and cloud providers;
  • Strengthening verification of educational and research accounts;
  • Applying countermeasures that reduce the effectiveness of illegal distillation.
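The first item above — pattern detection — can be illustrated with a deliberately simplified sketch. Everything here is an assumption for illustration: the account names, the log format, and the single volume-based rule bear no relation to Anthropic's actual (undisclosed) detection systems, which would correlate many more signals.

```python
from collections import Counter

# Hypothetical request log: (account_id, endpoint) pairs. In a real
# system these would come from API gateway telemetry.
requests = (
    [("acct_a", "chat")] * 12
    + [("acct_b", "chat")] * 950   # suspiciously high volume
    + [("acct_c", "tools")] * 30
)

VOLUME_THRESHOLD = 500  # assumed cutoff for this sketch

def flag_high_volume(log, threshold):
    """Return account ids whose total request count exceeds the threshold."""
    counts = Counter(acct for acct, _ in log)
    return sorted(a for a, n in counts.items() if n > threshold)

suspects = flag_high_volume(requests, VOLUME_THRESHOLD)
```

A distillation campaign needs millions of outputs, so abnormal request volume per account is the kind of coarse signal such systems start from before layering on metadata and infrastructure correlation.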

This is not the first such accusation. In January 2025, shortly after the explosive release of DeepSeek-R1, DeepSeek was suspected of stealing data from OpenAI.

## Continuing the Fight with the Pentagon

Anthropic CEO Dario Amodei will meet with U.S. Secretary of Defense Pete Hegseth at the Pentagon to discuss ways to use the company’s AI models for military purposes.

Disagreements have arisen recently: Anthropic opposes using AI for mass surveillance of U.S. citizens and for autonomous weapons development, while the Department of Defense has made it clear it intends to use LLMs “for all lawful scenarios” without restrictions.

The dispute escalated to the point that the Pentagon announced it might terminate its contract with Anthropic.

## AI Vulnerability Scanner

Shares of leading publicly traded cybersecurity companies fell after the launch of Anthropic’s Claude Code Security — an AI vulnerability scanner for code.

The company’s website states that the new service “analyzes the entire codebase for vulnerabilities, verifies each finding to minimize false positives, and offers fixes.”

Claude conducts analysis “like an experienced security researcher”: understanding context, tracking data flows, and detecting vulnerabilities.
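For contrast with that context-aware approach, here is a minimal sketch of the rule-based scanning that traditional tools build on. The rules, sample code, and pattern names are all hypothetical illustrations; real AI scanners like the one described reason over context and data flow rather than matching regexes line by line.

```python
import re

# Two toy detection rules (illustrative only, not from any real scanner).
RULES = {
    "use of eval": re.compile(r"\beval\s*\("),
    "hardcoded secret": re.compile(r"(password|api_key)\s*=\s*['\"]"),
}

def scan(source: str):
    """Return (line_number, rule_name) for every rule match in the source."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

sample = 'api_key = "sk-123"\nresult = eval(user_input)\n'
issues = scan(sample)
```

The limitation is clear even in this toy: a regex cannot tell whether `user_input` is attacker-controlled, which is precisely the data-flow question a context-aware analysis tries to answer.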

According to VentureBeat, Claude Opus 4.6 identified over 500 critical vulnerabilities that had persisted for decades despite expert reviews.

Five of the largest U.S. publicly traded IT security companies saw their stock prices decline by double digits over the past five days amid the AI competition:

  • Palo Alto Networks: −14%;
  • CrowdStrike: −18%;
  • Fortinet: −12%;
  • Cloudflare: −18%;
  • Zscaler: −19%.

*Chart of Palo Alto Networks stock prices. Source: Yahoo Finance.*

Wedbush analysts said the sell-off reflects concerns about the so-called AI Ghost Trade. They believe the market’s reaction is mistaken, and that Palo Alto, CrowdStrike, and Zscaler will prove their effectiveness in 2026.

Recall that in February, OpenAI, together with Paradigm, introduced EVMbench — a benchmark for evaluating AI agents’ ability to identify, fix, and exploit vulnerabilities in smart contracts.
