Anthropic's weapon-grade cybersecurity model Mythos was accessed without authorization: how did they do it?

ChainNewsAbmedia

Bloomberg reports that a private forum group publicly announced that same day that it had bypassed the restrictions on the security model Mythos, one of Anthropic's security models, by using access permissions held by a third-party contractor to enter the system and use the model, raising outside concerns about the safety governance of top-tier AI models.

(Anthropic launched its global cybersecurity initiative Glasswing, so why isn’t the new model Mythos open to the public? )

Mythos was hit by unauthorized access on its first day online

On April 7, Anthropic announced Claude Mythos, a new cybersecurity AI model; that same day, however, a private online forum group whose identity has yet to be made public reportedly gained quiet access to the model.

According to the reports, the group did not break in using traditional hacking techniques. Instead, its members drew on their knowledge of Anthropic's past model URL formats to infer where Mythos would be located within the system. The key weakness was a staff member employed by one of Anthropic's third-party contractors: this person already held legitimate authorization to view Anthropic's AI models, and the forum group entered the system through that compliant access point.
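The reporting describes two ingredients: predictable endpoint naming and a valid credential. As a purely hypothetical sketch (none of these hostnames, paths, or model names are real Anthropic URLs; they only illustrate why predictable naming conventions are risky), guessing a new model's location from past patterns can be as simple as:

```python
# Hypothetical sketch: predictable model-endpoint naming lets outsiders
# enumerate likely locations for a newly announced model. All URLs and
# model names below are invented for illustration.

def candidate_endpoints(known_models, new_model):
    """Derive path templates from previously observed model endpoints,
    then instantiate each template with the new model's name."""
    templates = set()
    for model, url in known_models.items():
        templates.add(url.replace(model, "{model}"))
    return sorted(t.format(model=new_model) for t in templates)

observed = {
    "alpha": "https://models.example.com/v1/alpha/preview",
    "beta": "https://models.example.com/v1/beta/preview",
}
print(candidate_endpoints(observed, "mythos"))
# → ['https://models.example.com/v1/mythos/preview']
```

Guessing the location alone grants nothing; in the reported incident it was the combination with a contractor's legitimate credential that turned an educated guess into working access.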

Afterward, the group provided Bloomberg with screenshots and a live demonstration as proof, and revealed that it has continued using Mythos ever since. However, the members emphasized that their purpose was only "to tinker with a new model" and that they carried out no destructive activity, in part because they did not want to be discovered.

What is Mythos? Why has it raised concerns from the outside world?

Claude Mythos is an AI model built by Anthropic specifically for enterprise cybersecurity defense. The team defines it as a tool that is “too powerful to be suitable for public release.” Its core capability is to proactively identify security vulnerabilities in digital systems, helping enterprises complete patching before they are attacked.

However, this defensive sword cuts both ways. Anthropic acknowledged that if Mythos fell into the hands of malicious actors, its capabilities could equally be used to launch attacks. The company therefore opens Mythos only to a small number of major institutions and technology companies that have passed strict review, through a cybersecurity initiative called "Project Glasswing."

The core assumption behind this closed-off governance mechanism is that trusted partners can ensure that each other’s access permissions will not leak.
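One common hardening measure against exactly this failure mode (a standing credential leaking from a trusted partner) is to issue short-lived, scope-limited tokens instead of long-lived access. This is a generic sketch of that pattern, not anything Anthropic has described; the signing key, scopes, and function names are all illustrative:

```python
# Generic sketch of short-lived, scoped partner tokens. A leaked token
# expires quickly and only grants the narrow scope it was minted with.
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-signing-key"  # illustrative only; keep real keys in a KMS

def issue_token(partner_id, scopes, ttl_seconds=900):
    """Mint a signed token valid for ttl_seconds and the given scopes."""
    payload = {"sub": partner_id, "scope": scopes, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_token(token, required_scope):
    """Reject tampered, expired, or out-of-scope tokens."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # signature mismatch: token was tampered with
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["exp"] < time.time():
        return False  # expired: a leaked token soon becomes useless
    return required_scope in payload["scope"]
```

Under this pattern, a contractor credential that leaks on day one stops working within minutes, and it never grants more than the scope it was issued for; the trust assumption shifts from "partners never leak" to "leaks have a bounded blast radius."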

(Anthropic Mythos raises regulatory concerns, and executives at Bestent and Powell’s banks hold an emergency meeting)

Anthropic’s response: We’re investigating; there’s no impact

In response, Anthropic said: "We are investigating a report claiming that Claude Mythos Preview was accessed without authorization through a third-party provider environment." The company emphasized that it has so far found no impact on its own systems, and that the incident initially appears to be "more likely abuse of access permissions than an external hacking attack."

Even though the users who gained early access to Mythos do not appear to have engaged in malicious behavior, the incident itself has put cybersecurity experts on high alert. Raluca Saceanu, CEO of the cybersecurity firm Smarttech247, pointed out:

Once powerful AI tools are accessed or used outside established governance mechanisms, the risk is not limited to a cybersecurity incident; it could also raise concerns about fraud, cyber abuse, or other malicious uses.

What impact will this have? Weak points in AI security controls

What truly worries people about this incident is not that anyone attempted sabotage, but the systemic weakness it exposes: when an AI company hands access to highly sensitive models to third-party vendors, a lapse at any link in the control chain can become a loophole and trigger a crisis.

The Mythos incident now serves as a reminder to the entire industry that, as AI capabilities advance rapidly, security architecture cannot rely on trust alone; it also needs institutional resilience that can withstand failures of trust. For Anthropic, rebuilding public confidence in its partner control mechanisms will be a longer-term challenge than the investigation itself.

This article, "Anthropic's weapon-grade cybersecurity model Mythos was accessed without authorization: how did they do it?", first appeared on Chain News ABMedia.

