Axios exclusive: The U.S. NSA bypassed the Pentagon blacklist to use Anthropic's Mythos, and Dario Amodei urgently went to the White House to negotiate

ChainNewsAbmedia

According to an Axios exclusive relayed by Reuters, the U.S. National Security Agency (NSA) is using Anthropic's most capable model, Mythos Preview, even though its parent agency, the Department of Defense (DoD), has classified Anthropic as a "supply-chain risk" and banned it since February. It is the first publicly known case in the history of U.S. AI governance in which one arm of the federal government bans a vendor's model while another arm actively uses it.

Mythos is open to 40 official organizations; only 12 are publicly listed

After Anthropic publicly revealed the existence of Claude Mythos in February, it granted preview access to only about 40 organizations worldwide, of which just 12 appear on the officially published list. The rest were deliberately withheld on the grounds that "offensive cyber capabilities are too sensitive." The NSA, the core unit of the U.S. intelligence community for signals intelligence (SIGINT) and cryptography, is not among the 12 organizations Anthropic lists publicly, yet it has obtained and is actively using Mythos, raising questions about both the transparency of Anthropic's list and how the U.S. government authorizes access.

A self-contradiction: the Pentagon’s ban vs. the military’s expanded use

The relationship between the Pentagon and Anthropic broke down earlier this year. Officially, during contract renewal Anthropic insisted on excluding uses such as large-scale domestic surveillance and autonomous-weapons development, while the Department of Defense demanded that Mythos be open "for all lawful purposes." After negotiations collapsed, the DoD in February added Anthropic to its banned list on the grounds of "supply-chain risk" and required defense contractors to cut ties as well.

Ironically, two months after the DoD formally banned Anthropic, not only has the NSA continued using Mythos, but multiple branches of the U.S. military have expanded their adoption of other models in Anthropic's Claude series. Government entities have argued in court filings that "using Anthropic tools would threaten national security," yet the same entities continue to use Claude internally for cybersecurity and intelligence missions. Earlier this month, Treasury Secretary Bessent and Federal Reserve Chair Powell held an emergency meeting with senior bank executives to discuss regulatory concerns over Mythos. With these internal contradictions now public, the administrative risk surrounding Mythos has risen sharply.

Dario Amodei goes to the White House to negotiate a truce

Anthropic CEO Dario Amodei went to the White House on April 17 to meet with Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent. According to Axios, the talks focused on three issues: the boundaries of Mythos's use within government systems; Anthropic's overall security operations and review processes; and how departments and agencies other than the Pentagon can continue to obtain access. Possible next steps include establishing separate authorization pathways for non-DoD federal agencies, or requiring Anthropic to disclose a more complete customer list to address the Pentagon's regulatory concerns.

A new template for tiered access and government procurement boundaries

The Mythos incident, coming in the same week as OpenAI's launch of GPT-5.4-Cyber, points to the same trend: in 2026, frontier AI's strongest models are entering a tiered-release phase in which greater capability means fewer customers. Such control frameworks look strict, but when the U.S. federal government's internal rules are inconsistent, in practice it is the vendor (Anthropic) that decides which government units are trustworthy. For governments and companies elsewhere, the next point of policy pressure is whether to require AI vendors to disclose their customer authorization lists when processing data on their citizens. The steps Anthropic takes over the coming weeks will become the governance template for the entire frontier AI industry.

This article first appeared on Chain News ABMedia.

