Amazon adds $25 billion more for Anthropic: 5GW of computing power and a $100 billion-plus AWS commitment

ChainNewsAbmedia

According to an official Anthropic announcement and an official Amazon press release, the two companies expanded their strategic partnership on 4/20: Amazon added an investment of up to $25 billion in Anthropic, while Anthropic committed that over the next ten years its spending on AWS would exceed $100 billion, and that it would obtain up to 5GW of additional compute capacity for training and deploying Claude models.

This is the second wave of major expansion in Anthropic's compute arms race, following the 4/7 collaboration between Anthropic, Google, and Broadcom that secured 3.5GW of TPU compute, and it is Amazon's largest single commitment to an AI company. The deal also locks in Anthropic's valuation at $380 billion.

Investment structure: $5 billion upfront, $20 billion tied to milestones

Amazon's investment this time is split into two phases: $5 billion funded immediately on the day of the announcement, and up to an additional $20 billion released in tranches tied to "specific commercial milestones." Together with the $8 billion previously invested, Amazon's cumulative investment in Anthropic could reach up to $33 billion.

The $5 billion in this round is priced at Anthropic's latest valuation of $380 billion, marking the first time this valuation has been confirmed by a top-tier investor through a new agreement.

Anthropic commits to $100 billion in AWS spending for the next decade

As consideration for the partnership, Anthropic has committed that its spending on AWS over the next ten years will exceed $100 billion. The scope covers current and future generations of Trainium custom AI chips, as well as "tens of millions of cores" of Graviton general-purpose CPUs.

In the announcement, Andy Jassy (Amazon CEO) said: “Anthropic’s commitment to run large language models on AWS Trainium over the next ten years reflects our shared progress on the custom chip path.” Dario Amodei (Anthropic CEO) added: “Users are telling us that Claude is becoming increasingly important to their work, and we must build the infrastructure that can keep up with growth in demand.”

5GW compute roadmap: Trainium2, 3, and 4 all locked in

The agreement covers three generations of chips: Trainium2, Trainium3, and Trainium4, and Anthropic will also retain the option to purchase subsequent generations of custom chips. On the schedule, large-scale Trainium2 capacity will come online in 2026 Q2, with large-scale Trainium3 capacity rolled out progressively before the end of the year. For the full year, cumulative Trainium2 and Trainium3 capacity will total nearly 1GW.

Project Rainier is the flagship project under the two parties' existing collaboration. Its training cluster has deployed about 500,000 Trainium2 chips to date, serving as the primary infrastructure for training Claude models.

Revenue from $9 billion to $30 billion: surging demand for Anthropic is the driving force behind the negotiations

In a rare move, Anthropic disclosed its own financial figures in the announcement. Its annualized revenue for the current year has already surpassed $30 billion, more than tripling within half a year from $9 billion at the end of 2025. The number of enterprise customers running Claude on AWS has exceeded 100,000, and Claude is one of the most heavily used model families on Amazon Bedrock.

Anthropic also acknowledged that the surge in demand has put pressure on infrastructure during peak hours, affecting availability and performance, which is the direct motivation behind this large-scale compute expansion. The recent token-counting dispute over Claude Opus 4.7 and the adjustments to usage limits are likewise tied to the reality that infrastructure has become constrained.

Three-way compute lock-in: AWS, Google, and in-house chips move forward in parallel

Together with the 3.5GW of TPU compute secured through the earlier April announcement of its collaboration with Broadcom and Google, Anthropic is now simultaneously locking in both the AWS Trainium and Google TPU custom-chip roadmaps, while retaining the long-term option of its own custom accelerators. This contrasts with OpenAI's path of relying mainly on Microsoft Azure and only recently expanding into AWS.

For Taiwan’s semiconductor supply chain, the large-scale expansion from Trainium2 through Trainium4 means that in the next three to five years Marvell, TSMC advanced packaging, and HBM memory suppliers will continue to take on AWS custom chip orders.

Compute muscle behind the IPO race

This announcement also reinforces the foundation of Anthropic's IPO narrative. Recent reports put OpenAI's annualized revenue above $25 billion, with Anthropic, which is preparing for an IPO, chasing at $19 billion. With the official announcement now putting Anthropic's annualized revenue at $30 billion, the gap between the two is narrowing further.

For investors, the significance of Amazon's commitment lies not in the single-transaction amount but in the deep lock-in of commitments such as "$100 billion in AWS spending over the next ten years," which gives the market a clear anchor for Anthropic's compute supply and long-term cost structure. The high-end Mythos model under Project Glasswing and the subsequent training compute for the Claude 4.x series will be supported by this new 5GW of capacity.

This article “Amazon boosts investment in Anthropic by $25 billion: 5GW compute, $100B AWS lock-in” first appeared on 鏈新聞 ABMedia.

