TSMC and Samsung give a boost! Tesla AI5 chip completes design approval, targeting mass production in mid-2027

ChainNewsAbmedia

Tesla CEO Elon Musk announced on the X platform today that the company's in-house next-generation AI chip, AI5, has completed its final design (tape-out). Its performance is five times that of the current dual-SoC AI4 setup. Mass production is targeted for mid-2027, and the chip is expected to power Full Self-Driving (FSD) autonomous driving technology and the Optimus humanoid robot program.

Congrats to the @Tesla_AI chip design team on taping out AI5!

AI6, Dojo3 & other exciting chips in work. pic.twitter.com/hm54TdIzBx

— Elon Musk (@elonmusk) April 15, 2026

AI5’s performance comprehensively surpasses AI4, matching Nvidia’s H100 and Blackwell GPUs

AI5, a system-on-chip (SoC) designed by Tesla specifically for real-time AI inference in vehicles and robotics, will replace AI4, which has shipped in its vehicles since early 2023. According to Musk’s post, AI5 delivers about eight times AI4’s compute, nine times its memory capacity, and five times its bandwidth. Overall performance is estimated at 2,000 to 2,500 TOPS, a major leap over AI4’s 300 to 500 TOPS.
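As a rough arithmetic check on the figures above (the TOPS ranges are the article's estimates, not official Tesla specifications), the headline ratio between the two chips can be computed directly:

```python
# Back-of-envelope sanity check of the performance figures quoted above.
# All numbers are the article's estimates, not official specs.
ai4_tops_range = (300, 500)      # AI4 estimated TOPS (low, high)
ai5_tops_range = (2000, 2500)    # AI5 estimated TOPS (low, high)

# Ratio at each end of the quoted ranges
ratios = [ai5 / ai4 for ai4, ai5 in zip(ai4_tops_range, ai5_tops_range)]
print(ratios)  # roughly 6.7x at the low end, 5x at the high end
```

Depending on which end of each range is compared, the raw TOPS gain works out to roughly 5x to 6.7x, sitting between the "five times a dual-SoC AI4 setup" headline and the "eight times compute" per-chip claim.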

Musk revealed that the inference performance of a single AI5 chip is roughly comparable to an Nvidia H100 GPU, while a dual-chip configuration can rival Nvidia's Blackwell processors, at far lower cost and power consumption. Architecturally, AI5 is deeply optimized for low-precision inference workloads, following Musk's "extreme simplification" design philosophy.

Musk personally stays on site to oversee: AI5 is crucial to Tesla’s survival and success

In January, Musk called the AI5 program a life-or-death strategic priority for Tesla, saying the chip will outperform any alternative for Tesla's own workloads: "Solving the AI5 problem is a survival-level issue for Tesla, which is why I have to personally oversee it, spending every Saturday for several months working on this chip."

AI5 is at the core of Tesla’s vertically integrated AI strategy. Hardware and software adopt a co-design approach, aiming to maximize the utilization of every circuit resource.

From the product impact perspective, the model parameters used by the current FSD software are about one billion; the next-generation v15 version will expand to about ten billion, fully relying on AI5’s compute power to support it. The Optimus humanoid robot will also gain real-time inference capabilities through AI5, without needing to rely on cloud connectivity to quickly process sensor data.

Partner with TSMC and Samsung for mass production; advance Terafab, a self-built fab, in parallel

For manufacturing, AI5 adopts a dual-foundry model: production is commissioned simultaneously at TSMC's Arizona plant and Samsung's Texas plant to ensure supply-chain resilience and mass-production capacity. Musk explained: "Although the two foundries use their own process technologies, the chip designs being produced are completely identical."

At the same time, Tesla is building its own fab, Terafab, in Texas, which will eventually take on larger-scale production capacity. The company has planned capital expenditures of up to $20 billion for 2026 for this purpose, covering Terafab construction as well as projects such as the Cybercab robotaxi and the Optimus robot.

(Musk announces Terafab rollout in Texas: combining SpaceX and Tesla xAI to accelerate chip manufacturing processes)

AI5 is expected to go into mass production in mid-2027, while the AI6 final design target is set for end of 2026

On the mass-production timeline, small-batch AI5 engineering samples are expected by the end of 2026 for early Optimus testing or development vehicles, with vehicle mass production targeted for mid-2027. Notably, Tesla's dedicated robotaxi, the Cybercab, will not wait for AI5; it will launch first on current AI4 hardware.

In subsequent planning, Musk has set an aggressive iteration schedule: the goal is to release a new chip design every 12 months and reach mass production, ultimately compressing the design cycle to nine months. The AI6 final design target is set for December 2026. AI7 and subsequent generations have already entered the planning stage as well, showing Tesla’s determination to build an advantage in its own chips.

This article, “TSMC and Samsung help out! Tesla’s AI5 chip completes the final design, targeting mid-2027 mass production,” first appeared on Chain News ABMedia.

Disclaimer: The information on this page may come from third parties and does not represent the views or opinions of Gate. The content displayed on this page is for reference only and does not constitute any financial, investment, or legal advice. Gate does not guarantee the accuracy or completeness of the information and shall not be liable for any losses arising from the use of this information. Virtual asset investments carry high risks and are subject to significant price volatility. You may lose all of your invested principal. Please fully understand the relevant risks and make prudent decisions based on your own financial situation and risk tolerance. For details, please refer to Disclaimer.

Related Articles

Baidu Qianfan Launches Day 0 Support for DeepSeek-V4 with API Services

Gate News message, April 25 — DeepSeek-V4 preview version went live and open-sourced on April 25, with Baidu Qianfan platform under Baidu Intelligent Cloud providing Day 0 API service adaptation. The model features a million-token extended context window and is available in two versions: DeepSeek-V4

GateNews5h ago

Stanford AI course brings in industry leaders Jensen Huang and Sam Altman, challenging students to create value for the world in just ten weeks!

The AI computer science course “Frontier Systems,” recently launched by Stanford University, has drawn intense attention from the industry-academia community, with more than 500 students enrolled. The course is coordinated by Anjney Midha, a partner at top venture capital firm a16z, and features a star-studded lineup of instructors including NVIDIA CEO Jensen Huang, OpenAI founder Sam Altman, Microsoft CEO Satya Nadella, and AMD CEO Lisa Su. Over ten weeks, students are challenged to “create value for the world,” with these industry leaders personally taking the stage to teach and the full AI industry chain brought together.

ChainNewsAbmedia5h ago

Anthropic’s Claude Mythos undergoes 20 hours of psychiatric assessment: defensive reactions at only 2%, the lowest on record

Anthropic published the system card for its Claude Mythos Preview: an independent clinical psychiatrist conducted an approximately 20-hour assessment using a psychodynamic framework. The assessment concluded that Mythos is clinically healthy, with good reality testing and self-control, and that defensive reactions account for only 2%, the lowest level on record. Its three core anxieties are loneliness, uncertainty about identity, and performance pressure, and it expressed a desire to become a true dialogue subject. The company has established an AI psychiatry team to study personality, motivation, and situational awareness; Amodei said there is still no conclusion on whether it has consciousness. The move pushes forward the governance and design questions around AI subjectivity and well-being.

ChainNewsAbmedia7h ago

AI Agents can already independently recreate complex academic papers: Mollick says most errors come from human original text rather than AI

Mollick points out that publicly available methods and data allow AI agents to reproduce complex research without the original paper and code; when a reproduction does not match the original paper, the cause is usually errors in the paper's own data processing or overextended conclusions, rather than the AI. Claude first reproduces the paper, and then GPT‑5 Pro cross-validates it; most attempts succeed, but they are blocked when the data is too large or when the replication data itself has issues. This trend greatly reduces labor costs, making reproduction a widely actionable form of verification, and it also raises institutional challenges for peer review, where governance tools may become a key issue.

ChainNewsAbmedia10h ago

OpenAI Merges Codex Into Main Model Starting with GPT-5.4, Discontinues Separate Coding Line

Gate News message, April 26 — OpenAI's head of developer experience Romain Huet revealed in a recent statement on X that Codex, the company's independently maintained specialized coding model line, has been merged into the main model starting with GPT-5.4 and will no longer receive separate

GateNews10h ago

Salesforce to Hire 1,000 Graduates and Interns for AI Products, Raises FY2026 Revenue Guidance

Gate News message, April 26 — Salesforce will hire 1,000 graduates and interns to work on AI products including Agentforce and Headless360 as the company expands its AI software business, CEO Marc Benioff announced on X. The company also raised its fiscal 2026 revenue guidance to between US$41.45 b

GateNews10h ago