OpenAI Co-founder Karpathy Interview: LLMs Are a New Type of Computer, Everything Must Be "Rewritten"


Original video title: Andrej Karpathy: From Vibe Coding to Agentic Engineering

Original video source: Sequoia Capital
Original compilation: Bao Yilong, Wall Street Insights

OpenAI co-founder Andrej Karpathy pointed out in a recent interview that large language models are being used as “new computers,” reshaping the computing architecture from the ground up.

On April 29, AI leader Andrej Karpathy, an OpenAI co-founder who once led Tesla's Autopilot development, gave an in-depth analysis of the current leap in AI agents and its profound impact on the software and hardware ecosystem at Sequoia Capital's AI Ascent event.

Karpathy stated that since December last year, he has realized that agent-centered workflows have become truly usable, marking the substantive arrival of the Software 3.0 era.

He said: Many people’s impressions of AI last year still focused on ChatGPT, but you must re-evaluate, especially since December — things have fundamentally changed.

He also introduced a new concept, “agentic engineering,” to distinguish it from the “vibe coding” he coined last year: agentic engineering carries the quality standards of professional software development forward while dramatically accelerating the work.

He stated bluntly that, under the new paradigm, much existing code and many applications “should not exist,” and that most organizations' current recruitment processes, development tools, and infrastructure are still designed for humans, not agents.

The dawn of Software 3.0: the transfer of power in underlying computing architecture

The tech industry stands at the point where quantitative change tips over into qualitative change.

Last December was a critical turning point. Karpathy admitted he was profoundly shaken when confronting the latest AI models:

The code blocks the system generates are becoming so good that I can't even remember the last time I modified one. I just trust this system more and more… (This makes me) feel more behind as a programmer than ever before.

That shock reflects a complete upheaval of the computing paradigm. In Karpathy's view, the market is underestimating the depth of this change.

He pointed out that we are bidding farewell to “Software 1.0 (writing code)” and “Software 2.0 (organizing datasets to train neural networks),” and are officially entering the “Software 3.0” era.

In this new epoch, large language models themselves are a “new type of computer.”

He said: Programming now becomes writing prompts, and the content of the context window is the lever you use to steer the large language model, which acts as an interpreter performing computation over the space of digital information.
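To make the “prompt as program” idea concrete, here is a minimal sketch using the OpenAI Python SDK; the model name and the toy task are illustrative assumptions, not examples from the talk. The “program” is plain English placed in the context window, and the model executes it the way an interpreter executes source code.

```python
# A minimal "Software 3.0" sketch: the prompt is the program, the LLM is
# the interpreter. Model name and task are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "source code" of this program is plain English in the context window.
program = (
    "You are a data normalizer. Given a list of company names, "
    "return them lowercased, deduplicated, and sorted, one per line."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system", "content": program},                         # the program
        {"role": "user", "content": "OpenAI\nTesla\nopenai\nSequoia"},  # the input
    ],
)
print(response.choices[0].message.content)  # the output
```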

Even more striking is his bold prediction about how the underlying hardware architecture will evolve.

Today, neural networks still run as guests on existing computers, but he believes this host-guest relationship will eventually invert: You can imagine neural networks becoming the main process, with CPUs turning into a kind of co-processor. Neural networks will handle most of the heavy lifting.

This implies that the “intelligent compute” already dominating market capital expenditure will only become more strategically central.

Next-generation infrastructure: reconstructing the “agent-native” ecosystem

When machines take over execution and coding, what remains as humans' core value, and what form should infrastructure take?

Karpathy straightforwardly said: Everything must be rewritten.

Currently, the documentation for various internet frameworks and libraries is still “written for humans,” which he finds extremely frustrating.

Karpathy complained: Why do I still need to be told what to do? I don’t want to do anything. Should I just copy and paste some text to my AI agent?

The big market opportunity in the future lies in building “agent-first” infrastructure.

In that world, systems decompose into “sensors” that perceive the world and “actuators” that change it; data structures are designed to be directly readable by large language models; and machine agents act in the cloud on behalf of individuals and organizations.
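One concrete reading of “agent-first” infrastructure: services publish their capabilities as a machine-readable manifest of sensors and actuators instead of prose documentation. The sketch below is a hypothetical illustration of that idea; the service and field names are invented, not a spec from the talk.

```python
import json

# Hypothetical manifest for an "agent-first" service: instead of docs
# written for humans, the service describes its read-only "sensors" and
# state-changing "actuators" in a schema an LLM agent can consume directly.
# All names here are illustrative.
SERVICE_MANIFEST = {
    "service": "example-billing",
    "sensors": [
        {
            "name": "get_invoice",
            "description": "Fetch one invoice as structured JSON.",
            "params": {"invoice_id": "string"},
            "returns": "Invoice",
        },
    ],
    "actuators": [
        {
            "name": "issue_refund",
            "description": "Refund an invoice, fully or partially.",
            "params": {"invoice_id": "string", "amount_cents": "integer"},
            "returns": "RefundReceipt",
        },
    ],
}

# An agent ingests the manifest as-is; no copy-pasting docs into a prompt.
print(json.dumps(SERVICE_MANIFEST, indent=2))
```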

In such a highly automated future, the scarce human contributions will once again be aesthetics, judgment, and the deepest understanding of the business.

Karpathy quoted a phrase he keeps chewing over as a summary: You can outsource your thinking, but you cannot outsource your understanding.

Agentic engineering: a productivity explosion far beyond “10x engineers”

On the dimension people care about most, productivity, Karpathy distinguished two core concepts: “vibe coding” and “agentic engineering.”

He pointed out that “vibe coding” raises the floor of software development for everyone, while “agentic engineering” aims to hold up the ceiling of professional software quality.

“Agentic engineering” is not just about speed; it requires developers to orchestrate those “somewhat error-prone, stochastic, but extremely powerful” AI agents so that work proceeds at full speed without sacrificing quality.
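In practice, “full speed without sacrificing quality” often means gating every agent-produced change behind a deterministic verifier such as a test suite. The loop below is a generic sketch of that pattern, not Karpathy's own tooling; the agent and patch helpers are illustrative stubs.

```python
import subprocess

# Illustrative stubs: in a real setup these would call a coding agent
# and your version-control tooling.
def ask_agent_for_patch(task: str) -> str:
    raise NotImplementedError("wire this to your coding agent")

def apply_patch(patch: str) -> None: ...
def revert_patch(patch: str) -> None: ...

def tests_pass() -> bool:
    # The quality gate: accept agent output only if the suite is green.
    return subprocess.run(["pytest", "-q"]).returncode == 0

def agentic_fix(task: str, max_attempts: int = 3) -> bool:
    for _ in range(max_attempts):
        patch = ask_agent_for_patch(task)  # stochastic, error-prone step
        apply_patch(patch)
        if tests_pass():                   # deterministic acceptance check
            return True
        revert_patch(patch)                # never keep unverified work
    return False  # escalate to a human engineer
```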

This also greatly expands what an enterprise can imagine producing.

Karpathy stated: “People used to talk about 10x engineers,” but 10x is no longer enough to describe the speedup you gain. In my opinion, those who perform well in this field can produce far more than 10x.

Faced with this productivity explosion, organizational structures and talent screening logic must be reconstructed.

He suggested that companies abandon traditional algorithm interview questions and instead assess how candidates use multiple AI agents to build large projects collaboratively and defend them against attacks from other AI agents.

The key to AI commercialization and implementation

For entrepreneurs and investors eager to find AI application scenarios, Karpathy offers a highly practical evaluation framework: verifiability.

AI capability today exhibits a very strange “jagged,” sawtooth-like profile: superhuman on some tasks, helpless on others.

He gave an example: today's most advanced models can refactor 100k lines of code or find zero-day vulnerabilities, yet ask them to get my car washed and all they can do is tell me to walk 50 meters to the car wash. It's crazy.

The reason for this disconnect is that leading labs (like OpenAI) have poured massive reinforcement-learning resources into domains whose results are easy to verify, above all “mathematics” and “code.”

Therefore, as long as your business scenario produces verifiable results, AI can be enormously powerful there.
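The “verifiability” criterion can be stated as a toy reward function: a domain is friendly to reinforcement learning when grading an answer needs only an automatic checker, not human judgment. The sketch below is an illustration of that point, not code from the talk.

```python
# Toy verifiable rewards: math and code can be graded automatically,
# which is why labs can pour RL compute into them.
def reward_math(model_answer: str, ground_truth: str) -> float:
    # Exact-match checking (or symbolic equivalence) is cheap and reliable.
    return 1.0 if model_answer.strip() == ground_truth.strip() else 0.0

def reward_code(candidate_fn, test_cases) -> float:
    # Run the candidate against unit tests; the pass rate is the reward.
    passed = sum(1 for args, want in test_cases if candidate_fn(*args) == want)
    return passed / len(test_cases)

# Example: grading a generated "add" function.
tests = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
print(reward_code(lambda a, b: a + b, tests))  # 1.0

# "Wash my car" has no such checker, hence the jagged capability profile.
```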

Karpathy hinted that the market still holds many high-value verifiable reinforcement-learning environments that no one has focused on yet, a huge blue ocean for startups to fine-tune against and commercialize.
