OpenAI Co-Founder Karpathy Interview: LLM is a New Kind of Computer and Everything Must Be “Rewritten”

Bitsfull · 2026/04/30 15:56

Summary:

"You can outsource tasks, but you can't outsource understanding."


OpenAI Co-Founder Andrej Karpathy pointed out in a recent interview that large language models are reshaping computing architecture as a "new type of computer."


On April 29, Andrej Karpathy, OpenAI co-founder and former head of Tesla's Autopilot program, delved into the current AI paradigm shift and its profound impact on the software and hardware ecosystem at an event hosted by AI Sent.



Karpathy stated that since last December, he has realized that an agent-centric workflow has truly become feasible, marking the substantive arrival of the Software 3.0 era.


He said: Many people's impression of AI is still based on last year's ChatGPT, but you have to reassess. Especially since December, something fundamental has changed.


He also introduced "agentic engineering" as a new concept, distinct from the "vibe coding" he coined last year: the former is the continuation and acceleration of quality standards in professional software development.


He bluntly stated that a large amount of existing code and applications "should not exist" in the new paradigm, and that current organizational recruiting processes, development tools, and infrastructure are still designed for humans rather than agents.


The Dawn of Software 3.0: The Power Shift in Computing Architecture


The tech industry is at a crossroads where quantitative change is tipping into qualitative change.


December of last year was a key inflection point. Karpathy admitted that faced with the latest AI models, he was profoundly impressed:


The code snippets these systems generate are becoming nearly flawless; I can't even remember the last time I had to modify one. I just trust the system more and more... I have never felt so behind as a programmer.


This impact is a radical disruption of the computing paradigm. Karpathy believes that the market currently underestimates the depth of this change.


He points out that we are saying goodbye to "Software 1.0 (writing code)" and "Software 2.0 (curating datasets to train neural networks)," and officially entering the era of "Software 3.0."


In this new era, the large language model itself is a "new type of computer."


He said: Programming now means writing prompts. The content of the context window is the lever you manipulate, with the large language model acting as an interpreter that performs computation over the digital information space.
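The three paradigms Karpathy names can be made concrete with a toy task. The sketch below is illustrative only; the function names and the `call_llm` callable are hypothetical stand-ins, not anything from the interview.

```python
# One toy task (sentiment detection) expressed in each software paradigm.

# Software 1.0 — behavior is hand-written rules in code.
def classify_1_0(text: str) -> str:
    positive_words = {"great", "love", "excellent"}
    words = text.lower().split()
    return "positive" if any(w in positive_words for w in words) else "negative"

# Software 2.0 — behavior lives in learned weights; the code's job is to
# curate a dataset and run the trained network (training elided here).
def classify_2_0(text: str, weights) -> str:
    ...  # forward pass through a neural network trained on labeled examples

# Software 3.0 — the "program" is a prompt; the LLM is the interpreter
# and the context window is the machine state it computes over.
PROMPT = "Classify the sentiment of this text as 'positive' or 'negative':\n{text}"

def classify_3_0(text: str, call_llm) -> str:
    # call_llm stands in for any LLM API call; swap in a real client to use it.
    return call_llm(PROMPT.format(text=text))
```

The point of the contrast: in 1.0 you edit logic, in 2.0 you edit data, and in 3.0 you edit natural-language instructions that a model interprets at run time.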


What's even more remarkable is his bold prediction about the future evolution of underlying hardware architecture.


Currently, neural networks still run in a virtualized form on existing computers, but he believes that in the future, this master-slave relationship will reverse: You can imagine that neural networks will become the main process, while the CPU becomes some kind of coprocessor. The neural network will take on the majority of the heavy lifting.


This suggests that AI compute, which already dominates capital expenditure across the market, will become even more strategically central.


Next-Generation Infrastructure: Reframing the "Agent-Native" Ecosystem


As machines take over execution and coding, where does core human value go, and what shape will future infrastructure take?


Karpathy bluntly stated: Everything must be rewritten.


The documentation of various frameworks and libraries on the current Internet is still "written for humans," which greatly annoys him.


Karpathy complained: Why do they still tell me what to do? I don't want to do anything. What text should I copy and paste to my AI agent?


The future market's significant opportunity lies in building "agent-first" infrastructure.


In this world, systems decompose into "sensors" that perceive the world and "actuators" that change it; data structures must be highly readable by large language models; and machine agents act in the cloud on behalf of individuals and institutions.
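The sensor/actuator decomposition described above can be sketched as a minimal agent loop. Everything here is a hypothetical illustration of the pattern, not an API from the interview; note that state is passed around as structured JSON rather than human-oriented prose.

```python
from dataclasses import dataclass
from typing import Callable
import json

@dataclass
class Sensor:
    name: str
    read: Callable[[], dict]     # returns structured, LLM-readable observations

@dataclass
class Actuator:
    name: str
    act: Callable[[dict], dict]  # takes a structured command, returns a result

def agent_step(sensors, actuators, decide):
    # 1. Perceive: gather observations as plain JSON, not prose documentation.
    observations = {s.name: s.read() for s in sensors}
    # 2. Decide: `decide` stands in for an LLM choosing an action from context.
    command = decide(json.dumps(observations))
    # 3. Act: dispatch the chosen command to the matching actuator.
    actuator = next(a for a in actuators if a.name == command["actuator"])
    return actuator.act(command["args"])
```

In an agent-first design, the contract at every boundary is machine-parseable, so the deciding model can be swapped without rewriting the sensors or actuators.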


In such a highly automated future, the scarce human contributions will once again be aesthetics, judgment, and the deepest business understanding.


Karpathy closed with a line he has been mulling over: You can outsource tasks, but you cannot outsource understanding.


Agent Engineering: A Productivity Explosion Far Beyond "10x Engineer"


In the dimension of productivity improvement, Karpathy distinguished two core concepts: "Vibe coding" and "Agent Engineering."


He pointed out that "Vibe coding" raises the lower bound of software development for everyone, while "Agent Engineering" aims to maintain the quality upper bound of professional software.


"Agent Engineering" is not just about speed; it requires developers to coordinate those "somewhat error-prone, stochastic but very powerful" AI agents to move forward at full speed without sacrificing quality.


This will also significantly expand what enterprises can plausibly produce.


Karpathy noted: People used to talk about the "10x engineer," but 10x no longer describes the speedup available. In my view, those who excel in this field reach a peak output far beyond 10x.


Faced with this productivity explosion, the organizational structure and talent selection logic of enterprises must be restructured.


He suggested that companies abandon traditional algorithmic interview problems and instead assess how well candidates can orchestrate multiple AI agents to collaboratively build large projects and defend them against adversarial AI agents.


Key Focus Areas for AI Business Implementation


For entrepreneurs and investors who are eager to find AI application scenarios, Karpathy provided a highly practical evaluation framework: verifiability.


Currently, AI's capabilities exhibit an extremely bizarre "sawtooth" pattern.


He gave an example: The most advanced models today can refactor a 100,000-line codebase and find zero-day vulnerabilities, yet stumble over something as trivial as walking me to the car wash 50 meters away. This is insane.


The reason for this fragmentation is that cutting-edge labs (such as OpenAI) have poured massive reinforcement learning resources into domains like math and code, where results are easy to verify.


Therefore, in any business scenario where results are verifiable, AI can unleash tremendous power.


Karpathy suggests that the market still holds many high-value, verifiable reinforcement learning environments that top labs have not yet focused on. For startups, this is a huge blue ocean to fine-tune against and monetize.
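The "verifiability" test Karpathy describes can be illustrated with a toy verifier: a domain suits RL-style training when a cheap, automatic checker can score any attempted answer. The task and checker below are invented examples of the pattern, not from the interview.

```python
def verify_sorting(candidate_fn, cases=None):
    """Return 1 if a (model-written) sorting function passes every check, else 0.

    Because the check is exact and fully automatic, millions of attempts can
    be scored with no human in the loop — the property that lets labs pour
    reinforcement learning compute into math and code.
    """
    if cases is None:
        # Deterministic test cases, including edge cases.
        cases = [[3, 1, 2], [], [5, 5, -1], list(range(10, 0, -1))]
    for data in cases:
        if candidate_fn(list(data)) != sorted(data):
            return 0  # verifiably wrong: reward signal is unambiguous
    return 1  # passed every check
```

By contrast, a task like "write a tasteful landing page" has no such checker, which is one way to read the "sawtooth" capability pattern: progress concentrates where rewards are verifiable.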



