At Sequoia's AI Ascent 2026, Andrej Karpathy — co-founder of OpenAI, former Tesla AI director, and founder of Eureka Labs — opened with a surprising admission: he's never felt more behind as a programmer. Not because he's lost his skills, but because AI-powered coding tools are changing what programming means faster than anyone can keep up.

Karpathy laid out a three-phase framework for understanding this shift: vibe coding, agentic engineering, and Software 3.0. It's the clearest mental model anyone has offered for where software development is heading — and what skills will matter most.

Key Takeaway

Vibe coding lets anyone build with prompts. Agentic engineering lets professionals build with AI agents while maintaining quality. Software 3.0 is when neural networks become the programming layer itself. We're currently transitioning from phase 1 to phase 2.

What Is Vibe Coding?

Vibe coding is Karpathy's term for the current phase where anyone — technical or not — can build software by describing what they want in natural language. You tell an AI "build me a landing page with a hero section, pricing table, and signup form" and it generates working code.

Vibe coding raised the floor. A product manager can now prototype an app. A designer can build a functional website. A student can create a tool that solves a real problem. The barrier to building dropped from "years of coding experience" to "ability to describe what you want."

Tools that enable vibe coding: Cursor, Bolt, v0, Replit, and to some extent ChatGPT and Claude with code generation.

The limitation: Vibe coding produces code that works, but often with subtle design flaws. Karpathy's example: an AI-generated app that matched Stripe payments to Google accounts through email addresses instead of persistent user IDs. The code ran fine. The architecture was wrong. A junior developer wouldn't catch it. An experienced developer would spot it immediately.
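
To make that flaw concrete, here is a minimal sketch of the two approaches. The data model and field names are invented for illustration, not taken from the talk; both functions run, but only the second survives a user changing their Google email.

```python
from dataclasses import dataclass

@dataclass
class User:
    id: str                    # persistent internal ID, never changes
    google_email: str          # can change whenever the user renames their account
    stripe_customer_id: str    # stable identifier issued by Stripe at signup

users = [User(id="u_123", google_email="alice@gmail.com", stripe_customer_id="cus_ABC")]

# The vibe-coded approach: join payment events to accounts by email address.
# It works in the demo and fails silently after an email change.
def find_user_by_payment_email(payment_email: str) -> User | None:
    return next((u for u in users if u.google_email == payment_email), None)

# The sounder approach: join on the persistent Stripe customer ID.
def find_user_by_customer_id(customer_id: str) -> User | None:
    return next((u for u in users if u.stripe_customer_id == customer_id), None)
```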

This is what Karpathy calls "jagged intelligence" — AI capability is not evenly distributed. Models spike in areas where training data, rewards, and verification loops exist, and fail unpredictably in others.

What Is Agentic Engineering?

Agentic engineering is the next phase — using AI agents for professional-quality development while maintaining human oversight over architecture, design decisions, and quality standards.

Where vibe coding is "describe what you want and accept what the AI gives you," agentic engineering is "use AI agents to execute while you direct, review, and course-correct." The human role shifts from writing code to reviewing code, from implementing to architecting, from coding to engineering.

Tools that enable agentic engineering: Claude Code, OpenAI Codex, Hermes Agent, and Cursor's agent mode. These tools don't just generate code — they plan multi-step tasks, execute them, test the results, and iterate.

| Aspect | Vibe Coding | Agentic Engineering |
| --- | --- | --- |
| Who does it | Anyone — no coding required | Developers with engineering judgment |
| AI's role | Generate code from descriptions | Execute multi-step plans autonomously |
| Human's role | Describe what you want | Direct, review, course-correct |
| Output quality | Works, but may have design flaws | Production-ready with oversight |
| Key skill | Prompting ability | Taste, judgment, architecture knowledge |
| Verification | "Does it run?" | "Is it correct, secure, maintainable?" |

What Is Software 3.0?

This is Karpathy's biggest concept, his framework for the evolution of software:

Software 1.0: Humans write every line of code. Explicit instructions. Deterministic behavior. This is traditional programming.

Software 2.0: Neural networks learn from data. Instead of writing rules, you provide examples and the model learns the patterns. Computer vision, speech recognition, recommendation systems. Karpathy coined this term in 2017.

Software 3.0: Large language models become a programmable layer. Instead of specifying every instruction, developers describe goals, constraints, and context. The model interprets and executes. The interface is natural language. The program is the prompt.

In Software 3.0, the LLM is both the runtime and the programming language. You don't write code that calls an LLM — the LLM IS the code. The prompt IS the program. And the developer's job is to design the prompt, curate the context, and verify the output.
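
As a rough illustration of the three phases, here is the same task written each way (sentiment classification, chosen for illustration, not taken from the talk). The OpenAI client is just one possible Software 3.0 runtime, and the model name is a placeholder.

```python
from openai import OpenAI   # any LLM API works here; this one is illustrative

# Software 1.0: a human writes every rule explicitly.
def sentiment_1_0(text: str) -> str:
    return "positive" if "great" in text.lower() else "negative"

# Software 2.0: a model trained on labeled examples learns the rules.
def sentiment_2_0(text: str, trained_classifier) -> str:
    return trained_classifier.predict([text])[0]   # e.g. a scikit-learn pipeline

# Software 3.0: the prompt is the program and the LLM is the runtime.
def sentiment_3_0(text: str) -> str:
    client = OpenAI()
    prompt = f"Classify the sentiment of this review as positive or negative:\n{text}"
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()
```

Notice that the third version contains no classification logic at all; the behavior lives entirely in the prompt.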

This is why context engineering is becoming the most important skill in AI. If the prompt is the program, then the context surrounding it determines the program's behavior. Managing that context is the new form of software engineering.
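
A minimal sketch of what managing that context looks like, with hypothetical helper functions for retrieval and team conventions: the same instruction wrapped in different context is effectively a different program.

```python
def retrieve_relevant_docs(task: str) -> str:
    """Pull only the API docs, schemas, or code excerpts the task needs (hypothetical)."""
    return "relevant excerpts go here"

def load_team_conventions() -> str:
    """Project-specific rules: style, security, error handling (hypothetical)."""
    return "match payments to users by persistent IDs, never by email"

def build_context(task: str) -> str:
    return (
        f"Task:\n{task}\n\n"
        f"Relevant documentation:\n{retrieve_relevant_docs(task)}\n\n"
        f"Team conventions:\n{load_team_conventions()}\n"
    )

prompt = build_context("Add a webhook handler that records Stripe payments")
```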

---

📬 Getting value from this? We publish weekly on the skills that matter in AI. Get it in your inbox →

---

Why Does Taste Matter More Than Syntax Now?

Karpathy's most quotable line from the talk: "You can outsource thinking. You cannot outsource understanding."

AI agents can write code, generate drafts, call tools, and execute tasks. But they can't tell you whether the result is good. That requires judgment — the ability to recognize when something is technically correct but architecturally wrong, when code runs but doesn't scale, when a feature works but creates a security vulnerability.

Karpathy calls this "taste." It's the ability to look at AI-generated output and know whether it's right. Not just "does it compile" but "would an experienced engineer build it this way?"

The practical implication: developers who understand fundamentals deeply will thrive. Developers who only know surface-level coding will struggle, because AI does surface-level coding better than they do. The value moves up the stack — from implementation to architecture, from syntax to judgment, from writing code to evaluating code.

What Does "Jagged Intelligence" Mean?

This is Karpathy's term for AI's uneven capability distribution. Models can perform extraordinarily well on some tasks while failing unexpectedly on others. The intelligence isn't smooth — it's jagged.

Key insight: AI automates what can be verified. Tasks with clear feedback loops — does the code pass tests? does the math check out? does the output match the expected format? — are where AI excels. Tasks requiring judgment in ambiguous situations — is this the right architecture? is this user experience intuitive? does this strategy make sense? — are where AI still struggles.

For developers, this means: the more measurable the output, the more useful AI tools become. Coding, testing, data analysis, and structured content creation are highly automatable. Product design, system architecture, and strategic decision-making still require human judgment.

What Skills Do You Need for Agentic Engineering?

Based on Karpathy's framework, the skills that matter most in the agentic engineering era:

1. Deep understanding of fundamentals. When AI gets better, the temptation is to learn less. Karpathy argues the opposite. Understanding becomes the bottleneck. You need enough depth to direct the system, know what to ask, what to inspect, what to reject.

2. System design and architecture. AI writes functions. You design systems. Understanding how components interact, where failure points exist, and how to build for scale becomes the core developer skill.

3. Verification and review. The ability to read AI-generated code critically — not just "does it work" but "is this the right approach?" This requires the same deep knowledge that lets you write code, applied differently.

4. Prompt and context engineering. If the prompt is the program, writing good prompts is programming. The ICCSSE framework (Identity, Context, Constraints, Steps, Specifics, Examples) applies directly to agentic engineering — you're giving the agent a clear specification, just like you'd write a design doc for a human developer. A sketch of such a specification follows this list.

5. Tool orchestration. Knowing which AI tool to use for which task. Claude Code for writing features, Cursor for editing, Hermes Agent for automation, Copilot for suggestions. The best engineers combine multiple tools rather than relying on any single one.
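
Here is what an ICCSSE-structured specification for a coding agent might look like, sketched as a single prompt string; the task and details are invented for illustration.

```python
AGENT_PROMPT = """
Identity: You are a senior backend engineer working in our payments service.

Context: The service is a Python web app. Stripe webhooks arrive at
/webhooks/stripe and must be recorded against existing user accounts.

Constraints: Match payments to users by the stored stripe_customer_id,
never by email. Do not add new dependencies. Every change needs tests.

Steps: 1) Read the existing webhook handler. 2) Propose a diff.
3) Run the test suite. 4) Summarize what changed and why.

Specifics: Log the Stripe event ID on every write. Handle duplicate
webhook deliveries idempotently.

Examples: A good summary looks like: "Added an idempotency check in
handle_payment(); two new tests; no schema changes."
"""
```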

To sharpen your prompting for AI coding tools, try the free Prompt Optimizer — it applies structured frameworks to any prompt. And for a practical guide to the top AI coding tools available now, see our Claude Code vs Codex comparison.

---

📬 Want more like this? We cover the ideas that shape how AI practitioners work, weekly. Subscribe free →

---

Frequently Asked Questions

Is vibe coding dead?

No. Vibe coding is here to stay — it's the entry point for non-developers to build software. Agentic engineering is the professional evolution for people who need production-quality output. Both coexist. Think of vibe coding as sketching and agentic engineering as architecture.

Do I need to learn to code if AI writes code for me?

Yes, but differently. You don't need to memorize syntax. You need to understand how software works — data structures, system design, security principles, performance tradeoffs. The knowledge that lets you evaluate whether AI-generated code is correct and appropriate.

When will Software 3.0 be mainstream?

It's already happening in narrow domains — coding assistants, content generation, data analysis. Full Software 3.0 (where most applications are LLM-native) is 3-5 years away based on current trajectories. The transition will be gradual, not sudden.

What's the difference between agentic engineering and using ChatGPT for coding?

ChatGPT generates code snippets in response to prompts. Agentic engineering uses autonomous agents (Claude Code, Codex, Hermes) that plan multi-step tasks, execute them across your actual codebase, test the results, and iterate — all with minimal human intervention. It's the difference between asking for directions and hiring a driver.

Disclosure: Some links in this article are affiliate links. We only recommend tools we've personally tested and use regularly. See our full disclosure policy.