Industry news roundup #3
Counter-order, comrades: vibe coding is off, back to rigorous development (but still AI-assisted)

This is not really industry news, but it might have an impact on the AI world. A few months after his famous tweet that introduced the notion of vibe coding, Karpathy is backtracking a bit.
To be fair, when he described vibe coding he did specify that it was “not too bad for throwaway weekend projects”. However, it seems that the entire world didn’t read that part and jumped to the conclusion that you can vibe code your way through any software project, with profound implications for the future of the software-engineer role. If anyone with no engineering background could vibe code software of any complexity, the role of the software engineer would be dramatically downsized, if not eliminated entirely.
In his latest tweet, Karpathy clarified that for “code he actually and professionally cares about,” vibe coding doesn’t cut it. You can’t just blindly commit and push to prod whatever code happens to compile without errors. You need to review, inspect, debug, double-check, test … all activities that require solid software-engineering fundamentals.
One interesting passage is the following:
The emphasis is on keeping a very tight leash on this new over-eager junior intern savant with encyclopedic knowledge of software, but who also bullshits you all the time, has an over-abundance of courage and shows little to no taste for good code
The most advanced AI products for the SDLC are now in the agentic phase. Simple code completion keeps the developer too close to the act of programming; the agentic vision raises the level of abstraction so that instructions are given directly in natural language. Agentic mode can be synchronous or asynchronous.
Synchronous mode is like a chat experience. The developer has a conversation with the agent, which produces code. The developer inspects the code, asks the agent to make changes, and iterates until it’s acceptable and ready to commit. You rinse and repeat with the next thing to implement. This mode is more of a lateral improvement to current software-engineering practice: the individual developer is more productive, but the boost is limited by the tight, synchronous interaction between developer and AI agent, who now form a pair. It feels more like 2x than 10x.¹
Asynchronous mode is like Deep Research in ChatGPT. The developer assigns a task to an agent, which then goes off to implement the solution. GitHub Copilot’s async agent, Padawan, works this way: you create an issue with specs, assign it to Copilot, and wait for a PR that (hopefully) does what you asked.
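To make that loop concrete, here is a minimal sketch of the “file an issue, get back a PR” workflow using the GitHub REST API from Python. The repository name and the issue spec are made up, and assigning the coding agent by simply adding “Copilot” as an assignee is an assumption (in practice the assignment may have to go through the web UI or the GraphQL API), so treat the assignee handling as illustrative rather than as the product’s documented interface.

```python
# Sketch of the async-agent loop: file a spec as a GitHub issue, hand it to an
# agent, then poll for the resulting PR. GITHUB_TOKEN is assumed to be set and
# to have repo scope; the "Copilot" assignee is an assumption, not documented API.
import os
import requests

OWNER, REPO = "my-org", "my-service"  # hypothetical repository
API = f"https://api.github.com/repos/{OWNER}/{REPO}"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def file_task(title: str, spec: str) -> int:
    """Create an issue containing the spec and (optimistically) assign the agent."""
    resp = requests.post(
        f"{API}/issues",
        headers=HEADERS,
        json={"title": title, "body": spec, "assignees": ["Copilot"]},  # assumption
    )
    resp.raise_for_status()
    return resp.json()["number"]

def open_pull_requests() -> list[str]:
    """Poll open PRs: the agent's output lands here, waiting for human review."""
    resp = requests.get(f"{API}/pulls", headers=HEADERS, params={"state": "open"})
    resp.raise_for_status()
    return [pr["title"] for pr in resp.json()]

if __name__ == "__main__":
    n = file_task(
        "Add rate limiting to the public API",
        "Limit unauthenticated clients to 60 requests/minute; return 429 with Retry-After.",
    )
    print(f"Filed issue #{n}; waiting for the agent's PR...")
    print(open_pull_requests())
```

The point of the sketch is the shape of the loop, not the specific calls: the human writes a spec, the agent produces a PR, and all the reviewing happens after the fact, which is exactly where the control problem discussed below shows up.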
In theory, async agents are the real game-changer. A developer is no longer constrained to a one-to-one pairing and can command an army of agents working in parallel. The true 10x or even 100x productivity leap only comes with async agents, and that would genuinely revolutionize the SDLC.
However, in async mode, keeping tight control defeats the purpose of horizontal scaling via parallel execution. Why assign a series of tiny, narrow-scope issues to Padawan in sequence? At that point, interactive development with a sync agent is more efficient.
In async mode we’re back to vibe coding: the agent returns a big diff, a complete solution that’s much harder to inspect because of its size, and you are forced to accept it at face value, with all the associated risks. It’s no coincidence Cursor originally called this YOLO mode (now auto-run).
With his latest viral tweet, Karpathy has drawn a divide that could define the industry: vibe coding and async agents for low-risk throwaway projects and prototypes, vs AI-assisted development and sync agents for the serious stuff. As always, only time will tell.
Do you like this post? Of course you do. Share it on Twitter/X, LinkedIn and Hacker News.
¹ I’m using 2x and 10x in a ballpark sense. Technically, 2x is a 100% increase in productivity, which alone would be an enormous boost, never before seen in human history.