The future isn’t coming. It’s already here—and we need to talk about it honestly.

Last month, Adam Wolff—engineer at Anthropic, former Head of Engineering at Robinhood, and one of the people whose work led to React—posted something that made me stop scrolling:

“I believe this new model in Claude Code is a glimpse of the future we’re hurtling towards, maybe as soon as the first half of next year: software engineering is done. Soon, we won’t bother to check generated code, for the same reasons we don’t check compiler output.”

That’s not a hot take from a LinkedIn influencer. That’s someone building the tools, with decades of industry credibility, making a prediction about 2026.

And we need to reckon with it.

The Hypocrisy We Don’t Talk About

Here’s what I find strange about the industry’s collective discomfort with “vibe coding”: most of us have been doing a version of it for years.

We’ve copied code wholesale from Stack Overflow without understanding every line. We’ve applied patches buried deep in GitHub issue threads—fix first, comprehend later. We’ve pulled solutions from forums, blog posts, and that one answer from 2014 that somehow still works.

That was acceptable. That was resourcefulness.

But when a developer uses an LLM to solve the same problem faster, suddenly it’s shameful? Suddenly they’re not a “real” engineer?

The difference isn’t the act—it’s the tool. And that distinction is becoming harder to justify.

The Job Titles Are Already Here

This isn’t theoretical. Companies are hiring for this explicitly now.

A Y Combinator-backed startup recently advertised for a “Vibe Coder – AI Engineer” at $150,000 plus equity. The job posting was direct: “At least 50% of the code you write right now should be done by AI; vibe coding experience is non-negotiable.”

Job boards are filling with titles like “AI-First Developer,” “Vibe Engineer,” and “AI-Powered Developer.” Some explicitly state: no traditional CS degree required—just the right mindset.

Meanwhile, in most companies, developers are using these same tools but won’t admit it. They’re vibe coding in the shadows, then presenting the output as their own work. We’ve created an environment where the most productive approach has to be hidden.

That’s absurd.

The Compiler Analogy Is More Apt Than You Think

Wolff’s comparison to compiler output deserves unpacking.

When was the last time you checked the assembly your code compiled to? You trust the compiler. You trust the abstraction. The output was never meant to be read by humans, and nobody considers that a problem.

The argument for human-readable code has always been: humans need to read it, debug it, maintain it. But what happens when the thing that wrote the code is also better at debugging and maintaining it?

This question was anticipated in the early days of LLMs. If programming languages are human-readable only so that humans can read them, and everything is compiled to binary in the end anyway, what happens when machines both generate the code and consume it?

Does the intermediate layer need to remain human-readable at all?

We’re already seeing DeepMind’s AlphaDev discover sorting algorithms by optimising assembly instructions directly—creating solutions that work but that humans struggle to interpret. The code that runs our world may increasingly be code that no human wrote or can easily understand.

Perhaps the future involves programming languages designed specifically for AI-to-AI communication: formats optimised for generation and verification, with decompilers that produce human-readable explanations when (if) we need them.

What Actually Changes

I’m not arguing that understanding systems becomes irrelevant. If anything, the opposite.

When AI handles the implementation, the engineer’s value concentrates elsewhere: architecture, system design, understanding users, defining requirements, coordinating across teams. The skills that were always the hard parts.

As Wolff put it in a follow-up: “Coding was always the easy part. The hard part is requirements, goals, feedback—figuring out what to build and whether it’s working.”

The role evolves from writing the code to orchestrating the systems that write it. From implementation to direction. From typing to thinking.

The Industry Needs to Get Ready

Here’s my concern: we’re not preparing for this.

Instead of developing frameworks for AI-assisted development, we’re pretending it isn’t happening. Instead of rethinking code review, testing, and validation for AI-generated code, we’re hoping the question goes away.

It won’t.

Y Combinator reports that 25% of startups in their Winter 2025 batch had codebases that were 95%+ AI-generated. Microsoft and Google both estimate 20-30% of their code is now AI-generated. These numbers are moving in one direction.

The companies that thrive will be the ones that build honest cultures around these tools—where using AI assistance isn’t stigmatised but systematised. Where the question isn’t “did you use AI?” but “how do we validate and ship this effectively?”

How Do We, as an Industry, Prepare?

Stop shaming vibe coding. If you're fine with Stack Overflow, you should be fine with this. Intellectual honesty demands consistency.

Start building frameworks. How do we review AI-generated code? What testing strategies make sense? What new skills should we develop? These are solvable problems, but only if we acknowledge they exist.
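One direction, offered only as a sketch: shift the review burden from reading AI-generated code line by line to validating it against a human-owned contract of examples and invariants. The snippet below assumes pytest and Hypothesis; the module invoice_totals and its function total_due are hypothetical placeholders standing in for AI-generated output, not a real library.

    # Illustrative sketch, not a prescription: humans own the tests,
    # and the AI-generated implementation is validated against them.
    # `invoice_totals` and `total_due` are hypothetical placeholders.
    import pytest
    from hypothesis import given, strategies as st

    from invoice_totals import total_due  # AI-generated implementation

    def test_requirements_as_examples():
        # Concrete cases taken straight from the requirements.
        assert total_due(subtotal=100.0, tax_rate=0.2) == pytest.approx(120.0)
        assert total_due(subtotal=0.0, tax_rate=0.2) == pytest.approx(0.0)

    @given(subtotal=st.floats(min_value=0, max_value=1e6),
           tax_rate=st.floats(min_value=0, max_value=1))
    def test_invariants_hold_for_any_input(subtotal, tax_rate):
        # Property-based checks probe behaviour no single example covers:
        # per the stated contract, the total is never negative and never
        # less than the subtotal.
        result = total_due(subtotal=subtotal, tax_rate=tax_rate)
        assert result >= subtotal >= 0

The specific libraries matter less than the shape of the workflow: the review effort moves to the contract, the examples, and the invariants, which stay human-readable even if the implementation eventually doesn't.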

Prepare your teams. The shift is happening whether we like it or not. Leaders who prepare their engineers for this transition will have a significant advantage over those who pretend it isn’t coming.

We’re at an inflection point. Software engineering isn’t dying—it’s transforming. The question is whether we’ll lead that transformation or get dragged along by it.

I know which I’d prefer.