Don't Delegate the Thinking

The pull to hand everything over is real. Tools are capable enough now that you can get something that looks like a complete answer to almost any problem, fast. Reaching for that feels like the efficient choice.

The problem is that fluent output looks like thinking. It reads like a decision was made. But the model didn't decide anything. It generated the most probable continuation of your prompt. The judgment that should have happened before that output didn't happen. It got skipped.


Ted Chiang wrote about this in a different context. He was talking about writing, but the observation transfers: your first draft isn't an unoriginal idea expressed clearly. It's an original idea expressed poorly, and it's accompanied by your dissatisfaction — your awareness of the gap between what it says and what you wanted it to say. That dissatisfaction is doing real work. It's the signal that drives revision.

When you start with text a model generated, that dissatisfaction is gone. You're editing, not thinking. You're reacting to what's in front of you rather than working from an intention that was yours to begin with.

The same thing happens with architecture decisions, system design, and technical tradeoffs. If the model makes those calls and you review the output, you're not the author of the decision. You're the approver. Those are different jobs, and only one of them builds judgment over time.


There's plenty worth delegating: drafting, scaffolding, boilerplate, repetition, the parts of implementation that are mechanical once you know what you want. AI compresses those meaningfully, and the compression is real.

What's worth keeping: the thinking that happens before any of that. What should this system do? What are the constraints? What does good look like here? What am I willing to be responsible for? Those questions have to be yours, answered before the model gets involved. Otherwise you're not engineering. You're supervising.


My operating principle, stated plainly: delegate the expression, keep the thinking. Let AI handle the scaffolding of ideas I've already formed. Use it to move faster on decisions I've already made. When I notice I'm reaching for it because I don't want to think through something hard, that's the sign to close the chat window and think it through.

The developers getting the most out of these tools aren't the ones offloading the most. They're the ones who've figured out which parts of their work are genuinely mechanical and which parts require them specifically, and they're protecting the second category.

Vibe Coding Is Real, But Vibe Thinking Isn't

Andrej Karpathy named it in early 2025: "There's a new kind of coding I call vibe coding, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists." Accept all diffs. Don't read them. When you get an error, paste it back in with no comment. The code grows beyond your comprehension. It mostly works.

He was describing throwaway weekend projects, and he said as much. But the term escaped that context immediately, and now "vibe coding" gets used to describe everything from a Saturday prototype to a serious development approach. Those are not the same thing.


For a weekend project or a proof of concept, vibe coding is fine. The goal is to find out whether something is worth building, not to build the thing. Keeping the code comprehensible to you is overhead you don't need yet. Move fast, see what you learn, and throw it away or start over properly if it turns out to matter.

The failure mode is treating that mode as a development philosophy. Production systems evolve. Other people have to work in them. Requirements change in ways nobody predicted. Code that grew beyond your comprehension during a weekend session will not hold up to any of that. The question isn't whether it works now. It's whether it's something you can maintain, explain, and build on.


The name is the tell. It's called vibe coding, not vibe thinking. The vibe is in the coding part specifically, because that's what got offloaded. The thinking didn't go anywhere.

Architecture, system design, what to build and what not to build, how the pieces fit together, what happens when this breaks at 2am — none of that is captured in "vibe." The developers who are genuinely productive with AI aren't thinking less. They're thinking about different things. One engineer put it plainly: just because he doesn't write the code anymore doesn't mean he doesn't think hard about architecture, dependencies, and how to delight users. Using AI meant expectations of what to ship went up, not that the thinking requirement went down.


Simon Willison reframed this usefully. He pointed out that the skills needed to manage AI agents well map closely to the skills needed to manage engineers: clear task definition, knowing when to course-correct, understanding what good output looks like, reviewing work critically rather than accepting it. Almost all of those are characteristics of senior engineers.

If vibe coding means delegating typing, that's fine. If it means delegating judgment, you're not a developer using AI. You're a rubber stamp on a stochastic process.


The vibe coder's career path is the version of this that concerns me most. A junior developer who spends two years accepting diffs without understanding them hasn't spent two years building experience. They've spent two years getting output. Those aren't the same thing, and the difference shows up the first time the model is wrong and they can't tell.

The Junior Developer Pipeline Problem

IT and software engineering employment is down 6% for workers aged 22 to 25. For workers aged 35 to 49, it's up 9%. Those numbers are from recent labor data, and they're moving in the direction you'd expect if AI is compressing entry-level work. The market is already repricing.

This is the part of the AI productivity story that gets the least attention. The conversation is mostly about what experienced engineers can do with these tools. The quieter question is what happens to the engineers who were supposed to become experienced.


Learning to code involves a particular kind of struggle that's hard to shortcut. You hit a problem you don't understand, you root around in it blindly, you form wrong hypotheses and test them, and eventually something clicks that wouldn't have clicked any other way. That process is uncomfortable, and it's also how fundamentals get built.

AI eliminates that phase almost entirely. A junior developer who reaches for a model every time they hit friction isn't developing the tolerance for not knowing. They're getting answers. Those are different things. The answers might even be correct. Correct answers don't build the mental models that let you recognize when an answer is wrong.

The result is what engineering managers are starting to describe: juniors who can produce working code but can't explain it, who struggle with edge cases and unexpected behavior, and who are difficult to mentor because they've bypassed the failures that normally create teachable moments. The code works until it doesn't, and when it doesn't, they don't have the foundation to debug it.


There's a direct cost to the seniors on their teams too. Reviewing AI-generated code from a junior who doesn't fully understand it requires more scrutiny than reviewing code they wrote themselves. The junior moves faster. The senior pays the tax. That dynamic is showing up quietly across teams that haven't named it yet.


The new bar for entry-level hiring is shifting toward what some people call high agency: the ability to figure things out without being handed a ticket, debug AI output rather than just generate it, and take responsibility for what ships. The average developer waits for instructions. The one worth hiring solves the problem.

That bar is actually higher than the old one. It requires judgment and initiative that used to develop gradually over the first few years of a career. The question is whether developers are still getting those years, or whether they're spending them getting output instead of getting experience.


The pipeline argument is simple and tends to get ignored: if you stop hiring junior developers, you eventually stop producing senior developers. AI can accelerate an experienced engineer's output. It can't manufacture the years of context, failure, and accumulated judgment that make someone senior. That still takes time, and it still requires doing hard things without a safety net.

Organizations cutting junior headcount in response to AI productivity gains are making a short-term calculation with a long-term cost that won't appear on any dashboard until it's too late to fix quickly.