Faster Output Demands a Higher Quality Bar

Speed without proof just moves the cost somewhere else. To the reviewer who has to catch what you didn't. To the on-call engineer who finds out at 2am. To the next developer who inherits code that looked right but wasn't.

This is the part of the AI productivity story that gets skipped. When code generation compresses, validation doesn't compress with it. The ratio shifts but the proof requirement doesn't.

Our job as engineers isn't to produce code. It's to deliver outcomes for the people using what we build. Code is the mechanism, not the point. A feature that ships fast and breaks in a real user's hands didn't ship. Including proof that it works is the actual job.


Manual testing first. If you haven't seen the code do the right thing yourself, it doesn't work yet. Finding out from a reviewer or from production isn't the same thing as knowing.

Once you've covered the happy path, start breaking it. Edge cases, error conditions, unexpected inputs. This is a skill AI doesn't have. It can generate tests if you describe what to test, but it can't tell you what you haven't thought to test. That requires someone who understands the system well enough to anticipate how it fails.

Automated testing follows. Easier now than it's ever been. AI generates test scaffolding quickly, so there's no longer a good excuse for skipping it. The bar has moved precisely because the tooling got better.
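That ladder, happy path first, then deliberate breakage, then automation, can be sketched with a trivial example. Everything here is hypothetical (the function `normalize_email` and its rules are invented for illustration); the point is the shape: the happy-path assertion is what AI scaffolding produces readily, while the edge cases below it are the ones someone who understands the system has to know to ask for.

```python
# Hypothetical function under test: normalize a user-supplied email address.
def normalize_email(raw: str) -> str:
    if not isinstance(raw, str):
        raise TypeError("email must be a string")
    cleaned = raw.strip().lower()
    if "@" not in cleaned or cleaned.startswith("@") or cleaned.endswith("@"):
        raise ValueError(f"invalid email: {raw!r}")
    return cleaned

# Happy path: the test that's easy to generate.
assert normalize_email("Alice@Example.COM") == "alice@example.com"

# Edge cases: the tests you only write if you've thought about how it fails.
assert normalize_email("  bob@example.com\n") == "bob@example.com"  # stray whitespace
for bad in ["", "no-at-sign", "@example.com", "alice@"]:
    try:
        normalize_email(bad)
        raise AssertionError(f"expected ValueError for {bad!r}")
    except ValueError:
        pass  # rejected, as it should be
```

The happy-path line took seconds; the loop over malformed inputs is where the actual knowledge lives.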


Watch a senior engineer use AI and it looks like magic. Complete features in minutes, tests included. But look closely at what they're actually doing. They're not accepting output. They're shaping it. They came to the task knowing what good looks like, they're detecting drift when it happens, and they're correcting it. The AI accelerates their implementation. Their judgment is what keeps it sound.

Junior engineers often miss this. They accept output more readily, move faster, and produce what gets called "house of cards" code: it looks complete until real-world pressure is applied.

The difference isn't that one uses AI and the other doesn't. It's that one knows what they want before they start, and can tell when they're not there yet. That's always been what distinguishes senior work. AI didn't raise that bar. It made clearing it harder to fake.


Speed matters because customer outcomes matter. Getting something working in front of a real user faster is genuinely valuable, since that's how you learn what to build next. But "faster" only counts if what you shipped actually works. A faster feedback loop built on broken code isn't a feedback loop. It's noise.

The instinct to move faster is right. The mistake is treating generation speed and validation rigor as a tradeoff. They're not. More output with weak validation means more review burden, more incidents, more debt accumulating faster than anyone can pay it down. More output with strong validation means durable velocity. The discipline is what makes the speed stick.


Part 4 of 14 — What I Think About AI Engineering

← The Ratios Shift, But the Real Work Stays    You Are the Compiler Operator →