Where AI Helps (and Where It Doesn't)

The most useful question to ask about any AI coding tool isn't "is it good?" It's: what constraint does it actually relieve?

AI relieves the constraint on implementation throughput. It's fast at boilerplate, fast at scaffolding, fast at translating a well-specified problem into working code. On greenfield projects (fresh codebase, clean problem, minimal history) that's a lot of the work. You can build something in a day that would have taken a week.

Most engineers aren't on greenfield projects. They're on codebases with years of accumulated decisions, implicit conventions, and business logic that evolved from something else. The context that matters (why a system was designed a certain way, what the implicit rules are, what breaks when you touch a particular component) lives in people's heads. It isn't written down. AI doesn't have it.

That's not a model failure. It's a category mismatch.


The pattern holds across different types of constraints.

When the bottleneck is implementation skill, AI helps a lot. It compresses the work. When the bottleneck is domain knowledge (understanding a complex financial product well enough to model it correctly, knowing why a particular edge case in a 15-year-old system is handled the way it is), AI helps less. It can surface patterns from training data, but it doesn't know your domain the way a tenured engineer does.

When the bottleneck is organizational, AI might not help at all. Regulatory approval takes as long as it takes. Getting three teams to agree on an interface is a people problem, not a coding problem. Producing a working prototype before the actual constraint is resolved doesn't resolve the constraint. Sometimes it creates a new one, because now there's a demo and stakeholders have opinions.

Attaching a code-generating tool to a bottleneck that isn't code generation doesn't unblock you. It just produces more code upstream of it.


There's a specific trap on mature codebases worth naming. The complexity means the model is constantly making changes that are technically plausible but contextually wrong. A senior engineer I know described it plainly: AI can speed up initial engineering time significantly, but that saved time often gets consumed in extended review, fact-checking, and remediation. Net zero. The codebase has nuance the model doesn't know about, and catching its mistakes takes real attention.

Greenfield is genuinely different. The context gap is smaller because there isn't much context yet. Constraints are mostly implementation constraints, which is where AI is strongest. This is where a lot of the dramatic productivity stories come from (someone builds an entire working service in a day), and those stories are real. It's just not the situation most developers are in most of the time.


Before reaching for AI on any task, it's worth asking what's actually slowing you down. If the answer is typing speed or implementation volume, AI will compress that. If the answer is that you don't fully understand the problem, that the domain is unclear, or that the real obstacle is a conversation you haven't had yet, AI can't fix any of that. It'll give you something that looks like progress while the actual problem waits.


Part 2 of 14: What I Think About AI Engineering

← AI Is an Amplifier, Not a Replacement    The Ratios Shift, But the Real Work Stays →