---
title: "Three Things I Keep Coming Back To"
date: 2026-04-01
authors:
  - anveo
description: "After all the frameworks, three patterns keep reasserting themselves: context is the ceiling, validation is the bottleneck, and seniority is the multiplier."
slug: three-things
tags: [ai-engineering, software-development]
series: "What I Think About AI Engineering"
series_order: 10
notion_sources:
  - "On AI Acceleration & Amplification: https://www.notion.so/3298f83c9129801eb6bbfdd82a442900"
draft: false
---
# Three Things I Keep Coming Back To
**Context is the ceiling.** The limiting factor in any AI-assisted engineering session isn't model capability or prompt quality. It's how much the model actually knows about the system it's working in.
The model might know everything about programming in general and nothing about your system specifically. That gap is where sessions fall apart. Why does this component work the way it does? What implicit rule governs this service? What's the business constraint behind this particular decision? None of that is written down anywhere a model can find it. It lives in the heads of the engineers who built the thing, accumulated over years of decisions that didn't feel like decisions at the time.
The investment that pays compound returns isn't better prompting. It's making that context explicit: documented conventions, clear invariants, files that encode how the system works and why. AI with deep context performs like a different tool than AI without it.
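As a concrete sketch of what "making that context explicit" can look like: a conventions file checked into the repo where a model (or a new engineer) will actually read it. Everything below is hypothetical, an illustration of the shape rather than a prescription; the filename, service, and rules are invented for the example:

```markdown
<!-- CONVENTIONS.md — hypothetical example of an explicit-context file -->
# Conventions for the billing service

## Invariants
- Money amounts are integer cents, never floats. The `Money` type enforces
  this; do not pass raw ints around it.
- Invoice status only moves forward: draft → issued → paid or void.
  Nothing un-issues an invoice.

## Why it is this way
- Every mutation endpoint is idempotent, keyed on a client-supplied
  `request_id`. This exists because retries once produced duplicate
  charges; do not remove the key check to "simplify" a handler.

## When in doubt
- New tables get a migration plus a rollback script, reviewed together.
```

The point isn't the specific rules. It's that the rules and the reasons behind them live somewhere findable instead of in someone's head, which is exactly the context a model lacks by default.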
**Validation is the bottleneck.** Several pieces in this series have circled this point from different directions, because it remains the constraint teams actually hit. AI accelerated code generation before most teams had built the validation muscle to keep pace with it.
The hard part was always the edge cases, the verification, the catching of regressions. Experienced engineers knew this. "Coding is 20% of the work" is a cliché precisely because it's true and people keep forgetting it. AI just made forgetting it more expensive.
The bottleneck won't be fixed by better models. A more capable model generates more code, which requires more validation, which deepens the bottleneck. The teams that solve it invest in test infrastructure, review culture, and the discipline to treat unproven code as unfinished work. Those investments existed before AI. They're just more urgent now.
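One lightweight way teams encode "unproven code is unfinished work" is to put the validation bar directly into the review process. The snippet below is a hypothetical pull-request template, a sketch of the discipline rather than a recommended standard; the file path follows GitHub's template convention, but every checklist item is an invented example:

```markdown
<!-- .github/pull_request_template.md — hypothetical example -->
## Validation (required before requesting review)
- [ ] New behavior is covered by a test that fails without this change
- [ ] Edge cases are listed below, each with a test or an explicit
      "accepted risk" note
- [ ] If any code was AI-generated, the author has read every line and
      can explain why it is there
```

A checklist doesn't create the culture on its own, but it makes the expectation visible on every change, which is where review discipline either holds or erodes.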
**Seniority is the multiplier.** This one cuts against a narrative worth pushing back on: that AI is an equalizer, that a junior developer with good tooling can produce what a senior would.
In a narrow sense, that's sometimes true. In a broader sense, it misses where the value actually sits. A senior engineer gets more out of AI not because they're faster with the tools but because they bring context the tools don't have. They know what they want before asking. They catch drift when it happens. And much of their value lies in knowing when the model is wrong, which takes the kind of judgment that only comes from building things, breaking them, and learning what failure looks like up close.
That's not automatable. It's the product of experience, and AI doesn't manufacture experience.
**Part 10 of 14 — What I Think About AI Engineering**