Strategy in the Age of LLM Wrappers
Most so-called "AI-powered" products follow the same pattern: take a task someone does manually, write a prompt that automates it, wrap it in a clean UI, and ship. Some of these are genuinely useful. Most aren't defensible.
The tell is simple. If your entire product could be replicated by a developer with an API key and a weekend, it's not a product. It's a demonstration.
The dependency chain of a typical AI wrapper runs like this: the product sits on top of OpenAI, which runs on Azure, which runs on NVIDIA. Nobody in that chain except NVIDIA is difficult to displace. The wrapper is the most exposed position in the stack. That's where most of what's being built right now lives.
This doesn't mean you can't build a real business on top of foundation models. It means the business has to be about something other than the model. The model is infrastructure. What you build on top of it, and for whom, is the actual question.
The moats that hold up tend to come from assets the model can't provide. Proprietary data is the most discussed: a model fine-tuned on your company's historical interactions, domain corpus, or customer behavior does things a general-purpose model can't replicate. The data is the asset.
Less discussed, and harder to copy, is deep customer outcome knowledge. Knowing not just what your customers do but why, what they're actually trying to accomplish, and where the friction is between their current state and that goal. That takes years of proximity to the problem. It can't be prompted into existence.
Regulatory lock-in and becoming a program of record are the quieter moats. If your product is the system of record for a compliance workflow, or switching away requires a regulatory re-certification, you have durability that has nothing to do with model capability.
The framing shift that matters here is from Minimum Viable Product to Minimum Productive Outcome. MVP made sense when the bottleneck was shipping: build the smallest thing that demonstrates the idea, learn, iterate. When AI compresses build time, shipping fast enough to learn is no longer the constraint.
Minimum Productive Outcome asks a different question: what's the smallest thing that produces a real result for the customer? Not a demo. Not a prototype. An outcome. The success metric moves from "did we ship" to "did it work for the person using it."
On pricing: racing to be cheapest in a commoditized market is a losing strategy in any industry. Pricing power comes from solving substantial problems or saving meaningful time. Customers will pay for that. The market is consistent on this point regardless of what the underlying model costs.
The products that matter in five years won't be the ones with the best prompt engineering. They'll be the ones that accumulated proprietary data, developed genuine depth on a customer problem, and built something a user would feel the cost of losing. None of that is prompt-dependent.
Part 7 of 14 — What I Think About AI Engineering