Mar 8, 2026

5 min read

Your AI Roadmap Is Built Backwards. Here's How to Flip It.



Most AI teams don't build bad roadmaps on purpose.

They just start with the wrong question.

"What can this model do?" feels like the right place to start. It's exciting. There's a demo ready in hours. The team is energized. Product, engineering, and leadership are all aligned on a capability.

Three months after launch, usage is flat. The team is confused. The demo was great. The model performed well. So why isn't anyone using it?

Because the roadmap was built backwards.

The Question That Gets Skipped

Most AI roadmaps start with capability. Here's the stack, here's what it can do, here's a workflow we can plug it into.

What they skip is deceptively simple: What is the user doing right now that the product isn't handling?

Not what the AI can automate. Not what the model is good at. What is the user doing manually, outside your product, right now — today — to fill a gap your system leaves open?

That's your product brief. Everything else is implementation detail.

The reason this question gets skipped isn't laziness. "What can AI do?" is just a much easier question to answer. You can demo it. You can show stakeholders. You can get buy-in. "What's actually broken in the workflow?" requires sitting with users, watching them work, asking uncomfortable questions, and often hearing answers that make your original plan feel naive.

So teams skip it. And the roadmap gets built on top of a capability looking for a problem.

The Three Questions That Actually Matter

Before you touch a model, a framework, or an API — answer these three:

1. Where does someone leave your product to do something manually?

This is the exit point. The moment a user opens a spreadsheet, fires off a Slack message, exports a CSV, or emails a colleague something your product should have answered. Every manual exit is a product failure you haven't fixed yet.

Find the exits. Map them. Rank them by frequency and pain.
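One way to make that ranking concrete is a simple frequency-times-pain score. This is a minimal sketch, not a prescribed method, and the exits and scores below are hypothetical placeholders; substitute what you actually observe:

```python
# A minimal sketch of ranking manual "exit points" by frequency and pain.
# Frequency: roughly how many times per week the exit happens.
# Pain: a 1-5 judgment of how disruptive the workaround is.
# All entries below are hypothetical examples.

exits = [
    {"exit": "exports a CSV to build a weekly report", "frequency": 5, "pain": 4},
    {"exit": "asks a colleague over Slack for a number", "frequency": 8, "pain": 2},
    {"exit": "re-keys data into a spreadsheet", "frequency": 3, "pain": 5},
]

def priority(e):
    # Simple priority: how often it happens times how much it hurts.
    return e["frequency"] * e["pain"]

ranked = sorted(exits, key=priority, reverse=True)

for e in ranked:
    print(f"{priority(e):>3}  {e['exit']}")
```

The scoring function is deliberately crude; the point is to force the team to compare exits against each other instead of chasing whichever one came up in the last meeting.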

2. Why can't they do it inside the product?

This question reveals whether you have a data problem, a logic problem, a UX problem, or a genuine AI opportunity. Most of the time it's not AI that's missing — it's a missing integration, an underdocumented edge case, or a workflow assumption that was wrong from the start.

The ones that are genuinely AI opportunities? That's your shortlist.

3. What does "done" look like for the user?

Not "what will the AI output?" What does the user's day look like when this problem is solved? What decision do they make faster? What do they stop doing? What do they trust that they currently don't?

I ask this question in every discovery session I run. It's the one that stalls most often. Teams can describe the output clearly but struggle to describe the behavioral change. That gap — between "here's what the AI will produce" and "here's how the user's work actually changes" — is where most AI features quietly die after launch.

If you can't answer question three concretely, you don't have a product brief. You have a feature hypothesis.

The Workaround Signal

Here's the thing about workflow gaps: users don't wait for you to fix them.

They build workarounds. A spreadsheet that pulls data from three systems. A weekly Slack message that acts as a manual checkpoint. A folder named REPORT_FINAL_FINAL_v3 that two people maintain in parallel because the system can't reconcile their changes automatically.

Workarounds are the most honest signal in product development. They tell you exactly what the user needs — because the user already built it themselves. It's just fragile, manual, and doesn't scale.

The best AI products I've seen didn't start with a model. They started with a workaround so painful that users had already spent hours trying to automate around it. The AI made the workaround disappear.

Find the workarounds before you write a single line of code. They're your real roadmap.

The Right Order

The backwards roadmap looks like this: capability → use case → user adoption (hoped for).

The right order: workflow gap → workaround → product brief → model selection → build.

Model selection is fourth. Not first. The model is a tool you pick to solve a defined problem — not a starting point you reverse-engineer a problem from.

This sounds obvious when you write it out. It's remarkable how rarely it happens in practice.

The shortcut that trips teams up is the demo. A good AI demo is so compelling that it creates the illusion of product-market fit before a single user has changed their behavior. The team sees the demo and assumes adoption will follow. It usually doesn't — because the demo solved a problem the team found, not a problem the user actually has.

A Check Before Your Next Sprint

Before picking a model or scoping an AI feature, answer these four questions:

  1. What is the user doing manually that they shouldn't have to?

  2. How did you find out — observed, or assumed?

  3. What workaround have they already built?

  4. What does "this problem is solved" look like for them specifically?

Question two is the one that matters most. Observed beats assumed every time. If your answer to any of these is a guess, you're building the roadmap backwards.

The model can wait. The workflow can't.