Introduction #
I was recently working on an Agent product, and halfway through writing the PRD, I realized something: the old approach felt like using a screwdriver to hammer a nail. Not completely useless, but definitely not the right tool. Feature lists, page structures, interaction flows — the usual suspects were all there, but they couldn’t cover what an Agent product actually needs to define.
So I scrapped the entire PRD and rewrote it from scratch, shifting the focus from “describing features” to “describing decisions.” This post is about why I changed my approach and what the new structure looks like.
Where Traditional PRDs Fall Short #
The fundamental assumption behind traditional PRDs is that system behavior is predictable — you define the transitions between states and you’re done. That works well for deterministic products, but Agent products are inherently uncertain: the same user input can lead to completely different processing paths depending on context.
There are a few obvious blind spots:
Intent understanding is left undefined. In traditional products, users express intent through buttons, menus, and forms — intent is explicit. In Agent products, users speak in natural language, and intent is implicit[^1]. “Check last month’s data” could mean a summary overview, a month-over-month comparison, or an export for a board meeting. If you don’t map out the intent space first, everything downstream is built on sand.
Tool invocation conditions are missing. Many Agent PRDs list a string of capabilities: “supports knowledge base retrieval, search, ticketing system integration.” But what actually determines the user experience is when to invoke each tool, in what order, and what happens on failure[^2]. Tool invocation is itself a product decision — you can’t just leave it to the engineering team to figure out.
Boundary conditions are treated as edge cases. In traditional products, error handling is an afterthought — you add a section at the end of the doc and call it done. But in Agent products, insufficient information, tool failures, data conflicts, and hallucination risks aren’t rare edge cases — they happen every day as part of the main flow[^3]. Treating them as “exception handling” is a fundamental misjudgment.
A Different Approach: Start with the Decision Flow #
Once I recognized these issues, I changed the starting point of my PRD from “feature modules” to “decision flow.”
The core idea: after a user says something, the system first checks whether the intent is clear — if not, it asks for clarification. If the intent is clear, it checks whether external tools are needed. After calling tools, it checks whether the action is high-risk — if so, it requires user confirmation. The entire chain is a series of decision nodes.
I sketched this logic with Mermaid[^4]:
```mermaid
graph TD
    A([User Input]) --> B{Clear intent identified?}
    B -- No --> C[Ask for clarification]
    B -- Yes --> D{External tool needed?}
    D -- No --> E[Generate result directly]
    D -- Yes --> F[Call knowledge base / search / business system]
    F --> G{High-risk action?}
    G -- Yes --> H[Request user confirmation]
    G -- No --> I[Execute and return result]
    H -- Confirmed --> I
```
This isn’t a universal template — different business scenarios will need different branches and conditions. But it gives the PRD a backbone: every diamond node is a decision rule that needs to be clearly defined, and every rectangle is a behavior that needs to be precisely described.
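The decision chain above can also be sketched as plain code. This is a minimal illustration, not an implementation — the predicate arguments stand in for real intent classifiers and risk checks that a production system would supply:

```python
# Minimal sketch of the decision flow: each `if` is one diamond node
# from the diagram. The boolean arguments are placeholders for real
# classifiers (intent detection, tool routing, risk assessment).

def handle_input(intent_is_clear, needs_tool, is_high_risk):
    """Walk the decision nodes for a single user turn."""
    if not intent_is_clear:
        return "ask_for_clarification"   # B -- No --> C
    if not needs_tool:
        return "generate_directly"       # D -- No --> E
    # D -- Yes --> F: call knowledge base / search / business system
    if is_high_risk:
        return "request_confirmation"    # G -- Yes --> H
    return "execute_and_return"          # G -- No --> I
```

Every return value corresponds to a rectangle in the diagram — a behavior the PRD has to describe precisely.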
How I Write Intent Breakdowns #
I use a structured table for intent breakdowns, not just a list of intent names. Each intent covers at least these fields: intent name, typical expression, actual goal, priority, whether automatic execution is allowed, whether secondary confirmation is required.
Take a real example. The single input “I can’t log in” hides at least four intents: forgotten password, locked account, new employee without access, and SSO misconfiguration. Each requires a completely different processing path. If the PRD just says “automatically identify issue category and create a ticket,” the engineering team is left guessing.
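To make the table format concrete, here is one way to encode those four login intents as structured data. The field names mirror the table columns; the intent names and values are illustrative examples, not a real spec:

```python
# Illustrative intent breakdown for "I can't log in" — one row per
# intent, with the fields from the table above. Values are examples.

INTENTS = [
    {"name": "forgotten_password",  "typical": "I forgot my password",
     "goal": "reset credentials",            "priority": 1,
     "auto_execute": True,  "needs_confirmation": False},
    {"name": "locked_account",      "typical": "my account is locked",
     "goal": "unlock after identity check",  "priority": 1,
     "auto_execute": False, "needs_confirmation": True},
    {"name": "new_employee",        "typical": "I just joined and can't log in",
     "goal": "provision a new account",      "priority": 2,
     "auto_execute": False, "needs_confirmation": True},
    {"name": "sso_misconfiguration", "typical": "SSO keeps redirecting me",
     "goal": "route to IT with diagnostics", "priority": 2,
     "auto_execute": False, "needs_confirmation": False},
]

# Sanity check: no intent can both auto-execute and require confirmation.
assert all(not (i["auto_execute"] and i["needs_confirmation"]) for i in INTENTS)
```

Spelling the rows out like this is what forces the "four different processing paths" decision into the open instead of leaving the engineering team guessing.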
After mapping intents, there’s another important piece: what to do when information is insufficient. A user might express two requests in one sentence, or leave out critical details entirely. Do you ask once and then give your best answer? Do you keep asking until you have enough? If two rounds of clarification still aren’t enough, do you degrade gracefully or return an error? These rules need to be locked down upfront, not figured out in production.
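One possible shape for such a rule, assuming a policy of "ask at most twice, then degrade gracefully" (the round limit and fallback here are illustrative choices, not the only valid ones):

```python
# Bounded clarification policy: ask at most MAX_ROUNDS clarification
# questions; if information is still insufficient, degrade gracefully
# instead of returning an error.

MAX_ROUNDS = 2  # illustrative limit; lock this down in the PRD

def clarification_outcome(rounds_still_missing_info):
    """rounds_still_missing_info: how many consecutive replies still
    lack critical details. Returns the action the agent takes."""
    for asked in range(MAX_ROUNDS):
        if asked >= rounds_still_missing_info:
            return "proceed_with_full_info"
        # (ask a targeted clarification question here)
    return "degrade_gracefully"
```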
How I Write Tool Invocation Rules #
I don’t write tool invocation as a capability checklist — I write it as conditional logic:
- Does the user’s question involve internal data? If yes, check the knowledge base first; if no, answer directly.
- Does the knowledge base return enough information to answer the question? If not, supplement with search results.
- Does the request involve write operations (create, modify, delete)? If so, user confirmation is mandatory.
- What happens on tool timeout or error — degrade gracefully, return an error, or save a draft for the user to retry?
- When multiple tools return contradictory results, which data source takes priority?
Each of these conditions is a product decision. The more explicit you are, the less the implementation drifts, and the fewer arguments you have post-launch.
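The bullets above translate almost line-for-line into conditional logic. A sketch, with the predicate names standing in for real classifiers and the tool names purely illustrative:

```python
# The invocation rules as explicit conditions. Each branch mirrors one
# bullet; predicate arguments are placeholders for real checks.

def plan_tools(involves_internal_data, kb_is_sufficient, is_write_op):
    """Return the ordered tool plan for one request."""
    plan = []
    if involves_internal_data:
        plan.append("knowledge_base")        # check the KB first
        if not kb_is_sufficient:
            plan.append("search")            # supplement with search results
    if is_write_op:
        plan.append("user_confirmation")     # mandatory for create/modify/delete
    return plan or ["answer_directly"]       # don't invoke unless necessary
```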
Every additional tool invocation adds latency and a new failure point. More tools doesn’t mean a better product — if anything, the principle should be “don’t invoke unless necessary.” There’s no point routing a simple question through three tools just to produce an answer that’s worse than a direct response.
How I Write Boundary Conditions #
My current approach is to treat boundary conditions as equal in importance to the main flow — not as a supplementary section at the end of the document, but as rules embedded into each decision node.
Specifically, I focus on these categories:
- Insufficient information: What’s the clarification strategy, and how to handle it after N rounds of failed clarification
- Tool failure: Degradation plan for each tool
- High-risk actions: Which operations require confirmation, which can execute automatically, and which must never execute without explicit approval
- Data conflicts: When multiple sources contradict each other, which one to trust — or whether to surface the conflict to the user
- Hallucination control: Whether the model is allowed to generate content without sufficient evidence, and how to inform the user if not
These rules may seem tedious to write, but they’re the line between a controllable Agent and an uncontrollable one.
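One way to keep these rules next to the decision nodes rather than in a trailing chapter is to treat them as per-category configuration. A hypothetical sketch — every key and value below is an example, not a recommendation:

```python
# Hypothetical boundary rules, one entry per category, kept alongside
# the decision nodes they govern. All names/values are illustrative.

BOUNDARY_RULES = {
    "insufficient_info": {"max_clarification_rounds": 2,
                          "after_failure": "degrade_gracefully"},
    "tool_failure":      {"knowledge_base": "fall_back_to_search",
                          "ticketing_system": "save_draft_for_retry"},
    "high_risk":         {"delete_record": "require_explicit_approval",
                          "read_report": "auto_execute"},
    "data_conflict":     {"default": "prefer_system_of_record",
                          "otherwise": "surface_conflict_to_user"},
    "hallucination":     {"answer_without_evidence": False,
                          "user_notice": "state_that_evidence_is_missing"},
}
```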
The Complete PRD Structure #
After going through this process, here’s the structure I use for Agent PRDs:
Scenario definition: Why does this scenario need an Agent instead of a traditional feature? If a task has extremely stable rules, highly structured inputs, and doesn’t require multi-round judgment, it probably doesn’t need an Agent[^3]. This section filters out a lot of pseudo-requirements.
Intent breakdown: A structured table listing possible intents for each scenario, along with trigger conditions, priorities, and handling methods.
Decision paths: The expanded version of the Mermaid diagram above — each decision node with clearly defined conditions and branch directions.
Tool invocation rules: Trigger conditions, prohibition conditions, call order, and failure degradation.
Boundary conditions: Equal in importance to the main flow, embedded into each node rather than in a separate chapter.
Result definition: What counts as complete, partially complete, and failed-but-with-usable-intermediate-results.
Evaluation metrics: Intent recognition accuracy, tool invocation success rate, task completion rate, human takeover rate, result adoption rate — not just DAU.
Closing Thoughts #
Looking back, the fundamental difference between traditional PRDs and Agent PRDs is this: traditional PRDs describe how a system behaves across different states; Agent PRDs describe how a system acts across different decisions. One is about static pages and flows; the other is about dynamic intents and decisions.
Once you understand that, the writing approach naturally changes. If you’re working on an Agent product, try running your PRD through this checklist: are the intents clearly broken down? Are tool invocation conditions defined? Are boundary conditions covered? If you can’t answer one of these, the problem probably isn’t the document — it’s that the product design itself isn’t fully thought through yet.