In fast-moving product teams, resources for research, design iteration, and development are always constrained. There are often many research questions, ideas, and potential features, but not all deserve the same investment. The challenge is:
How to decide when to invest heavily in research and when to move fast with design or experimentation
How to avoid spending time on things that are either well understood (needing little research) or low consequence (carrying little risk)
How to align prioritization with both user needs and business impact, while keeping cost, risk, and speed in view
At Travelport, as research matured, we faced a new challenge: not everything needed deep research. Sometimes teams just needed to ship and learn. Other times, getting it wrong would be expensive.
But how do you decide? Without a shared framework, the decision became political: the loudest voice or the most senior person would win.
Our answer was a simple two-by-two. The framework has two axes:
X-axis: Problem Clarity (Do we deeply understand the user need?)
Y-axis: Risk (What happens if we get this wrong?)
Based on where a given research or design task sits along those two axes, it falls into one of four quadrants, each implying a different approach: low-clarity, high-risk work is Research Heavy, for example, while high-clarity, low-risk work is Ship & Measure. This directly mirrors Pendo's prioritization framework (pendo.io).
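To make the mapping concrete, here is a minimal sketch of the quadrant assignment, assuming 1-to-5 scores and a midpoint threshold. The labels for the two remaining quadrants and the example tasks are hypothetical; in practice, the rating happened in workshops (described below), not in code.

```python
from dataclasses import dataclass

MIDPOINT = 3  # illustrative threshold on a 1-5 scale


@dataclass
class Task:
    name: str
    clarity: int  # X-axis: how well do we understand the user need? (1-5)
    risk: int     # Y-axis: how costly is getting it wrong? (1-5)


def quadrant(task: Task) -> str:
    low_clarity = task.clarity < MIDPOINT
    high_risk = task.risk >= MIDPOINT
    if low_clarity and high_risk:
        return "Research Heavy"         # interviews and deeper studies first
    if not low_clarity and not high_risk:
        return "Ship & Measure"         # prototype fast, learn from live metrics
    if low_clarity:
        return "Lightweight Discovery"  # hypothetical label: cheap, quick learning
    return "Validate Before Launch"     # hypothetical label: test the solution, not the need


# Invented example tasks, scored the way a team might in a workshop.
backlog = [
    Task("Redesign the fare-comparison view", clarity=2, risk=5),
    Task("Rename a settings label", clarity=5, risk=1),
]
for task in backlog:
    print(f"{task.name} -> {quadrant(task)}")
```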
Lean UX also encourages similar thinking: forming hypotheses, building minimum viable artifacts, validating assumptions quickly, and iterating. The goal is to reduce wasted work.
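For reference, one common phrasing of a Lean UX hypothesis statement, written out as a template. The exact wording varies by team, and the filled-in values below are invented for illustration.

```python
# One common Lean UX hypothesis phrasing (wording varies by team).
# Filling in the blanks forces a team to separate what it knows
# from what it merely assumes.
HYPOTHESIS = (
    "We believe that {solution} for {audience} will achieve {outcome}. "
    "We will know we are right when we see {signal}."
)

print(HYPOTHESIS.format(
    solution="an inline fare-rules summary",  # hypothetical example values
    audience="agency booking users",
    outcome="fewer abandoned bookings",
    signal="a measurable drop in fare-rule support tickets",
))
```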
Here are the concrete steps we took to apply this prioritization in our organization:
Audit & Mapping: Collected a backlog of design/research requests, feature ideas, and hypothesis statements. For each, assessed the current level of problem clarity (the evidence we already had) and risk (the impact if we got it wrong, plus the cost of rework).
Quadrant Assignment: Mapped each task into one of the four quadrants.
Quadrant Guidelines: Defined, for each quadrant, which kinds of tasks get which level of effort, who owns them (design, research, product), and which templates or processes to follow.
Lean Hypothesis Workshops: Before starting work, we held workshops where cross-functional teams (PMs, designers, researchers) stated hypotheses and assumptions, separated what they knew from what they didn't, and rated clarity and risk.
Design / Research Time Allocation: Based on the quadrant mapping, allocated resources (researcher time, design reviews) according to priority and risk. For example, Research Heavy items were scheduled for user interviews and deeper studies; Ship & Measure items were prototyped quickly and then evaluated against post-launch metrics.
Feedback & Metrics: After implementation, checked outcomes: Did the item perform as expected? If assumptions were incorrect, what rework was needed? Did user satisfaction, usage, and other metrics align with expectations? Used these answers to refine future clarity/risk estimates (see the sketch below).
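As a sketch of that last feedback loop, one simple calibration check is to count how often items rated as well understood still needed rework. The log format and example items here are assumptions for illustration; in practice the data would come from post-launch reviews.

```python
from dataclasses import dataclass


@dataclass
class Outcome:
    name: str
    rated_clarity: int   # pre-work rating (1-5)
    rated_risk: int      # pre-work rating (1-5)
    needed_rework: bool  # did incorrect assumptions force rework?


# Hypothetical retro log, hard-coded only for the sketch.
log = [
    Outcome("Checkout copy tweak", rated_clarity=5, rated_risk=1, needed_rework=False),
    Outcome("New booking filter", rated_clarity=4, rated_risk=2, needed_rework=True),
    Outcome("Fare rules redesign", rated_clarity=2, rated_risk=5, needed_rework=False),
]

# Items rated as well understood (clarity >= 4) that still needed rework
# are evidence that the team overestimates clarity.
overconfident = [o for o in log if o.rated_clarity >= 4 and o.needed_rework]
print(f"Clarity overestimated on {len(overconfident)} of {len(log)} items:")
for o in overconfident:
    print(f"  - {o.name} (rated clarity {o.rated_clarity})")
```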
We didn't just present this in a slide deck. We ran workshops with product teams to practice using it. Now, when someone asks for research, we start with: "Let's map this on the framework together."
Research hours focused on the 20% of decisions that mattered most. And most importantly? Teams felt empowered to make informed decisions about when to do research, not just how.
More efficient investment: Teams stopped doing "research theater" (going through the motions because it felt like they should). We reduced the time spent doing deep research on low-risk, well-understood items. That freed up researcher/designer time for higher-risk, less-understood problems.
Improved decision quality: Because teams had to articulate what was not known and what evidence existed, alignment improved, and there were fewer cases where features missed user needs.
Faster iterations: For items in the low-risk / high-clarity quadrant, we shipped faster and measured live, which enabled learning sooner.
Reduced rework: Because higher-risk items were researched first, there were fewer surprises later in development.
Data-driven prioritization culture: The framework created a shared language (risk, clarity) and visibility into why tasks were prioritized. Prioritization became more objective and less dependent on opinion.
Research is about relationships, not reports: The best insights come from trust, with users, with teams, with stakeholders. I spend as much time building those relationships as I do analyzing data.
Estimating clarity and risk isn’t always easy: teams often overestimate clarity or underestimate hidden risks. Calibration and review help.
Keep it lightweight: the mapping/quadrant approach works only if the process doesn't become a bureaucratic blocker. Use templates and fast workshops.
Use feedback loops: post-launch validation or partial experiments are essential to refine understanding.
Cross-functional participation matters: including PMs, engineers, designers, and researchers together ensures risk and clarity are assessed from multiple points of view.
Not all tasks are equal: sometimes business or regulatory urgency forces you to take on more risk; in those cases, build in mitigation (e.g., a progressive rollout, monitoring, a rollback plan), as sketched below.
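As a minimal sketch of that kind of mitigation, here is one shape a percentage-based progressive rollout with a rollback switch can take. The flag mechanism and names are assumptions for illustration, not a specific feature-flag product.

```python
from zlib import crc32

ROLLOUT_PERCENT = 10  # start small; raise only while monitoring stays healthy
KILL_SWITCH = False   # flip to True to roll the feature back instantly


def feature_enabled(user_id: str) -> bool:
    """Deterministically bucket users so each one gets a stable experience."""
    if KILL_SWITCH:
        return False
    bucket = crc32(user_id.encode("utf-8")) % 100
    return bucket < ROLLOUT_PERCENT


# Roughly 10% of users see the risky change while metrics are watched.
print([feature_enabled(f"user-{i}") for i in range(5)])
```

Capping exposure and keeping an instant rollback bounds the cost of being wrong, which is what makes taking the higher risk deliberately acceptable.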