When I joined Travelport in 2022, the company was a delivery factory. Teams shipped features, moved to the next thing, and rarely looked back to see if what they built actually worked for users. Design was the only function pushing for post-launch measurement and iteration—but without data infrastructure, consistent methodology, or organizational buy-in, those conversations went nowhere.
The CSAT surveys that existed collected just 55 responses per cycle, with surface-level questions that couldn't tell us why scores were low or what to fix. Behavioral analytics didn't exist. Product teams had no idea how users actually interacted with what they'd built. And without measurement, there was no iteration—just endless feature delivery with no sense of whether we were making things better or worse.
The challenge wasn't just implementing tools. It was changing culture: convincing a delivery-focused organization that measuring outcomes matters as much as shipping features.
Selected and deployed behavioral analytics (Fullstory)
We didn't have any behavioral analytics when I started. I made the case to leadership, selected Fullstory as our platform, and worked with engineers to install it across every product. This wasn't just a plug-and-play situation—it required collaboration with engineering teams who were already stretched thin, prioritization conversations about implementation timelines, and education about why this mattered.
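For concreteness, here's a minimal sketch of what "install it across every product" meant at the code level, assuming the v1-style @fullstory/browser package. The org ID, event names, and property names are placeholders for illustration, not our real taxonomy.

```typescript
import * as FullStory from '@fullstory/browser';

let installed = false;

// Initialize Fullstory once per app; safe to call from multiple entry points.
export function initAnalytics(orgId: string): void {
  if (installed) return;
  FullStory.init({ orgId });
  installed = true;
}

// Tag the session so dashboards can segment by product and user role.
// Classic Fullstory custom vars carry type suffixes like _str and _bool.
export function identifyUser(userId: string, product: string, role: string): void {
  FullStory.identify(userId, { product_str: product, role_str: role });
}

// Record a named workflow step; events like this feed the funnel dashboards.
export function trackStep(funnel: string, step: string, succeeded: boolean): void {
  FullStory.event('Workflow Step', {
    funnel_str: funnel,
    step_str: step,
    succeeded_bool: succeeded,
  });
}
```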
Hired and trained a Quantitative UX Researcher
I created our first dedicated quantitative research role and hired someone who could own this work strategically, not just execute surveys. Together, we built the practice: defining what metrics mattered, how to measure them, and how to translate findings into action.
Redesigned CSAT surveys for actionable insights
We didn't just increase sample size (from 55 to 1,100+ responses)—we redesigned the surveys entirely. Better targeting, clearer questions, and analysis that could tell teams what was broken and where to focus. We also integrated AI-powered text analysis in Qualtrics to handle open-ended responses at scale, turning hundreds of comments into thematic insights.
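For readers wondering what the satisfaction percentages in the results section mean: we report CSAT as a top-two-box score, the share of respondents rating 4 or 5 on a 5-point scale. A tiny illustrative helper showing the arithmetic (the real analysis ran in Qualtrics):

```typescript
// Top-two-box CSAT: percent of ratings that are 4 or 5 on a 5-point scale.
export function csatScore(ratings: number[]): number {
  if (ratings.length === 0) return 0;
  const satisfied = ratings.filter((r) => r >= 4).length;
  return Math.round((satisfied / ratings.length) * 100);
}

// csatScore([5, 4, 2, 5, 3, 4, 1, 5, 4, 5]) === 70, reported as "70% satisfaction"
```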
Created tracking dashboards aligned with product OKRs
For each product and major functionality, we built dashboards that tracked the metrics teams actually cared about: task completion, error rates, feature adoption, user satisfaction. These weren't generic reports—they were tailored to each team's objectives and updated continuously.
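As a sketch of what those dashboards computed under the hood, here's the shape of the funnel math, with hypothetical event and field names rather than our production schema:

```typescript
// Illustrative event shape; field names are hypothetical.
interface TrackedEvent {
  product: string;
  funnel: string;
  type: 'start' | 'complete' | 'error';
}

interface FunnelMetrics {
  started: number;
  completionRate: number; // completes / starts
  errorRate: number; // errors / starts
}

// Aggregate raw events into the completion and error rates a dashboard shows.
export function funnelMetrics(events: TrackedEvent[], funnel: string): FunnelMetrics {
  const inFunnel = events.filter((e) => e.funnel === funnel);
  const started = inFunnel.filter((e) => e.type === 'start').length;
  const completed = inFunnel.filter((e) => e.type === 'complete').length;
  const errored = inFunnel.filter((e) => e.type === 'error').length;
  return {
    started,
    completionRate: started ? completed / started : 0,
    errorRate: started ? errored / started : 0,
  };
}
```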
Trained PMs on defining and tracking metrics
We ran workshops teaching product managers how to define basic metrics, what "good" looks like, and when to ask research for help with deeper analysis. The goal wasn't to make everyone a quantitative researcher—it was to make measurement literacy a shared capability.
Established rituals for measurement reporting
We created quarterly review sessions where we presented insights to Product Leadership and the Senior Leadership Team (SLT). This wasn't just about sharing data—it was about making measurement visible at the executive level, reinforcing that this work matters.
Required metrics definition before new product launches
New products now can't launch without defining success metrics upfront. This simple gate forced teams to think about outcomes, not just outputs. What does success look like? How will we know if this worked? These questions became standard, not optional.
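To make the gate concrete, here's roughly what a pre-launch metrics definition asked for, sketched as a typed checklist. The fields are illustrative of the questions we posed, not a formal spec we enforced in code:

```typescript
// What "define success metrics upfront" asked of a team, as a typed checklist.
interface MetricTarget {
  name: string; // e.g. "task completion", "CSAT"
  baseline?: number; // current value, if one exists
  target: number; // the number that would count as success
  source: 'fullstory' | 'qualtrics' | 'data-analytics';
}

interface LaunchMetricsDefinition {
  product: string;
  hypothesis: string; // what success looks like, in one sentence
  successMetrics: MetricTarget[]; // how we'll know it worked
  reviewDate: string; // when we'll look at the numbers together
}
```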
Partnered with Data Analytics and CX teams
We created synergies with data analytics (who owned business metrics) and Customer Experience teams (who had support ticket insights). Instead of working in silos, we connected behavioral data with business outcomes and qualitative customer feedback. This gave us the full picture: what users do (behavioral analytics), how they feel (CSAT), why they struggle (qualitative research), and business impact (data analytics).
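Sketched as a data shape, the "full picture" we assembled per product looked something like this (the fields are illustrative):

```typescript
// One product's joined view across the four sources named above.
interface ProductHealth {
  product: string;
  behavioral: { completionRate: number; errorRate: number }; // Fullstory: what users do
  attitudinal: { csat: number; topThemes: string[] }; // Qualtrics: how they feel
  qualitative: { knownPainPoints: string[] }; // research + CX tickets: why they struggle
  business: { adoption: number; revenueImpact?: number }; // data analytics: impact
}
```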
Collaborated across the organization
Making measurement stick required buy-in from everyone:
Engineers to install survey pop-ups, implement tracking, and work in the same tools we did
Program Managers to give us slots in demos and "Measure & Learn" events
Product Managers to define OKRs and create space for follow-up qualitative research when we found issues
Designers to think about metrics during the design process and include them in "definition of done" for Jira tasks
Data Analysts to connect behavioral metrics with business metrics in their reports
Building a measurement culture means making it impossible for teams to ignore whether their work actually helps users. At Travelport, we went from a delivery factory to an organization where measurable OKRs exist and teams define success and experience metrics. That shift didn't happen because we installed Fullstory or Qualtrics—it happened because we made measurement valuable, accessible, and celebrated.
This wasn't a six-month project with a clean ending. It was years of incremental progress, setbacks, and persistence. Early on, teams saw measurement as "extra work" that slowed them down. We had to prove value repeatedly: showing how metrics helped them prioritize, avoid rework, and celebrate wins when things improved.
Some teams embraced measurement immediately. Others resisted for months. We couldn't force cultural change—we had to meet teams where they were, demonstrate value on their terms, and celebrate early adopters who became internal advocates.
Early dashboards showed data but didn't always drive action. We learned that numbers alone don't change behavior: you need clear recommendations, prioritized next steps, and follow-up to ensure insights become action.
Installing Fullstory across legacy products wasn't straightforward. Some products had technical constraints. Some teams had competing priorities. We had to negotiate, compromise, and phase rollouts strategically.
Every step needed collaborators. Want to add a survey pop-up? Need engineering time. Want metrics in OKRs? Need PM alignment. Want designers to care? Need to show how metrics validate their work. Building measurement culture is as much relationship-building as it is methodology.
CSAT improvements:
Product A - Main point of sale: 36% → 65% satisfaction
Product B - Legacy plugin: 15% → 51% satisfaction
Survey responses: 55 → 1,100+ per cycle
Behavioral insights drove product improvements:
Identified drop-off points in key funnels
Improved error messages and error handling
Reduced friction in critical workflows
Organizational adoption:
New products now define metrics before launch (not after)
PMs know how to define basic metrics and when to ask for research help
Experience metrics (CSAT, ease of use, task completion) are tracked as OKRs
Measurement became visible at the executive level
Our CEO references CSAT improvements in company communications. Product Leadership reviews experience metrics quarterly. This isn't a design team initiative anymore—it's how the company evaluates success.
Teams shifted from "did we ship it?" to "did it work?"
The conversation changed. Product reviews now include: What did we learn? What improved? What's still broken? Teams celebrate metric improvements as much as feature launches.
Cross-functional collaboration became standard
Research, Data Analytics, and CX teams now work together routinely. Insights flow across functions. Behavioral data informs qualitative research priorities. Support ticket trends validate quantitative findings.
Measurement as a shared capability
PMs can define and track basic metrics themselves. Designers include measurement in their process. Engineers understand why tracking matters. Research hasn't become less important—we've become more strategic because we're not the only ones who care about measurement.
We started this work in 2022. We're still refining it in 2025. Different teams have different maturity levels. That's okay. Sustainable change happens through persistent, incremental wins—not grand transformations.
Early on, we tried mandating measurement. It didn't work. What worked was finding early adopters, proving value on their products, and letting their success stories convince skeptical teams. Internal advocates are more powerful than top-down mandates.
Teams don't care about CSAT scores abstractly. They care when you show them why the score is low (error messages confuse users) and what they can do about it (here are three quick fixes). Connect metrics to action, always.
Fullstory and Qualtrics are powerful, but they don't change culture by existing. What changed culture was quarterly reviews, demo presentations, Slack posts celebrating improvements, and workshops where teams practiced defining metrics. Tools enable measurement; rituals embed it.
We didn't wait for perfect dashboards or complete behavioral tracking before reporting insights. We started with what we had, improved iteratively, and demonstrated value early. Waiting for perfect would have meant never starting.
Some teams initially feared measurement would expose failures. We reframed it: measurement shows where to improve, celebrates wins, and protects teams from building the wrong things. When teams saw metrics as their tool (not a judgment on their work), adoption accelerated.
Organizations default to shipping features because it feels like progress. Measurement and iteration feel slower, harder, less tangible. Changing that requires leadership commitment, not just research enthusiasm. We succeeded because executives eventually saw measurement as strategic, not operational.
Building measurement culture is as much organizational psychology as methodology. You need the right tools (behavioral analytics, survey platforms, dashboards), but success depends on relationships, rituals, and proving value repeatedly.
Start small, prove value, then scale. We didn't try to transform every team at once. We picked early adopters, demonstrated impact, and let success stories spread organically.
Make measurement a shared capability, not a centralized service. When teams can define and track basic metrics themselves, research becomes strategic (deep dives, complex questions) rather than operational (dashboard maintenance).
Connect behavioral, attitudinal, and business metrics. Behavioral analytics shows what users do. CSAT shows how they feel. Qualitative research explains why. Business metrics show impact. You need all four.
Rituals embed culture more than tools. Quarterly reviews, demo presentations, and Slack celebrations made measurement visible and valued. Tools made it possible; rituals made it stick.
Patience and persistence matter. Culture change takes years, not months. Different teams move at different speeds. That's normal. Keep demonstrating value, celebrating wins, and supporting teams that want to improve.