The Customer Insight to Product Feature Pipeline: A Chief Evangelist's Framework
Translating market research into actionable product decisions
Guest author: Arianne Walker, Chief Evangelist, Amazon Smart Vehicles
Hey Warblers,
Most product teams are flying blind.
They have terabytes of user data, mountains of feature requests, and endless stakeholder opinions. But when it comes to answering the fundamental question, "What should we build next?", they're either paralyzed by analysis or simply choose to build whatever's most feasible.
My team faced this exact problem at Amazon. The automotive industry was shifting rapidly - electric vehicles, autonomous features, connected services. Customer expectations were evolving faster than product roadmaps could keep up.
We'd conducted over a dozen user studies in the past few years. Customer interviews, focus groups, usability testing, competitive analysis. Thousands of insights scattered across Slack channels, emails, and docs.
But when leadership asked, "Do we have product-market fit?" the answer was uncomfortably vague.
Over the course of a year, we transformed what we knew from customer research into a data-driven product roadmap that helped the team prioritize what actually mattered to end customers. We finally knew what to build and why.
Our secret? A systematic pipeline that turns scattered customer insights into ranked product features.
The Research Reality Check
Our wake-up call came during a weekly business review. The VP asked a simple question: "Based on all our customer research, what's the single most important thing we should build next?"
The room went silent.
We'd been so focused on what our automotive partners wanted and needed that we hadn't done a systematic accounting of what end customers actually needed. We'd spent more time talking about registration pain points than about the features customers were dying to have in their vehicles.
The problem wasn't lack of research. It was the absence of a systematic way to translate insights into product strategy. Most companies treat customer research as an information-gathering exercise rather than a decision-making tool.
Traditional market research tells you what happened. The Jobs-to-be-Done framework tells you what to build next.
The Jobs-to-be-Done Discovery Process
We reframed the fundamental question. Instead of "What do customers want?" we asked "What job are customers trying to get done?"
In the automotive context, this shift was revelatory. Customers didn't want "better infotainment systems"; they wanted to "stay connected to work while commuting safely." They didn't want "more autonomous features"; they wanted to "get to their destination without stressing about traffic."
We applied this approach in three phases, each building on the previous one.
Phase 1: Mining Existing Research
Rather than starting fresh, we systematically reviewed two years of customer studies. But instead of looking for feature requests or pain points, we looked for underlying jobs. We created a customer experience outcome framework that let us identify each job-to-be-done, pull supporting evidence from existing research, and map it to potential tasks (shorthand labels for the jobs-to-be-done).
We found feedback like: "When I'm driving to a new restaurant, I want to find it quickly without getting distracted, so I can focus on driving safely and arrive on time." This helped us identify the core job: get to my destination safely and on time.*
From another study: "When I get in my car on a hot day, I want the temperature to be comfortable immediately, so I can focus on my commute rather than adjusting settings." The underlying job: be comfortable in my vehicle.
This re-analysis revealed over 30 distinct jobs customers were trying to accomplish while driving. The team had identified features before, but not the jobs-to-be-done. This shift allowed us to understand customer needs without immediately jumping to technological solutions. It also meant we didn't assume current technology was actually supporting the jobs-to-be-done.
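To make the mining step concrete, here's a minimal sketch of how each piece of evidence could be captured as a structured record. The dataclass, field names, and study ID below are illustrative assumptions, not the actual tooling we used:

```python
from dataclasses import dataclass, field

@dataclass
class JobRecord:
    """One piece of research evidence mapped to a job-to-be-done (illustrative schema)."""
    situation: str   # "When I'm driving to a new restaurant..."
    motivation: str  # "...I want to find it quickly without getting distracted..."
    outcome: str     # "...so I can focus on driving safely and arrive on time."
    job: str         # shorthand label for the underlying job
    sources: list[str] = field(default_factory=list)  # studies where the evidence appeared

record = JobRecord(
    situation="driving to a new restaurant",
    motivation="find it quickly without getting distracted",
    outcome="focus on driving safely and arrive on time",
    job="get to my destination safely and on time",
    sources=["2022-usability-study"],  # hypothetical study ID
)
```

Tagging every quote this way is what makes the later gap analysis possible: the same shorthand job label accumulates evidence across many studies.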
Phase 2: Targeted Research for Gap Areas
The mining process also revealed jobs that seemed important but lacked sufficient research evidence. We designed targeted studies to explore these gaps, always framing questions around jobs-to-be-done rather than features.
Instead of asking "What new features would you like?", we said, "Tell me about a drive where you wanted to get work done while on the go."
This targeted research uncovered additional jobs, bringing our total to over 40 distinct customer needs.
Phase 3: Language Validation
To prepare for quantitative validation, we needed customer-friendly language rather than internal jargon. We might talk about "communicating with people outside the car" while customers described it as "making a phone call or sending a text message."
We conducted language validation sessions with over 30 customers, testing job descriptions to ensure our quantitative measurement would actually measure what we thought it was measuring.
This validation refined job statements into customer-friendly language that could be used for more quantitative measurement.
The Gap Analysis System
With over 40 validated jobs identified, we needed a systematic way to find the biggest opportunities.
Importance Research
We surveyed over 1,000 customers, asking them to rate how frequently they tried to complete each job while in their vehicles.
While there were a few surprises, jobs generally fell where we expected. But now we had quantitative measurement instead of just gut instinct.
Satisfaction Research
For each job-to-be-done, customers rated how satisfied they were with their current ability to accomplish it in the car.
Here we found real surprises. Customers were more satisfied in some areas than expected and less satisfied in others, helping us rethink where opportunities for improvement actually existed.
The Opportunity Matrix
We created a matrix with satisfaction on the X-axis and frequency on the Y-axis. Jobs in the upper-left quadrant - high frequency, low satisfaction - represented the biggest product opportunities.
This process identified high-opportunity jobs-to-be-done that current technology wasn't solving well. Looking at data this way let us be clear and specific about jobs the product team should prioritize. Several of these were jobs we'd never specifically targeted, despite years of customer research.
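To make the quadrant logic concrete, here's a rough sketch; the jobs, ratings, 1-10 scale, and cut points are invented for illustration:

```python
# Hypothetical survey results: mean frequency and satisfaction per job (1-10 scale).
jobs = {
    "get to my destination safely and on time": {"frequency": 8.9, "satisfaction": 4.1},
    "be comfortable in my vehicle":             {"frequency": 9.2, "satisfaction": 7.8},
    "get work done while on the go":            {"frequency": 6.5, "satisfaction": 3.2},
}

FREQ_MID, SAT_MID = 5.5, 5.5  # quadrant cut points; tune to your scale

def quadrant(freq: float, sat: float) -> str:
    """Classify a job on the satisfaction (X) vs. frequency (Y) matrix."""
    if freq >= FREQ_MID and sat < SAT_MID:
        return "high opportunity"  # upper-left: frequent but poorly served
    if freq >= FREQ_MID:
        return "protect"           # upper-right: frequent and well served
    if sat < SAT_MID:
        return "monitor"           # lower-left: rare and poorly served
    return "deprioritize"          # lower-right: rare and well served

# Rank jobs by the gap between frequency and satisfaction, biggest gap first.
for job, s in sorted(jobs.items(), key=lambda kv: kv[1]["satisfaction"] - kv[1]["frequency"]):
    print(f"{quadrant(s['frequency'], s['satisfaction']):>16}  {job}")
```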
The Feature Prioritization Model
Raw opportunity scores alone weren't enough to build a roadmap. Once we had sketched features that would solve high-priority jobs-to-be-done, we needed to factor in development complexity, strategic fit, and competitive differentiation.
We created a weighted scoring model combining customer opportunity with business feasibility. The weights below are examples; adjust them to your organization's needs.
Customer Impact Score (40% weight): Based on frequency rating and satisfaction gap
Development Feasibility (25% weight): Engineering effort and technical risk assessment
Strategic Alignment (20% weight): Fit with product vision, competitive positioning, and automaker input
Resource Requirements (15% weight): Team capacity and budget constraints
Each potential feature received scores across all four dimensions, creating a composite priority score. This prevented either pure customer demand or pure engineering preferences from dominating roadmap decisions.
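Here's a minimal sketch of that composite arithmetic using the example weights above; the feature names and dimension scores are hypothetical:

```python
# Example weights from the model above; adjust to your organization.
WEIGHTS = {
    "customer_impact": 0.40,      # frequency rating and satisfaction gap
    "feasibility": 0.25,          # engineering effort and technical risk
    "strategic_alignment": 0.20,  # product vision, positioning, automaker input
    "resource_fit": 0.15,         # team capacity and budget constraints
}

def priority_score(scores: dict[str, float]) -> float:
    """Composite priority: weighted sum of the four dimension scores (each 0-10)."""
    assert scores.keys() == WEIGHTS.keys(), "score every dimension exactly once"
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

# Hypothetical features scored by the team on each dimension.
features = {
    "hands-free messaging":   {"customer_impact": 9, "feasibility": 6,
                               "strategic_alignment": 8, "resource_fit": 7},
    "cabin pre-conditioning": {"customer_impact": 7, "feasibility": 8,
                               "strategic_alignment": 6, "resource_fit": 8},
}

for name, dims in sorted(features.items(), key=lambda kv: -priority_score(kv[1])):
    print(f"{priority_score(dims):.2f}  {name}")
```

Because no single dimension carries a majority of the weight, neither pure customer demand nor pure engineering preference can dominate the ranking on its own.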
The Validation Loop
While jobs-to-be-done should remain relatively stable over time, we knew we needed to measure across customer cohorts in different countries and repeat the measurement annually to understand when frequency or satisfaction changed.
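A small sketch of what that annual check might look like; the cohort data and the one-point threshold are invented for illustration:

```python
# Hypothetical annual tracking results for one cohort (mean ratings, 1-10 scale).
year_1 = {"get work done while on the go": {"frequency": 6.5, "satisfaction": 3.2}}
year_2 = {"get work done while on the go": {"frequency": 6.8, "satisfaction": 5.9}}

THRESHOLD = 1.0  # flag shifts of one scale point or more

for job in year_1:
    for metric in ("frequency", "satisfaction"):
        delta = year_2[job][metric] - year_1[job][metric]
        if abs(delta) >= THRESHOLD:
            print(f"{job}: {metric} moved {delta:+.1f}; revisit this job's priority")
```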
Your Customer Insight Pipeline Action Plan
This isn't a quick fix. You and your team have to commit to this type of planning. But you can start today.
This Week: Identify a cross-organizational team that can help audit your existing customer research for jobs-to-be-done insights.
This Quarter: Carve out time to identify customer jobs-to-be-done in your product area and spot any suspected jobs that lack sufficient research validation.
Next Quarter: Conduct additional research on suspected jobs to fill gaps in understanding, then run language validation with 20-30 customers on your job statements.
The Following Quarter: Set up frequency vs. satisfaction tracking surveys with a larger customer base, then create an opportunity matrix plotting frequency against current satisfaction and discuss what features might solve priority jobs.
In Time for Annual Planning: Run feature prioritization using weighted scoring, build a comprehensive feature pipeline based on opportunity analysis, and don't forget to continue this loop annually.
The Uncomfortable Truth
You may have more customer data than ever, but less clarity about what actually matters. Every new study adds information but doesn't always add insight. Every feature request gets equal weight regardless of underlying importance, or gets weighted by inconsistent measures.
Your customer insights are worthwhile, but without a systematic pipeline to translate insights into decisions, they remain insights without action.
This is your opportunity to discover how to solve the jobs customers care most about solving. Start building the pipeline that turns research into roadmap reality.
Your customers are trying to get important jobs done. Help them, and they'll help your business succeed.
– Warbler (with Arianne Walker)
*No actual customer quotes are included due to privacy, but examples represent the type of feedback we heard.