Clawfu-skills product-discovery
```shell
# Clone the full repository
git clone https://github.com/guia-matthieu/clawfu-skills

# Or install just this skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/guia-matthieu/clawfu-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/product/product-discovery" ~/.claude/skills/guia-matthieu-clawfu-skills-product-discovery && rm -rf "$T"
```
skills/product/product-discovery/SKILL.md

Product Discovery
Build products customers actually want. Apply Marty Cagan's Silicon Valley-tested framework to discover solutions that are valuable, usable, feasible, and viable.
When to Use This Skill
- New product development when validating what to build
- Feature prioritization to ensure you're solving real problems
- Pivot decisions when current direction isn't working
- Team alignment on what problems to solve
- Risk reduction before committing development resources
- Continuous discovery to maintain product-market fit
Methodology Foundation
| Aspect | Details |
|---|---|
| Source | Marty Cagan - Inspired (2008, 2018) and Empowered (2020) |
| Core Principle | "Fall in love with the problem, not the solution. The best product teams discover what customers need, not just what they ask for." |
| Why This Matters | Most products fail not because they're built poorly, but because they solve the wrong problem. Discovery ensures you build the right thing before you build the thing right. |
What Claude Does vs What You Decide
| Claude Does | You Decide |
|---|---|
| Structures discovery frameworks and risk assessments | Which problems to pursue |
| Suggests interview questions and experiment designs | Which customers to talk to |
| Drafts opportunity assessments and scorecards | Final scoring and prioritization |
| Identifies key assumptions to test | What evidence is convincing |
| Synthesizes research findings | Build, park, or pass decisions |
What This Skill Does
- Frames the four risks - Value, usability, feasibility, viability
- Distinguishes discovery from delivery - Different mindsets, different processes
- Teaches opportunity assessment - Which problems to solve
- Develops prototyping skills - Test ideas before building
- Guides customer research - Learn what customers need (not want)
- Structures continuous discovery - Ongoing learning, not one-time research
How to Use
Assess a Product Opportunity
I'm considering building [feature/product]. Apply product discovery principles to assess this opportunity. Context: [target customer, current state, hypothesis]
Reduce Risk Before Building
We're about to build [feature]. Help me identify the key risks and design tests to address them.
Set Up Continuous Discovery
I want to implement continuous discovery for my product team. Help me design a weekly discovery rhythm.
Instructions
Step 1: Understand the Four Risks
## The Four Product Risks

Every product idea has four risks to address BEFORE building:

### 1. Value Risk

"Will customers buy/use this?"

**Questions:**
- Does this solve a real problem?
- Is the problem painful enough to pay/switch for?
- Will users actually adopt this?

**Tests:**
- Customer interviews
- Demand testing
- Fake door tests
- Concierge MVP

### 2. Usability Risk

"Can customers figure out how to use it?"

**Questions:**
- Is it intuitive?
- Can users accomplish their goals?
- What's the learning curve?

**Tests:**
- Prototype testing
- Usability studies
- Wizard of Oz tests
- A/B tests on UX

### 3. Feasibility Risk

"Can we build this?"

**Questions:**
- Do we have the technology?
- Can we do it in reasonable time?
- What are the technical dependencies?

**Tests:**
- Technical spike
- Proof of concept
- Architecture review
- Build vs. buy analysis

### 4. Viability Risk

"Should we build this?"

**Questions:**
- Does it fit our strategy?
- Can we support/maintain it?
- Is it legal/compliant?
- Does the business model work?

**Tests:**
- Business case
- Stakeholder review
- Compliance review
- Financial modeling
Step 2: Discovery vs. Delivery
## Two Tracks: Discovery and Delivery

### Discovery (Figure out WHAT to build)

**Mindset:**
- Embrace uncertainty
- Test assumptions
- Fail fast and cheap
- Learn over deliver

**Activities:**
- Customer interviews
- Prototyping
- Experiments
- Opportunity assessment

**Outcome:**
- Validated problems
- Tested solutions
- Confidence to build
- Clear success metrics

### Delivery (BUILD it right)

**Mindset:**
- Reduce uncertainty
- Execute efficiently
- Ship quality
- Hit timelines

**Activities:**
- Engineering
- QA
- Launch prep
- Documentation

**Outcome:**
- Working software
- Happy customers
- Business impact
- Technical quality

### The Critical Point

Most teams skip discovery and jump to delivery.

**Result:**
- Build features no one wants
- Waste engineering resources
- Miss market opportunities
- Frustrated team, frustrated customers

**The ratio:** Spend 10-20% of time on discovery to avoid wasting 80-90% of delivery time on the wrong things.
Step 3: Opportunity Assessment
## Assessing Product Opportunities

### The Opportunity Assessment Framework

Before committing to solve a problem, answer:

**1. Is this problem worth solving?**

| Factor | Questions |
|--------|-----------|
| **Frequency** | How often does this problem occur? |
| **Intensity** | How painful is it when it happens? |
| **Willingness** | Will people pay/switch to solve it? |
| **Reach** | How many customers have this problem? |

**Scoring:**
- High frequency + High intensity = Strong opportunity
- Low frequency OR Low intensity = Weak opportunity

**2. Can we solve it effectively?**

| Factor | Questions |
|--------|-----------|
| **Capability** | Do we have the skills/tech? |
| **Fit** | Does it align with our strategy? |
| **Uniqueness** | Can we solve it better than alternatives? |
| **Sustainability** | Can we maintain competitive advantage? |

**3. Should we solve it now?**

| Factor | Questions |
|--------|-----------|
| **Urgency** | Is timing critical? |
| **Resources** | Do we have capacity? |
| **Dependencies** | What else needs to happen first? |
| **Opportunity cost** | What are we NOT doing instead? |

### Opportunity Score Card
```
Opportunity: [Name]

Problem Assessment
- Frequency: [1-5]
- Intensity: [1-5]
- Willingness to pay/switch: [1-5]
- Market size: [1-5]
Problem Score: [Average]

Solution Assessment
- Technical feasibility: [1-5]
- Strategic fit: [1-5]
- Competitive advantage: [1-5]
Solution Score: [Average]

Timing Assessment
- Urgency: [1-5]
- Resource availability: [1-5]
Timing Score: [Average]

Overall: [Problem × Solution × Timing = X]
Recommendation: [Pursue / Park / Pass]
```
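The score card arithmetic can be sketched in a few lines of Python. This is an illustrative helper, not part of Cagan's framework: the Pursue/Park/Pass thresholds are assumptions, and it follows the worked examples later in this document, which multiply problem and solution scores only (omitting the timing factor).

```python
# Sketch of the opportunity score card above. Thresholds and function
# names are illustrative assumptions, not part of Cagan's framework.

def average(ratings):
    """Average a dict of factor -> 1-5 ratings."""
    return sum(ratings.values()) / len(ratings)

def score_opportunity(problem, solution):
    """Return (problem score, solution score, overall, recommendation)."""
    problem_score = average(problem)
    solution_score = average(solution)
    overall = problem_score * solution_score  # timing omitted, as in the examples
    if overall >= 12:
        recommendation = "Pursue"
    elif overall >= 8:
        recommendation = "Park"
    else:
        recommendation = "Pass"
    return problem_score, solution_score, overall, recommendation

# Example: the Team Collaboration ratings from Example 3 below
p, s, overall, rec = score_opportunity(
    {"frequency": 5, "intensity": 4, "willingness": 4, "reach": 3},
    {"feasibility": 4, "strategic_fit": 5, "advantage": 3},
)
```

Run against the three opportunities in Example 3, this reproduces the 16.0 / 8.25 / 6.4 rankings.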
Step 4: Discovery Techniques
## Core Discovery Techniques

### 1. Customer Interviews

**Purpose:** Understand problems, not validate solutions

**Structure:**
1. Context: Understand their current situation
2. Problem: Explore the pain points
3. Impact: How does it affect them?
4. Current solutions: What do they do today?
5. Ideal state: What would "solved" look like?

**Key rules:**
- Ask about past behavior, not future intentions
- Don't pitch, just listen
- Follow the emotion
- Get specific stories

**Questions:**
- "Walk me through the last time this happened..."
- "What did you do? What happened next?"
- "Why was that a problem?"
- "What would have made it better?"

### 2. Prototyping

**Purpose:** Test solutions before building

**Types:**

| Type | Fidelity | Tests | Time |
|------|----------|-------|------|
| **Paper sketch** | Low | Concepts, flow | Hours |
| **Wireframe** | Low-Med | Structure, navigation | Days |
| **Clickable prototype** | Medium | Usability, flow | Days |
| **Wizard of Oz** | High | Full experience | Weeks |

**Principle:** Use the lowest fidelity that tests your hypothesis. Higher fidelity = More time = More risk of attachment.

### 3. Experiments

**Purpose:** Test assumptions with real behavior

**Types:**
- **Fake door:** Button for a feature that doesn't exist
- **Smoke test:** Landing page before building
- **Concierge:** Manual delivery of automated value
- **A/B test:** Compare variations with real users

**Structure:**
1. Hypothesis: "We believe [X]"
2. Test: "We will test by [Y]"
3. Metric: "We will measure [Z]"
4. Success: "[Number] indicates we should proceed"

### 4. Opportunity Solution Trees

**Purpose:** Map problem space to solution space

**Structure:**
```
               [Desired Outcome]
                       |
        ┌──────────────┼──────────────┐
        |              |              |
[Opportunity 1] [Opportunity 2] [Opportunity 3]
        |              |              |
      ┌─┴─┐          ┌─┴─┐          ┌─┴─┐
      |   |          |   |          |   |
    [S1] [S2]      [S1] [S2]      [S1] [S2]
```
- Start with business outcome - Break into opportunities (problems to solve) - Brainstorm solutions for each opportunity - Test solutions, not opportunities
Step 5: Continuous Discovery
## Weekly Discovery Rhythm

### The Cadence

**Monday: Prep**
- Review last week's learnings
- Prioritize this week's questions
- Schedule interviews/tests

**Tuesday-Thursday: Research**
- Customer interviews (2-3 per week minimum)
- Prototype testing
- Data analysis
- Experiment monitoring

**Friday: Synthesis**
- Consolidate learnings
- Update opportunity assessment
- Share with delivery team
- Plan next week

### The Habits

**1. Weekly customer touchpoint**
- Minimum: 2-3 customer conversations/week
- Mix: Prospects, users, churned customers

**2. Assumption tracking**
- List key assumptions
- Design tests for the riskiest ones
- Document learnings

**3. Experiment backlog**
- Always have 2-3 experiments running
- Quick iterations over perfect tests

**4. Cross-functional involvement**
- Engineering in discovery
- Design in customer calls
- PM facilitates, doesn't dictate

### The Anti-Patterns

**Discovery theater:**
- Doing research but not changing decisions
- Fix: Tie discovery directly to roadmap decisions

**Big bang research:**
- Quarterly research sprints instead of continuous
- Fix: Small, frequent over large, infrequent

**Validation bias:**
- Only seeking confirmation
- Fix: Actively try to disprove hypotheses
Examples
Example 1: New Feature Discovery
Input:
"We're thinking of adding a calendar integration to our project management tool. How do I validate this before we build it?"
Output:
Discovery Plan: Calendar Integration
Risk Assessment
| Risk | Level | Reason |
|---|---|---|
| Value | HIGH | Unclear if this solves a real problem |
| Usability | MEDIUM | Calendar UI is complex |
| Feasibility | LOW | Known integrations (Google, Outlook) |
| Viability | LOW | Fits product strategy |
Focus discovery on: Value risk (first), then Usability
Discovery Activities
Week 1: Understand the Problem
Customer Interviews (5-8 interviews)
Target: Current users who manage time-sensitive projects
Questions:
- "Walk me through how you schedule work for your team right now."
- "What happens when deadlines slip? How do you find out?"
- "What role does your calendar play in project management?"
- "What's frustrating about keeping projects and calendars in sync?"
- "Show me the last time you missed something because of calendar/project disconnect."
What you're listening for:
- Is there a real pain? How intense?
- What workarounds do they use today?
- Is this a "nice to have" or "need to have"?
Week 2: Assess Demand
Fake Door Test
- Add "Calendar Sync (Coming Soon)" button to settings
- When clicked: "Thanks for your interest! Join the waitlist."
- Track: Click rate, waitlist conversions
Success criteria:
- >5% of MAU click the button = Strong signal
- 2-5% = Moderate signal
- <2% = Weak signal
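Those thresholds are easy to wire into a small helper when you tally the test results. The function name and counts below are hypothetical, a minimal sketch of the classification above:

```python
def fake_door_signal(clicks, monthly_active_users):
    """Classify fake-door demand using the thresholds above:
    >5% of MAU = strong, 2-5% = moderate, <2% = weak."""
    rate = clicks / monthly_active_users
    if rate > 0.05:
        return "strong"
    if rate >= 0.02:
        return "moderate"
    return "weak"

# e.g. 120 clicks from 2,000 monthly active users is a 6% click rate
signal = fake_door_signal(120, 2000)  # -> "strong"
```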
Existing data analysis:
- How many users mention "calendar" in support tickets?
- What integrations do users currently connect?
- Are users with calendar tools more or less engaged?
Week 3: Test Solutions
Prototype Test (if Week 1-2 show signal)
Low-fidelity prototype showing:
- Calendar sync setup flow
- How synced events appear
- Conflict resolution
Test with 5 users:
- Can they set it up without help?
- Does it match their mental model?
- What's confusing?
Wizard of Oz Test
For 10 users:
- Manually sync their calendar items to projects
- Send them a summary email daily
- Interview after 1 week: Was this valuable?
Decision Framework
| Finding | Decision |
|---|---|
| Strong value signal + usable prototype | Build it |
| Strong value signal + usability issues | Redesign, retest |
| Weak value signal | Don't build, park idea |
| Value for subset only | Consider scope/segmentation |
Timeline
- Week 1: Interviews → Problem understanding
- Week 2: Fake door + data → Demand validation
- Week 3: Prototype → Solution validation
- Week 4: Synthesize → Build decision
Total discovery investment: 4 weeks (~40 hours). Building without discovery risks spending 3-6 months of engineering on the wrong thing.
Example 2: Continuous Discovery Setup
Input:
"I'm a PM at a B2B SaaS startup. We don't do any formal discovery—we just build what customers ask for or what the CEO wants. How do I start continuous discovery?"
Output:
Continuous Discovery Kickstart
Start Small, Build Habits
Don't try to:
- Redesign your entire process
- Stop all current development
- Hire a research team
Do:
- Add one discovery habit per week
- Start with what you control
- Show value quickly
Week 1: Customer Conversation Habit
Action: Schedule 2 customer calls for next week.
Who to talk to:
- 1 customer who recently churned or downgraded
- 1 customer who recently upgraded or is highly engaged
Script: "We're working to make [product] better for people like you. I'd love 20 minutes to understand how you're using it and what we could improve. Not a sales call—just learning."
After each call: Write 3-5 bullet points:
- What problem were they trying to solve?
- What's working? What's not?
- What surprised me?
Week 2: Assumption Tracking
Action: For any feature in development, list the top 3 assumptions.
Example format:
```
Feature: New onboarding flow

Assumptions:
1. Users don't complete onboarding because it's too long
2. Users who complete onboarding retain better
3. Users want to invite teammates during onboarding

Evidence level:
1. Assumption (no evidence)
2. Validated (we have data)
3. Assumption (no evidence)
```
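If the list grows, it helps to keep it in a form you can sort so the least-validated assumptions surface first. A minimal sketch, with hypothetical field names rather than any standard format:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    text: str
    validated: bool = False  # True once we have supporting data

def riskiest_first(assumptions):
    """Unvalidated assumptions sort ahead of validated ones."""
    return sorted(assumptions, key=lambda a: a.validated)

backlog = [
    Assumption("Onboarding is too long"),
    Assumption("Completers retain better", validated=True),
    Assumption("Users want to invite teammates"),
]
ordered = [a.text for a in riskiest_first(backlog)]
# The two unvalidated assumptions come first
```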
Share with team: "Here are our assumptions. Which are we most uncertain about? How could we test them?"
Week 3: Add Interview Question
Action: Add one discovery question to every customer call (support, sales, success).
The question: "What's the biggest challenge you're facing right now that we don't currently help with?"
Collect answers: Shared doc/Slack channel where team posts responses.
Weekly review: "We talked to 8 customers. Here's what we heard about challenges..."
Week 4: Small Experiment
Action: Run one small experiment.
Simple options:
- Survey in-app about a proposed feature
- Fake door for feature idea
- A/B test on onboarding
Share results: "We tested [X]. Here's what we learned: [Y]. This changes our thinking about [Z]."
Building Executive Buy-In
Don't ask for permission. Show value.
After 4 weeks, share:
- "We talked to 8 customers. Here's what we learned."
- "We ran one experiment. It told us [X]."
- "This prevented us from building [feature] that data shows customers don't actually need."
The pitch: "I'm not asking for a big research investment. I'm spending 2-3 hours per week talking to customers and running small tests. It's already helping us prioritize better."
Scaling Up
Once habits are established (4-8 weeks):
- Formalize interview cadence: 2-3/week team standard
- Create opportunity backlog: Problems we might solve
- Link to roadmap: Discovery informs what we build
- Involve engineering: Engineers in customer calls
Example 3: Opportunity Assessment
Input:
"We have three potential features to build: (A) AI-powered suggestions, (B) Team collaboration improvements, (C) Mobile app. How do I assess which to prioritize?"
Output:
Opportunity Assessment: Feature Prioritization
Opportunity A: AI-Powered Suggestions
Problem Assessment
| Factor | Score | Reasoning |
|---|---|---|
| Frequency | 3 | Daily use case |
| Intensity | 2 | Nice to have, not painful without |
| Willingness to pay | 2 | Market expects AI, but is it differential? |
| Market size | 4 | Applies to most users |
| **Problem Score** | **2.75** | |
Solution Assessment
| Factor | Score | Reasoning |
|---|---|---|
| Feasibility | 3 | ML expertise needed, but doable |
| Strategic fit | 4 | Aligns with "intelligent product" vision |
| Competitive advantage | 2 | Easy for others to copy |
| **Solution Score** | **3.0** | |
Key Risks:
- Value: Will suggestions be good enough to use?
- Feasibility: Do we have ML talent?
- Viability: Training data requirements, cost
Opportunity Score: 8.25
Opportunity B: Team Collaboration
Problem Assessment
| Factor | Score | Reasoning |
|---|---|---|
| Frequency | 5 | Multiple times daily for teams |
| Intensity | 4 | Current friction causing workarounds |
| Willingness to pay | 4 | Team pricing tier exists |
| Market size | 3 | Only applies to team accounts |
| **Problem Score** | **4.0** | |
Solution Assessment
| Factor | Score | Reasoning |
|---|---|---|
| Feasibility | 4 | Standard features, known patterns |
| Strategic fit | 5 | Directly supports growth strategy |
| Competitive advantage | 3 | Differentiation possible but not huge |
| **Solution Score** | **4.0** | |
Key Risks:
- Value: Which specific collaboration features matter?
- Usability: Team features can get complex quickly
Opportunity Score: 16.0
Opportunity C: Mobile App
Problem Assessment
| Factor | Score | Reasoning |
|---|---|---|
| Frequency | 3 | Some users want mobile, most desktop |
| Intensity | 4 | Mobile users are very frustrated |
| Willingness to pay | 2 | Expectation, not premium feature |
| Market size | 2 | Only subset of users need mobile |
| **Problem Score** | **2.75** | |
Solution Assessment
| Factor | Score | Reasoning |
|---|---|---|
| Feasibility | 2 | Major effort (iOS + Android + maintain) |
| Strategic fit | 3 | Not core to current positioning |
| Competitive advantage | 2 | Table stakes, not differential |
| **Solution Score** | **2.33** | |
Key Risks:
- Viability: Ongoing maintenance cost
- Feasibility: Native vs. cross-platform decisions
Opportunity Score: 6.4
Summary & Recommendation
| Opportunity | Problem | Solution | Score | Rank |
|---|---|---|---|---|
| Team Collaboration | 4.0 | 4.0 | 16.0 | 1st |
| AI Suggestions | 2.75 | 3.0 | 8.25 | 2nd |
| Mobile App | 2.75 | 2.33 | 6.4 | 3rd |
Recommendation:

1. **Prioritize Team Collaboration**
   - Highest problem intensity
   - Clear strategic fit
   - Feasible to build

2. **Park AI Suggestions for now**
   - Validate value risk first
   - Consider: What specific suggestions would be valuable?
   - Test with simple rules before ML

3. **Deprioritize Mobile App**
   - High effort, limited reach
   - Consider: Progressive web app as interim?
   - Revisit when team collaboration is strong
Next Steps:
- Run 5 interviews focused on team collaboration pain points
- Identify top 3 specific collaboration problems
- Prototype and test before committing to full build
Checklists & Templates
Discovery Kickoff Checklist
## Before Starting Discovery

### Define the Scope
□ What outcome are we trying to achieve?
□ What problem might we solve?
□ Who is the target customer?
□ What's the timeline for decision?

### Identify Assumptions
□ List top 10 assumptions about problem and solution
□ Rank by risk level (if wrong, how bad?)
□ Identify top 3 to test first

### Plan Activities
□ Customer interviews scheduled (minimum 5)
□ Data/analytics to review identified
□ Prototype or experiment designed
□ Success criteria defined

### Align Team
□ Cross-functional team identified
□ Discovery goals shared
□ Calendar blocked for activities
Discovery Summary Template
## Discovery Summary: [Feature/Opportunity]

### Problem Statement
[What problem are we solving? For whom?]

### Research Conducted
- [X] customer interviews
- [X] data analyses
- [X] prototype tests
- [X] experiments

### Key Findings

**What we learned about the problem:**
1.
2.
3.

**What we learned about solutions:**
1.
2.
3.

### Risk Assessment

| Risk | Level | Mitigation |
|------|-------|------------|
| Value | | |
| Usability | | |
| Feasibility | | |
| Viability | | |

### Recommendation
[Build / Don't Build / Need More Discovery]

### If Building, Success Metrics
- Metric 1:
- Metric 2:
- Metric 3:
Skill Boundaries
What This Skill Does Well
- Structuring risk assessments and opportunity scoring
- Applying discovery frameworks (four risks, opportunity solution trees)
- Designing interview scripts and lightweight experiments
- Synthesizing research into build/park/pass recommendations
What This Skill Cannot Do
- Talk to your customers for you
- Guarantee product-market fit
- Know your specific market and users
- Make final build decisions
References
- Cagan, Marty. "Inspired: How to Create Tech Products Customers Love" (2018)
- Cagan, Marty & Jones, Chris. "Empowered" (2020)
- Torres, Teresa. "Continuous Discovery Habits" (2021)
- Silicon Valley Product Group (SVPG) resources
- Intercom on Product Management
Related Skills
- customer-discovery - Steve Blank's broader framework
- mom-test - Customer interview techniques
- lean-canvas - Business model validation
- shape-up - Basecamp's build methodology
- design-sprint - Google Ventures sprint
Skill Metadata
- Mode: cyborg
- name: product-discovery
- category: product
- subcategory: methodology
- version: 1.0
- author: MKTG Skills
- source_expert: Marty Cagan
- source_work: Inspired, Empowered
- difficulty: intermediate
- estimated_value: $10,000+ product consulting engagement
- tags: [product, discovery, validation, PM, Cagan, SVPG, risk, prototyping]
- created: 2026-01-25
- updated: 2026-01-25