Claude-Skills identify-assumptions

install
source · Clone the upstream repo
git clone https://github.com/borghei/Claude-Skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/borghei/Claude-Skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/project-management/discovery/identify-assumptions" ~/.claude/skills/borghei-claude-skills-identify-assumptions && rm -rf "$T"
manifest: project-management/discovery/identify-assumptions/SKILL.md
source content

Assumption Mapping Expert

Overview

Systematically identify, categorize, and prioritize the assumptions underlying your product decisions. This skill extends Teresa Torres' four risk categories with four additional categories for new products, and uses a devil's advocate approach from PM, Designer, and Engineer perspectives to surface hidden assumptions.

When to Use

  • After ideation, before committing to build.
  • When a product decision "feels right" but has not been validated.
  • When the team disagrees on risk or priority -- assumptions make disagreements explicit.
  • Before designing experiments -- test the riskiest assumptions first.

Risk Categories

Core 4 Categories (Existing Products)

These four categories come from Teresa Torres' Continuous Discovery Habits and cover the primary risks for features within an established product:

| Category | Question It Answers | Example Assumption |
| --- | --- | --- |
| Value | Will customers want this? | "Users will prefer AI-generated summaries over manual note-taking." |
| Usability | Can customers figure out how to use it? | "Users will understand the drag-and-drop interface without a tutorial." |
| Viability | Can the business sustain this? | "The feature will generate enough upgrades to justify the engineering cost." |
| Feasibility | Can we build this? | "Our current infrastructure can handle real-time processing at scale." |

Extended 8 Categories (New Products)

For new products, four additional risk categories become critical:

| Category | Question It Answers | Example Assumption |
| --- | --- | --- |
| Ethics | Should we build this? Are there unintended harms? | "Collecting location data will not create privacy concerns for our target segment." |
| Go-to-Market | Can we reach and acquire customers? | "Our target segment actively searches for solutions on Google, making SEO viable." |
| Strategy & Objectives | Does this align with where we want to go? | "Entering the SMB market will not dilute our enterprise positioning." |
| Team | Do we have the right people and skills? | "Our team can learn the required ML skills within the project timeline." |

Methodology

Phase 1: Devil's Advocate Assumption Surfacing

For each product idea or decision, adopt three adversarial perspectives:

PM Devil's Advocate "I challenge whether this is worth building."

  • Is there real demand, or are we projecting our own preferences?
  • Will this move the metric we care about?
  • Can we sustain this economically?
  • Does this align with strategy, or is it a distraction?

Designer Devil's Advocate "I challenge whether users will actually use this."

  • Will users discover this feature?
  • Can they complete the task without help?
  • Does this add complexity that hurts the overall experience?
  • Are we designing for edge cases and assuming they are common?

Engineer Devil's Advocate "I challenge whether we can build and maintain this."

  • Do we have the technical skills and infrastructure?
  • What are the hidden dependencies and integration risks?
  • Can this scale if it succeeds?
  • What is the ongoing maintenance burden?

Phase 2: Categorize Each Assumption

For each assumption surfaced, assign:

| Field | Options |
| --- | --- |
| Description | Clear, specific statement of what must be true |
| Risk Category | Value / Usability / Viability / Feasibility / Ethics / Go-to-Market / Strategy / Team |
| Confidence | High (we have strong evidence) / Medium (some evidence, not conclusive) / Low (gut feeling or no evidence) |
| Impact | 1-10 scale (if this assumption is wrong, how bad is it?) |
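For illustration, an entry with these fields can be represented as a small data structure. This is a hypothetical sketch (the class and validation are not the actual tool's internals); the allowed values mirror the tables in this document:

```python
from dataclasses import dataclass

# Allowed values from Phase 2 (lowercase, as the tool expects)
CATEGORIES = {"value", "usability", "viability", "feasibility",
              "ethics", "gtm", "strategy", "team"}
CONFIDENCE_LEVELS = {"high", "medium", "low"}

@dataclass
class Assumption:
    description: str   # clear, specific statement of what must be true
    category: str      # one of CATEGORIES
    confidence: str    # one of CONFIDENCE_LEVELS
    impact: int        # 1-10: if this assumption is wrong, how bad is it?

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category!r}")
        if self.confidence not in CONFIDENCE_LEVELS:
            raise ValueError(f"unknown confidence: {self.confidence!r}")
        if not 1 <= self.impact <= 10:
            raise ValueError("impact must be an integer from 1 to 10")
```

Validating categories up front avoids the "category error" failure mode described under Troubleshooting.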

Phase 3: Prioritize Using Impact x Risk Matrix

Calculate a risk score for each assumption:

Risk Score = Impact x (1 - Confidence)

Where confidence maps to: High = 0.8, Medium = 0.5, Low = 0.2

| Impact | Confidence | Risk Score | Meaning |
| --- | --- | --- | --- |
| 9 | Low (0.2) | 7.2 | Critical -- test immediately |
| 9 | High (0.8) | 1.8 | Important but well-understood |
| 3 | Low (0.2) | 2.4 | Low priority |
| 3 | High (0.8) | 0.6 | Ignore |
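The formula and mapping above can be sketched in a few lines of Python (the function name is illustrative, not taken from the tool):

```python
# Fixed confidence-to-number mapping from Phase 3
CONFIDENCE_VALUE = {"high": 0.8, "medium": 0.5, "low": 0.2}

def risk_score(impact: int, confidence: str) -> float:
    """Risk Score = Impact x (1 - Confidence), rounded to one decimal."""
    return round(impact * (1 - CONFIDENCE_VALUE[confidence]), 1)
```

For example, `risk_score(9, "low")` reproduces the 7.2 in the first row of the table.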

Phase 4: Classify into Quadrants

Place each assumption on a 2x2 matrix:

                    HIGH IMPACT
                        |
     Proceed with       |     Test Now
     Confidence         |     (highest priority)
                        |
  ──────────────────────┼──────────────────────
                        |
     Defer              |     Investigate
     (low priority)     |     (may be important)
                        |
                    LOW IMPACT

         LOW RISK ◄─────┼─────► HIGH RISK

| Quadrant | Impact | Risk | Action |
| --- | --- | --- | --- |
| Test Now | High | High | Design an experiment immediately |
| Proceed | High | Low | Move forward with monitoring |
| Investigate | Low | High | Gather more data, may upgrade to Test Now |
| Defer | Low | Low | Accept the risk, revisit if context changes |
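A minimal classifier for this matrix might look like the sketch below. The impact threshold of 7 follows the caveats later in this document; treating medium-or-lower confidence as "high risk" is an assumption made for illustration:

```python
CONFIDENCE_VALUE = {"high": 0.8, "medium": 0.5, "low": 0.2}

def quadrant(impact: int, confidence: str,
             impact_threshold: int = 7, risk_threshold: float = 0.5) -> str:
    """Place an assumption on the 2x2 impact/risk matrix.

    impact_threshold=7 is the default from this skill's caveats;
    risk_threshold=0.5 (medium or low confidence counts as high risk)
    is an illustrative assumption.
    """
    high_impact = impact >= impact_threshold
    high_risk = (1 - CONFIDENCE_VALUE[confidence]) >= risk_threshold
    if high_impact and high_risk:
        return "Test Now"
    if high_impact:
        return "Proceed"
    if high_risk:
        return "Investigate"
    return "Defer"
```

With these thresholds the classifier reproduces the example registry later in this document (e.g. impact 8 with medium confidence lands in Test Now).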

Phase 5: Suggest Tests

For each "Test Now" assumption, recommend a validation approach:

| Assumption Type | Suggested Test Methods |
| --- | --- |
| Value assumptions | Customer interviews, fake door test, landing page test |
| Usability assumptions | Usability test (5 users), prototype walkthrough |
| Viability assumptions | Financial modeling, pricing experiment, unit economics analysis |
| Feasibility assumptions | Technical spike, proof of concept, architecture review |
| Ethics assumptions | Ethics review board, user consent study, regulatory consultation |
| Go-to-Market assumptions | Channel experiment, SEO keyword test, paid ad test |
| Strategy assumptions | Strategy review with leadership, competitive analysis |
| Team assumptions | Skills assessment, hiring timeline analysis, training feasibility |
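The table above is effectively a lookup from category to candidate methods. A sketch, keyed by the lowercase category names the tool expects (the function name is illustrative):

```python
# Category-level starting points from the Phase 5 table; tailor per assumption.
SUGGESTED_TESTS = {
    "value": ["customer interviews", "fake door test", "landing page test"],
    "usability": ["usability test (5 users)", "prototype walkthrough"],
    "viability": ["financial modeling", "pricing experiment", "unit economics analysis"],
    "feasibility": ["technical spike", "proof of concept", "architecture review"],
    "ethics": ["ethics review board", "user consent study", "regulatory consultation"],
    "gtm": ["channel experiment", "SEO keyword test", "paid ad test"],
    "strategy": ["strategy review with leadership", "competitive analysis"],
    "team": ["skills assessment", "hiring timeline analysis", "training feasibility"],
}

def suggest_tests(category: str) -> list[str]:
    """Return category-level test suggestions; empty list for unknown categories."""
    return SUGGESTED_TESTS.get(category, [])
```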

Python Tool: assumption_tracker.py

Track and prioritize assumptions using the CLI tool:

```bash
# Run with demo data
python3 scripts/assumption_tracker.py --demo

# Run with custom input
python3 scripts/assumption_tracker.py input.json

# Output as JSON
python3 scripts/assumption_tracker.py input.json --format json
```

Input Format

```json
{
  "assumptions": [
    {
      "description": "Users will prefer AI summaries over manual notes",
      "category": "value",
      "confidence": "low",
      "impact": 9
    }
  ]
}
```

Output

Sorted by risk priority with quadrant classification and suggested actions.

See `scripts/assumption_tracker.py` for full documentation.
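A minimal sketch of the scoring-and-sorting step, written against the documented input format (this is illustrative, not the actual script):

```python
import json

CONFIDENCE_VALUE = {"high": 0.8, "medium": 0.5, "low": 0.2}

def rank_assumptions(raw: str) -> list[dict]:
    """Parse the documented input JSON and return assumptions sorted by risk score."""
    data = json.loads(raw)
    scored = []
    for a in data["assumptions"]:
        score = round(a["impact"] * (1 - CONFIDENCE_VALUE[a["confidence"]]), 1)
        scored.append({**a, "risk_score": score})
    # Highest-risk assumptions first: these are the "Test Now" candidates
    return sorted(scored, key=lambda a: a["risk_score"], reverse=True)
```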

Output Format

Assumption Registry

| # | Assumption | Category | Confidence | Impact | Risk Score | Quadrant |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | ... | Value | Low | 9 | 7.2 | Test Now |
| 2 | ... | Feasibility | Medium | 8 | 4.0 | Test Now |
| 3 | ... | Usability | High | 7 | 1.4 | Proceed |
| 4 | ... | GTM | Low | 3 | 2.4 | Investigate |

Action Plan for "Test Now" Assumptions

For each assumption in the Test Now quadrant, document:

  • Assumption description
  • Why it is high risk
  • Suggested validation method
  • Owner and timeline

Use `assets/assumption_map_template.md` for the full template.

Integration with Other Discovery Skills

  • Use `brainstorm-ideas/` to generate ideas whose assumptions you will map.
  • Feed "Test Now" assumptions into `brainstorm-experiments/` for experiment design.
  • Run `pre-mortem/` to catch risks that assumption mapping might miss (especially elephants).

Troubleshooting

| Symptom | Likely Cause | Resolution |
| --- | --- | --- |
| All assumptions classified as "Test Now" | Impact scores inflated or confidence levels consistently set to "low" | Calibrate impact scoring with the team; use relative ranking (force distribution across quadrants) |
| Validation fails with category error | Category string does not match the valid set exactly | Use lowercase: `value`, `usability`, `viability`, `feasibility`, `ethics`, `gtm`, `strategy`, `team` |
| Risk scores cluster around the same value | Impact and confidence values lack variance across assumptions | Use the full 1-10 impact scale; challenge the team to differentiate high vs. medium vs. low confidence with evidence |
| Team generates fewer than 10 assumptions | Devil's Advocate perspectives not applied systematically | Run all three perspectives (PM, Designer, Engineer) independently before combining; use the prompts in Phase 1 |
| "Defer" quadrant is empty | All assumptions scored as high impact, or low-impact assumptions not captured | Include operational and edge-case assumptions; not every assumption is existential |
| Suggested tests are too generic | Tool uses category-level test suggestions, not assumption-specific ones | Use the category suggestion as a starting point; tailor the test method using the `brainstorm-experiments/` skill |
| Assumptions not updated after experiments | No feedback loop from experiment results back to the assumption tracker | Re-run `assumption_tracker.py` with updated confidence levels after each experiment completes |

Success Criteria

  • Every product decision has 8-15 explicit assumptions documented before build commitment
  • Every "Test Now" assumption has an assigned owner, validation method, and timeline
  • Risk scores are calculated consistently using the Impact x (1 - Confidence) formula
  • Assumptions are re-scored after each validation experiment (confidence levels update)
  • At least 80% of "Test Now" assumptions are validated or invalidated before major build decisions
  • Assumption map is reviewed and updated at every product discovery cycle (weekly or bi-weekly)
  • The team can articulate the top 3 riskiest assumptions for any active initiative

Scope & Limitations

In Scope:

  • Systematic assumption identification using PM/Designer/Engineer devil's advocate perspectives
  • 8-category risk classification (Value, Usability, Viability, Feasibility, Ethics, Go-to-Market, Strategy, Team)
  • Quantitative risk scoring using Impact x (1 - Confidence) formula
  • Quadrant classification (Test Now, Proceed, Investigate, Defer) with suggested validation methods
  • Assumption registry with priority sorting and action plan generation

Out of Scope:

  • Running validation experiments (see the `brainstorm-experiments/` skill)
  • Product strategy or roadmap decisions (see `execution/outcome-roadmap/`)
  • Technical feasibility deep-dives or architecture reviews (see `engineering/` skills)
  • Financial modeling for viability assumptions (see `finance/` domain skills)

Important Caveats:

  • Confidence levels (high/medium/low) map to fixed numeric values (0.8/0.5/0.2). This is a simplification; real confidence is continuous.
  • The quadrant threshold for "high impact" is set at 7/10. Adjust this threshold for your risk tolerance.
  • Assumption mapping is most effective when done collaboratively (Product Trio), not by a single person.

Integration Points

| Integration | Direction | Description |
| --- | --- | --- |
| `brainstorm-ideas/` | Receives from | Ideas generated become the subjects whose assumptions are mapped |
| `brainstorm-experiments/` | Feeds into | "Test Now" assumptions become hypotheses for experiment design |
| `pre-mortem/` | Complements | Pre-mortem catches risks that assumption mapping may miss (especially elephants) |
| `execution/create-prd/` | Feeds into | Validated assumptions populate the PRD Assumptions section (Section 7) |
| `execution/brainstorm-okrs/` | Feeds into | Viability assumptions inform OKR key result selection and confidence levels |
| `senior-pm/` | Feeds into | High-impact assumptions feed into portfolio risk registers |

Tool Reference

assumption_tracker.py

Tracks, scores, and prioritizes product assumptions using an Impact x Risk matrix with quadrant classification.

| Flag | Type | Default | Description |
| --- | --- | --- | --- |
| `input_file` | positional | (optional) | Path to JSON file with assumptions array |
| `--demo` | flag | off | Run with built-in sample data (8 assumptions across all categories) |
| `--format` | choice | `text` | Output format: `text` or `json` |

Input fields per assumption:

  • `description` (required): Clear statement of what must be true
  • `category` (required): One of `value`, `usability`, `viability`, `feasibility`, `ethics`, `gtm`, `strategy`, `team`
  • `confidence` (required): One of `high`, `medium`, `low`
  • `impact` (required): Integer 1-10

References

  • Teresa Torres, Continuous Discovery Habits (2021)
  • David J. Bland & Alexander Osterwalder, Testing Business Ideas (2019)
  • Ash Maurya, Running Lean (2012)
  • Marty Cagan, Inspired (2018)