product-org-os: prioritize-features
Prioritize a list of features or initiatives using proven frameworks (RICE, ICE, Kano, MoSCoW, WSJF, Value vs Effort). Produces scored, ranked output with rationale. Activates on requests such as "prioritize".
```sh
git clone https://github.com/yohayetsion/product-org-os
```
Or install just this skill:
```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/yohayetsion/product-org-os "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/prioritize-features" ~/.claude/skills/yohayetsion-product-org-os-prioritize-features && rm -rf "$T"
```
skills/prioritize-features/SKILL.md
Document Intelligence
This skill supports three modes: Create, Update, and Find.
Mode Detection
| Signal | Mode | Confidence |
|---|---|---|
| "update", "revise", "re-score" in input | UPDATE | 100% |
| File path provided in input | UPDATE | 100% |
| "create", "new", "prioritize these" in input | CREATE | 100% |
| "find", "search", "list prioritizations" | FIND | 100% |
| "the prioritization", "our ranking" | UPDATE | 85% |
| Just a list of features | CREATE | 60% |
Thresholds: >=85% auto-proceed | 70-84% proceed and state the assumption | <70% ask the user
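Illustratively, the threshold rule as a small Python function (the function name and return strings are hypothetical, not part of the skill's contract):

```python
def route(mode: str, confidence: int) -> str:
    """Apply the detection thresholds above to a (mode, confidence) pair."""
    if confidence >= 85:
        return f"auto-proceed in {mode} mode"
    if confidence >= 70:
        return f"proceed in {mode} mode, stating the assumption to the user"
    return "ask the user which mode they intended"
```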
Mode Behaviors
CREATE: Gather feature list, select framework(s), score, produce ranked output.
UPDATE:
- Read existing prioritization (search if path not provided)
- Preserve scores for unchanged items
- Re-score modified or added items
- Show diff summary: "Added: [items]. Re-scored: [items]. Unchanged: [items]."
FIND:
- Search paths below for prioritization documents
- Present results: title, framework used, item count, path
- Ask: "Update one of these, or create new?"
Search Locations
- product/
- roadmap/
- planning/
- prioritization/
Gotchas
- Prioritization criteria must be stated explicitly — different frameworks (RICE, ICE, Kano, MoSCoW, WSJF, Value vs Effort) give different results
- Never fabricate reach, impact, or confidence scores — use data or explicitly label as team estimates
- Prioritization without strategic alignment is just sorting — connect to strategic bets
Vision to Value Phase
Phase 3: Strategic Commitments - Prioritization converts decisions into executable commitments by ranking what to build and in what order.
Prerequisites: Phase 2 complete (strategic decisions made, business viability confirmed)
Outputs used by: /product-roadmap, /prd, /commitment-check
Methodology
<!-- Source: RICE Scoring — Intercom (Sean McBride, ~2014). Formula: (Reach x Impact x Confidence) / Effort. Originally developed at Intercom to prioritize product ideas objectively. Reach = people or events per time period. Impact = estimated effect per person. Confidence = percentage certainty in estimates. Effort = person-months of work. -->
<!-- Source: Kano Model — Noriaki Kano, "Attractive Quality and Must-Be Quality" (1984), Tokyo University of Science. Categories: Must-Be (Basic), Performance (One-Dimensional), Attractive (Delighters), Indifferent, Reverse. Uses paired functional/dysfunctional questions to classify features. Key insight: satisfaction is not linear — some features only cause dissatisfaction when absent, others delight only when present. -->
<!-- Source: MoSCoW Prioritization — Dai Clegg, Oracle UK (1994). Adopted by DSDM Consortium for Agile projects. Must Have, Should Have, Could Have, Won't Have (this time). Budget allocation rule: Must ~60%, Should ~20%, Could ~20%. Key principle: Must Haves are non-negotiable for the minimum usable subset. -->
<!-- Source: WSJF (Weighted Shortest Job First) — Don Reinertsen, "The Principles of Product Development Flow" (2009). Adopted by SAFe (Scaled Agile Framework). Formula: (Business Value + Time Criticality + Risk Reduction/Opportunity Enablement) / Job Size. Based on Cost of Delay economics. Key insight: prioritize by economic value delivered per unit of time, not just by value alone. -->
<!-- Source: ICE Scoring — Sean Ellis, GrowthHackers (~2010). Originally designed for prioritizing growth experiments. Simpler than RICE (no Reach component) but less precise. Each dimension scored 1-10. -->
<!-- Source: Value vs Effort Matrix — common product management 2×2 framework. Also known as Impact/Effort Matrix or Priority Matrix. Popularized by multiple sources including Eisenhower Matrix variants adapted for product work. -->
Framework Selection Guide
| Framework | Best For | Strengths | Limitations |
|---|---|---|---|
| RICE | Feature backlogs, product teams | Quantitative, accounts for reach | Effort estimation can be unreliable |
| Kano | Customer-facing features | Reveals non-obvious priorities | Requires customer survey data |
| MoSCoW | Release planning, MVP scoping | Simple, stakeholder-friendly | Subjective without scoring |
| WSJF | Agile/SAFe teams, flow-based | Accounts for time value | Requires relative sizing discipline |
| ICE | Growth experiments, rapid prioritization | Simple, fast, equal weighting | Less precise than RICE (no Reach) |
| Value vs Effort | Quick triage, executive alignment | Visual, intuitive 2x2 | Binary classification, no granular scoring |
When unsure, ask the user which framework to apply. If they say "just prioritize", default to RICE.
RICE Scoring
| Component | Scale | Guidance |
|---|---|---|
| Reach | People or events per quarter | How many users/customers will this affect in a quarter? |
| Impact | 3 = Massive, 2 = High, 1 = Medium, 0.5 = Low, 0.25 = Minimal | How much will this move the needle per person reached? |
| Confidence | 100% = High, 80% = Medium, 50% = Low | How confident are you in these estimates? |
| Effort | Person-months | How many person-months will this take? |
Formula: RICE Score = (Reach x Impact x Confidence) / Effort
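As a sketch, the formula in Python, assuming Confidence is expressed as a fraction (0.8 for 80%) rather than a percentage; the numbers in the example comment are invented:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort.

    reach: people or events per quarter
    impact: 0.25 / 0.5 / 1 / 2 / 3
    confidence: 0.5 / 0.8 / 1.0 (i.e., 50% / 80% / 100%)
    effort: person-months (must be > 0)
    """
    return (reach * impact * confidence) / effort

# Example: 2,000 users/quarter, high impact (2), 80% confidence, 4 person-months
# -> (2000 * 2 * 0.8) / 4 = 800
```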
Kano Classification
For each feature, ask the functional/dysfunctional question pair:
- Functional: "How would you feel if this feature were present?"
- Dysfunctional: "How would you feel if this feature were absent?"
| Response Options | Code |
|---|---|
| I like it | L |
| I expect it | E |
| I am neutral | N |
| I can tolerate it | T |
| I dislike it | D |
Classification matrix (Functional x Dysfunctional):
| Functional \ Dysfunctional | Like | Expect | Neutral | Tolerate | Dislike |
|---|---|---|---|---|---|
| Like | Q | A | A | A | O |
| Expect | R | I | I | I | M |
| Neutral | R | I | I | I | M |
| Tolerate | R | I | I | I | M |
| Dislike | R | R | R | R | Q |
M = Must-Be, O = One-Dimensional, A = Attractive, I = Indifferent, R = Reverse, Q = Questionable
Priority order: Must-Be > One-Dimensional > Attractive > Indifferent
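For classifying many responses, the matrix encodes directly as a lookup table. A minimal Python sketch (the dictionary mirrors the matrix above; names are illustrative):

```python
# Rows are Functional responses, columns Dysfunctional.
# Codes: L=Like, E=Expect, N=Neutral, T=Tolerate, D=Dislike.
KANO = {
    "L": {"L": "Q", "E": "A", "N": "A", "T": "A", "D": "O"},
    "E": {"L": "R", "E": "I", "N": "I", "T": "I", "D": "M"},
    "N": {"L": "R", "E": "I", "N": "I", "T": "I", "D": "M"},
    "T": {"L": "R", "E": "I", "N": "I", "T": "I", "D": "M"},
    "D": {"L": "R", "E": "R", "N": "R", "T": "R", "D": "Q"},
}

def kano_class(functional: str, dysfunctional: str) -> str:
    """Classify one answer pair, e.g. kano_class('L', 'D') -> 'O'."""
    return KANO[functional][dysfunctional]
```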
MoSCoW Classification
| Category | Definition | Budget Target |
|---|---|---|
| Must Have | Non-negotiable for this release. Without it, the release is a failure. | ~60% |
| Should Have | Important but not critical. Painful to leave out but workarounds exist. | ~20% |
| Could Have | Desirable. Included if time/budget allows. | ~20% |
| Won't Have | Agreed to be out of scope this time. May be reconsidered later. | 0% |
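The ~60/20/20 budget rule can be checked mechanically to guard against "Must Have" inflation. A sketch under the assumption that each item carries a relative effort estimate (function name and units are illustrative):

```python
def moscow_budget_share(items: list[tuple[str, float]]) -> dict[str, float]:
    """items: (category, effort) pairs; returns each category's share of total effort."""
    total = sum(effort for _, effort in items) or 1.0
    shares: dict[str, float] = {}
    for category, effort in items:
        shares[category] = shares.get(category, 0.0) + effort / total
    return shares

# Flag "Must Have" inflation if its share lands well above the ~60% target.
```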
WSJF Scoring
| Component | Scale (Fibonacci: 1, 2, 3, 5, 8, 13, 20) | Question |
|---|---|---|
| Business Value | Relative | What is the relative business value? |
| Time Criticality | Relative | How much does delay cost us? |
| Risk Reduction / Opportunity Enablement | Relative | Does this reduce risk or enable new opportunities? |
| Job Size | Relative | How big is this work item? |
Formula: WSJF = (Business Value + Time Criticality + Risk Reduction) / Job Size
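A direct transcription of the formula as a Python helper (illustrative sketch; validation of Fibonacci inputs is omitted):

```python
def wsjf_score(business_value: int, time_criticality: int, rr_oe: int, job_size: int) -> float:
    """WSJF = (Business Value + Time Criticality + RR/OE) / Job Size.

    All inputs are relative Fibonacci estimates (1, 2, 3, 5, 8, 13, 20).
    """
    return (business_value + time_criticality + rr_oe) / job_size

# Example: (8 + 5 + 3) / 2 = 8.0 -> schedule before a job scoring 3.0
```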
ICE Scoring
| Component | Scale (1-10) | Guidance |
|---|---|---|
| Impact | 1 = Minimal, 10 = Massive | How much will this move the target metric? |
| Confidence | 1 = Pure guess, 10 = Data-backed certainty | How confident are you in the impact estimate? |
| Ease | 1 = Very hard, 10 = Trivial | How easy is this to implement? (Inverse of effort) |
Formula: ICE Score = Impact x Confidence x Ease
Score interpretation: Max = 1000, practical range 1-500. Higher = prioritize first.
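The formula as a Python helper with the 1-10 bounds enforced (an illustrative sketch, not part of the skill):

```python
def ice_score(impact: int, confidence: int, ease: int) -> int:
    """ICE = Impact x Confidence x Ease, each scored 1-10 (max 1000)."""
    for value in (impact, confidence, ease):
        if not 1 <= value <= 10:
            raise ValueError("ICE dimensions must be between 1 and 10")
    return impact * confidence * ease

# Example: ice_score(7, 6, 8) == 336
```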
When to use over RICE: ICE is faster to score (no Reach estimation needed) and works well for growth experiments, A/B tests, and quick-iteration contexts where all items have similar reach. Use RICE when reach varies significantly across features.
Value vs Effort Matrix
Plot each feature on a 2×2 matrix with Value (vertical axis) and Effort (horizontal axis):
| | Low Effort | High Effort |
|---|---|---|
| High Value | Quick Wins — Do first. High ROI, low investment. | Big Bets — Plan carefully. High reward but significant investment. |
| Low Value | Fill-Ins — Do if capacity allows. Low cost, low reward. | Money Pit — Avoid. High cost, low return. |
Scoring approach: Rate Value and Effort each on a 1-5 scale, then classify:
- Value >= 4, Effort <= 2 → Quick Win
- Value >= 4, Effort >= 3 → Big Bet
- Value <= 3, Effort <= 2 → Fill-In
- Value <= 3, Effort >= 3 → Money Pit
Priority order: Quick Wins > Big Bets > Fill-Ins > Money Pit (avoid)
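The cutoffs above translate directly into a classifier. An illustrative sketch:

```python
def quadrant(value: int, effort: int) -> str:
    """Classify a 1-5 Value/Effort pair using the cutoffs above."""
    high_value = value >= 4
    low_effort = effort <= 2
    if high_value and low_effort:
        return "Quick Win"
    if high_value:
        return "Big Bet"
    if low_effort:
        return "Fill-In"
    return "Money Pit"
```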
When to use: Best for initial triage with stakeholders, executive-level alignment, or when you need a fast visual prioritization without detailed scoring. Often used as a first pass before applying RICE or WSJF to the Quick Wins and Big Bets quadrants.
Output Structure
```markdown
# Feature Prioritization: [Context/Product Name]

**Date**: [YYYY-MM-DD]
**Owner**: [Who owns this prioritization]
**Framework(s)**: [RICE / ICE / Kano / MoSCoW / WSJF / Value vs Effort]
**Input source**: [Backlog, stakeholder request, etc.]

## Features Under Consideration

| # | Feature/Initiative | Description |
|---|-------------------|-------------|
| 1 | [Feature name] | [Brief description] |
| 2 | [Feature name] | [Brief description] |

## Scoring

### [Framework Name] Scores

[Framework-specific scoring table — see framework sections above]

## Ranked Results

| Rank | Feature | Score | Category/Tier | Rationale |
|------|---------|-------|---------------|-----------|
| 1 | [Feature] | [Score] | [Must/High/etc.] | [Why it ranked here] |
| 2 | [Feature] | [Score] | [Should/Med/etc.] | [Why it ranked here] |

## Key Insights

- **Top priority**: [Feature] because [reason]
- **Surprising result**: [Feature] ranked [higher/lower] than expected because [reason]
- **Tension**: [Feature A] vs [Feature B] — [tradeoff description]

## Assumptions & Caveats

- [Key assumptions that affect the scoring]
- [Data gaps that reduce confidence]
- [Recommendations for improving confidence]

## Next Steps

- [ ] Validate scores with [stakeholders]
- [ ] Feed top priorities into roadmap via `/product-roadmap`
- [ ] Design experiments for low-confidence items via `/experiment-design`
```
Instructions
- Ask the user for: (a) the list of features/initiatives, (b) which framework(s) to use (default: RICE)
- If the user provides features without descriptions, ask for brief descriptions
- For RICE: ask the user to estimate Reach and Effort; propose Impact and Confidence based on context
- For ICE: score all three dimensions 1-10; ideal for growth experiments and rapid prioritization
- For Kano: note that ideal Kano requires customer survey data; offer to classify based on product knowledge as a proxy
- For MoSCoW: facilitate classification discussion; challenge "Must Have" inflation
- For WSJF: use relative sizing (Fibonacci); anchor with the smallest item as 1
- For Value vs Effort: rate each dimension 1-5; classify into quadrants; best as a first-pass triage
- Multiple frameworks can be applied to the same list for cross-validation
- Save output as markdown file
- Offer to convert the prioritized list into a roadmap via /product-roadmap
Integration
- Links to /product-roadmap (prioritized features feed roadmap themes)
- Links to /commitment-check (validate that prioritized commitments are achievable)
- Links to /assumption-map (surface assumptions behind scoring)
- Links to /experiment-design (test assumptions for low-confidence scores)
- Links to /context-recall (check past prioritization decisions for consistency)