product-org-os · prioritize-features

Prioritize a list of features or initiatives using proven frameworks (RICE, ICE, Kano, MoSCoW, WSJF, Value vs Effort). Produces scored, ranked output with rationale. Activate when: "prioritize", …

Install

Source · clone the upstream repo:

git clone https://github.com/yohayetsion/product-org-os

Claude Code · install into ~/.claude/skills/:

T=$(mktemp -d) && git clone --depth=1 https://github.com/yohayetsion/product-org-os "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/prioritize-features" ~/.claude/skills/yohayetsion-product-org-os-prioritize-features && rm -rf "$T"

Manifest: skills/prioritize-features/SKILL.md

Source Content

Document Intelligence

This skill supports three modes: Create, Update, and Find.

Mode Detection

| Signal | Mode | Confidence |
|--------|------|------------|
| "update", "revise", "re-score" in input | UPDATE | 100% |
| File path provided (@path/to/prioritization.md) | UPDATE | 100% |
| "create", "new", "prioritize these" in input | CREATE | 100% |
| "find", "search", "list prioritizations" in input | FIND | 100% |
| "the prioritization", "our ranking" | UPDATE | 85% |
| Just a list of features | CREATE | 60% |

Threshold: >=85% auto-proceed | 70-84% state assumption | <70% ask user
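
In code terms, the detection logic amounts to matching the strongest signal and comparing its confidence against the thresholds. A minimal sketch, assuming regex matching over the raw input (the function and patterns are illustrative, not part of the skill):

```python
import re

# (signal pattern, mode, confidence): mirrors the detection table above
SIGNALS = [
    (r"\b(update|revise|re-score)\b", "UPDATE", 100),
    (r"@[\w./-]+\.md", "UPDATE", 100),  # a file path was provided
    (r"\b(create|new|prioritize these)\b", "CREATE", 100),
    (r"\b(find|search|list prioritizations)\b", "FIND", 100),
    (r"\b(the prioritization|our ranking)\b", "UPDATE", 85),
]

def detect_mode(user_input: str) -> tuple[str, int]:
    """Return (mode, confidence %) for the strongest matching signal."""
    best = ("CREATE", 60)  # fallback: input is just a list of features
    for pattern, mode, confidence in SIGNALS:
        if confidence > best[1] and re.search(pattern, user_input, re.IGNORECASE):
            best = (mode, confidence)
    return best

mode, confidence = detect_mode("please re-score @product/prioritization.md")
if confidence >= 85:
    print(f"auto-proceed in {mode} mode")
elif confidence >= 70:
    print(f"state the assumption, then proceed in {mode} mode")
else:
    print("ask the user which mode to use")
```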

Mode Behaviors

CREATE: Gather feature list, select framework(s), score, produce ranked output.

UPDATE:

  1. Read existing prioritization (search if path not provided)
  2. Preserve scores for unchanged items
  3. Re-score modified or added items
  4. Show diff summary: "Added: [items]. Re-scored: [items]. Unchanged: [items]."

FIND:

  1. Search paths below for prioritization documents
  2. Present results: title, framework used, item count, path
  3. Ask: "Update one of these, or create new?"

Search Locations

  • product/
  • roadmap/
  • planning/
  • prioritization/

Gotchas

  • Prioritization criteria must be stated explicitly — different frameworks (RICE, ICE, Kano, MoSCoW, WSJF, Value vs Effort) give different results
  • Never fabricate reach, impact, or confidence scores — use data or explicitly label as team estimates
  • Prioritization without strategic alignment is just sorting — connect to strategic bets

Vision to Value Phase

Phase 3: Strategic Commitments - Prioritization converts decisions into executable commitments by ranking what to build and in what order.

Prerequisites: Phase 2 complete (strategic decisions made, business viability confirmed).

Outputs used by: /product-roadmap, /prd, /commitment-check

Methodology

<!-- Source: RICE Scoring — Intercom (Sean McBride, ~2014). Formula: (Reach x Impact x Confidence) / Effort. Originally developed at Intercom to prioritize product ideas objectively. Reach = people or events per time period. Impact = estimated effect per person. Confidence = percentage certainty in estimates. Effort = person-months of work. -->

<!-- Source: Kano Model — Noriaki Kano, "Attractive Quality and Must-Be Quality" (1984), Tokyo University of Science. Categories: Must-Be (Basic), Performance (One-Dimensional), Attractive (Delighters), Indifferent, Reverse. Uses paired functional/dysfunctional questions to classify features. Key insight: satisfaction is not linear — some features only cause dissatisfaction when absent, others delight only when present. -->

<!-- Source: MoSCoW Prioritization — Dai Clegg, Oracle UK (1994). Adopted by DSDM Consortium for Agile projects. Must Have, Should Have, Could Have, Won't Have (this time). Budget allocation rule: Must ~60%, Should ~20%, Could ~20%. Key principle: Must Haves are non-negotiable for the minimum usable subset. -->

<!-- Source: WSJF (Weighted Shortest Job First) — Don Reinertsen, "The Principles of Product Development Flow" (2009). Adopted by SAFe (Scaled Agile Framework). Formula: (Business Value + Time Criticality + Risk Reduction/Opportunity Enablement) / Job Size. Based on Cost of Delay economics. Key insight: prioritize by economic value delivered per unit of time, not just by value alone. -->

<!-- Source: ICE Scoring — Sean Ellis, GrowthHackers (~2010). Originally designed for prioritizing growth experiments. Simpler than RICE (no Reach component) but less precise. Each dimension scored 1-10. -->

<!-- Source: Value vs Effort Matrix — common product management 2×2 framework. Also known as Impact/Effort Matrix or Priority Matrix. Popularized by multiple sources including Eisenhower Matrix variants adapted for product work. -->

Framework Selection Guide

| Framework | Best For | Strengths | Limitations |
|-----------|----------|-----------|-------------|
| RICE | Feature backlogs, product teams | Quantitative, accounts for reach | Effort estimation can be unreliable |
| Kano | Customer-facing features | Reveals non-obvious priorities | Requires customer survey data |
| MoSCoW | Release planning, MVP scoping | Simple, stakeholder-friendly | Subjective without scoring |
| WSJF | Agile/SAFe teams, flow-based prioritization | Accounts for time value | Requires relative sizing discipline |
| ICE | Growth experiments, rapid prioritization | Simple, fast, equal weighting | Less precise than RICE (no Reach) |
| Value vs Effort | Quick triage, executive alignment | Visual, intuitive 2x2 | Binary classification, no granular scoring |

When unsure, ask the user which framework to apply. If they say "just prioritize", default to RICE.

RICE Scoring

| Component | Scale | Guidance |
|-----------|-------|----------|
| Reach | People or events per quarter | How many users/customers will this affect in a quarter? |
| Impact | 3 = Massive, 2 = High, 1 = Medium, 0.5 = Low, 0.25 = Minimal | How much will this move the needle per person reached? |
| Confidence | 100% = High, 80% = Medium, 50% = Low | How confident are you in these estimates? |
| Effort | Person-months | How many person-months will this take? |

Formula: RICE Score = (Reach × Impact × Confidence) / Effort
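
A minimal sketch of the arithmetic, assuming the scales above (the feature names and numbers are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    reach: float       # people or events per quarter
    impact: float      # 3 / 2 / 1 / 0.5 / 0.25
    confidence: float  # 1.0 (High) / 0.8 (Medium) / 0.5 (Low)
    effort: float      # person-months, must be > 0

def rice_score(f: Feature) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort."""
    return (f.reach * f.impact * f.confidence) / f.effort

features = [
    Feature("SSO login", reach=4000, impact=1, confidence=0.8, effort=3),
    Feature("Dark mode", reach=9000, impact=0.5, confidence=0.5, effort=2),
]
for f in sorted(features, key=rice_score, reverse=True):
    print(f"{f.name}: {rice_score(f):.0f}")
# Output: Dark mode: 1125, then SSO login: 1067
```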

Kano Classification

For each feature, ask the functional/dysfunctional question pair:

  • Functional: "How would you feel if this feature were present?"
  • Dysfunctional: "How would you feel if this feature were absent?"
| Response | Code |
|----------|------|
| I like it | L |
| I expect it | E |
| I am neutral | N |
| I can tolerate it | T |
| I dislike it | D |

Classification matrix (rows = Functional answer, columns = Dysfunctional answer):

| Functional \ Dysfunctional | Like | Expect | Neutral | Tolerate | Dislike |
|----------------------------|------|--------|---------|----------|---------|
| Like | Q | A | A | A | O |
| Expect | R | I | I | I | M |
| Neutral | R | I | I | I | M |
| Tolerate | R | I | I | I | M |
| Dislike | R | R | R | R | Q |

M = Must-Be, O = One-Dimensional, A = Attractive, I = Indifferent, R = Reverse, Q = Questionable

Priority order: Must-Be > One-Dimensional > Attractive > Indifferent
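
The matrix lends itself to a direct lookup table. A sketch, assuming answers coded L/E/N/T/D as above (the helper is hypothetical, not part of the skill):

```python
# Lookup implementation of the Kano classification matrix above.
RESPONSES = ["L", "E", "N", "T", "D"]  # Like, Expect, Neutral, Tolerate, Dislike

# Rows = functional answer, columns = dysfunctional answer (order L, E, N, T, D)
MATRIX = {
    "L": ["Q", "A", "A", "A", "O"],
    "E": ["R", "I", "I", "I", "M"],
    "N": ["R", "I", "I", "I", "M"],
    "T": ["R", "I", "I", "I", "M"],
    "D": ["R", "R", "R", "R", "Q"],
}

def kano_category(functional: str, dysfunctional: str) -> str:
    """Map a paired answer to M/O/A/I/R/Q per the matrix above."""
    return MATRIX[functional][RESPONSES.index(dysfunctional)]

assert kano_category("E", "D") == "M"  # expected when present, disliked when absent: Must-Be
assert kano_category("L", "D") == "O"  # liked when present, disliked when absent: One-Dimensional
assert kano_category("L", "N") == "A"  # liked when present, neutral when absent: Attractive
```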

MoSCoW Classification

| Category | Definition | Budget Target |
|----------|------------|---------------|
| Must Have | Non-negotiable for this release. Without it, the release is a failure. | ~60% |
| Should Have | Important but not critical. Painful to leave out but workarounds exist. | ~20% |
| Could Have | Desirable. Included if time/budget allows. | ~20% |
| Won't Have | Agreed to be out of scope this time. May be reconsidered later. | 0% |
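
Once items are classified and sized, the budget targets can be checked mechanically. A sketch with invented effort numbers (any consistent unit such as points or person-days works):

```python
# Check a classified, sized backlog against the ~60/20/20 MoSCoW budget targets.
TARGETS = {"Must": 0.60, "Should": 0.20, "Could": 0.20}

def budget_split(items: list[tuple[str, str, float]]) -> dict[str, float]:
    """items = (name, category, effort); returns each category's share of in-scope effort."""
    in_scope = [(cat, effort) for _, cat, effort in items if cat != "Won't"]
    total = sum(effort for _, effort in in_scope) or 1.0
    return {cat: sum(e for c, e in in_scope if c == cat) / total for cat in TARGETS}

split = budget_split([
    ("Checkout flow", "Must", 8), ("Saved carts", "Should", 3),
    ("Gift wrap", "Could", 2), ("Loyalty tiers", "Won't", 5),
])
for cat, target in TARGETS.items():
    status = "over target: challenge inflation" if split[cat] > target + 0.05 else "ok"
    print(f"{cat}: {split[cat]:.0%} (target ~{target:.0%}) {status}")
```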

WSJF Scoring

| Component | Scale (Fibonacci: 1, 2, 3, 5, 8, 13, 20) | Question |
|-----------|------------------------------------------|----------|
| Business Value | Relative | What is the relative business value? |
| Time Criticality | Relative | How much does delay cost us? |
| Risk Reduction / Opportunity Enablement | Relative | Does this reduce risk or enable new opportunities? |
| Job Size | Relative | How big is this work item? |

Formula: WSJF = (Business Value + Time Criticality + Risk Reduction) / Job Size
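
A minimal sketch of the calculation, assuming all four components stay on the Fibonacci scale (item names and values are illustrative):

```python
# WSJF with all four components on the Fibonacci scale.
FIB = {1, 2, 3, 5, 8, 13, 20}

def wsjf(business_value: int, time_criticality: int, risk_opportunity: int, job_size: int) -> float:
    """Sum the Cost-of-Delay components, divide by Job Size."""
    for v in (business_value, time_criticality, risk_opportunity, job_size):
        assert v in FIB, f"{v} is not on the Fibonacci scale"
    return (business_value + time_criticality + risk_opportunity) / job_size

# Anchor the smallest item at 1 and size everything else relative to it.
backlog = {
    "Payment retries": wsjf(8, 13, 3, 5),  # 24 / 5 = 4.8
    "Audit logging": wsjf(5, 3, 8, 8),     # 16 / 8 = 2.0
}
ranked = sorted(backlog, key=backlog.get, reverse=True)
```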

ICE Scoring

| Component | Scale (1-10) | Guidance |
|-----------|--------------|----------|
| Impact | 1 = Minimal, 10 = Massive | How much will this move the target metric? |
| Confidence | 1 = Pure guess, 10 = Data-backed certainty | How confident are you in the impact estimate? |
| Ease | 1 = Very hard, 10 = Trivial | How easy is this to implement? (Inverse of effort) |

Formula: ICE Score = Impact × Confidence × Ease

Score interpretation: Max = 1000, practical range 1-500. Higher = prioritize first.
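
A minimal sketch, assuming each dimension is an integer rating 1-10 (the experiments are invented for illustration):

```python
def ice_score(impact: int, confidence: int, ease: int) -> int:
    """Multiply three 1-10 ratings; max 1000, higher ranks first."""
    for v in (impact, confidence, ease):
        assert 1 <= v <= 10, "each ICE dimension is scored 1-10"
    return impact * confidence * ease

# Illustrative growth-experiment backlog, ranked by score
experiments = {
    "Onboarding checklist": ice_score(7, 6, 8),  # 336
    "Referral banner": ice_score(5, 4, 9),       # 180
}
ranked = sorted(experiments, key=experiments.get, reverse=True)
```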

When to use over RICE: ICE is faster to score (no Reach estimation needed) and works well for growth experiments, A/B tests, and quick-iteration contexts where all items have similar reach. Use RICE when reach varies significantly across features.

Value vs Effort Matrix

Plot each feature on a 2×2 matrix with Value (vertical axis) and Effort (horizontal axis):

| | Low Effort | High Effort |
|---|-----------|-------------|
| **High Value** | Quick Wins — Do first. High ROI, low investment. | Big Bets — Plan carefully. High reward but significant investment. |
| **Low Value** | Fill-Ins — Do if capacity allows. Low cost, low reward. | Money Pit — Avoid. High cost, low return. |

Scoring approach: Rate Value and Effort each on a 1-5 scale, then classify:

  • Value >= 4, Effort <= 2 → Quick Win
  • Value >= 4, Effort >= 3 → Big Bet
  • Value <= 3, Effort <= 2 → Fill-In
  • Value <= 3, Effort >= 3 → Money Pit

Priority order: Quick Wins > Big Bets > Fill-Ins > Money Pit (avoid)
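
A minimal sketch of the quadrant classification and triage order, using the 1-5 thresholds above (item names and ratings are invented):

```python
def quadrant(value: int, effort: int) -> str:
    """Classify a 1-5 Value/Effort pair using the thresholds above."""
    assert 1 <= value <= 5 and 1 <= effort <= 5
    if value >= 4:
        return "Quick Win" if effort <= 2 else "Big Bet"
    return "Fill-In" if effort <= 2 else "Money Pit"

PRIORITY = {"Quick Win": 0, "Big Bet": 1, "Fill-In": 2, "Money Pit": 3}

# Illustrative backlog: name -> (value, effort)
items = {"Bulk export": (4, 2), "Mobile redesign": (5, 5), "Tooltip copy": (2, 1)}
triaged = sorted(items, key=lambda k: PRIORITY[quadrant(*items[k])])
# -> ['Bulk export', 'Mobile redesign', 'Tooltip copy']
```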

When to use: Best for initial triage with stakeholders, executive-level alignment, or when you need a fast visual prioritization without detailed scoring. Often used as a first pass before applying RICE or WSJF to the Quick Wins and Big Bets quadrants.

Output Structure

# Feature Prioritization: [Context/Product Name]

**Date**: [YYYY-MM-DD]
**Owner**: [Who owns this prioritization]
**Framework(s)**: [RICE / ICE / Kano / MoSCoW / WSJF / Value vs Effort]
**Input source**: [Backlog, stakeholder request, etc.]

## Features Under Consideration

| # | Feature/Initiative | Description |
|---|-------------------|-------------|
| 1 | [Feature name] | [Brief description] |
| 2 | [Feature name] | [Brief description] |

## Scoring

### [Framework Name] Scores

[Framework-specific scoring table — see framework sections above]

## Ranked Results

| Rank | Feature | Score | Category/Tier | Rationale |
|------|---------|-------|---------------|-----------|
| 1 | [Feature] | [Score] | [Must/High/etc.] | [Why it ranked here] |
| 2 | [Feature] | [Score] | [Should/Med/etc.] | [Why it ranked here] |

## Key Insights

- **Top priority**: [Feature] because [reason]
- **Surprising result**: [Feature] ranked [higher/lower] than expected because [reason]
- **Tension**: [Feature A] vs [Feature B] — [tradeoff description]

## Assumptions & Caveats

- [Key assumptions that affect the scoring]
- [Data gaps that reduce confidence]
- [Recommendations for improving confidence]

## Next Steps

- [ ] Validate scores with [stakeholders]
- [ ] Feed top priorities into roadmap via `/product-roadmap`
- [ ] Design experiments for low-confidence items via `/experiment-design`

Instructions

  1. Ask the user for: (a) the list of features/initiatives, (b) which framework(s) to use (default: RICE)
  2. If the user provides features without descriptions, ask for brief descriptions
  3. For RICE: ask the user to estimate Reach and Effort; propose Impact and Confidence based on context
  4. For ICE: score all three dimensions 1-10; ideal for growth experiments and rapid prioritization
  5. For Kano: note that ideal Kano requires customer survey data; offer to classify based on product knowledge as a proxy
  6. For MoSCoW: facilitate classification discussion; challenge "Must Have" inflation
  7. For WSJF: use relative sizing (Fibonacci); anchor with the smallest item as 1
  8. For Value vs Effort: rate each dimension 1-5; classify into quadrants; best as a first-pass triage
  9. Multiple frameworks can be applied to the same list for cross-validation
  10. Save output as markdown file
  11. Offer /product-roadmap to convert the prioritized list into a roadmap

Integration

  • Links to /product-roadmap (prioritized features feed roadmap themes)
  • Links to /commitment-check (validate that prioritized commitments are achievable)
  • Links to /assumption-map (surface assumptions behind scoring)
  • Links to /experiment-design (test assumptions for low-confidence scores)
  • Links to /context-recall (check past prioritization decisions for consistency)