Claude-Skills brainstorm-okrs

Install

Source · Clone the upstream repo:

git clone https://github.com/borghei/Claude-Skills

Claude Code · Install into ~/.claude/skills/:

T=$(mktemp -d) && git clone --depth=1 https://github.com/borghei/Claude-Skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/project-management/execution/brainstorm-okrs" ~/.claude/skills/borghei-claude-skills-brainstorm-okrs && rm -rf "$T"

Manifest: project-management/execution/brainstorm-okrs/SKILL.md

Source content

OKR Brainstorming Expert

The agent generates and validates outcome-focused OKR sets using Christina Wodtke's Radical Focus methodology. It produces inspirational objectives with measurable key results, applies counter-metric tests, and scores quality against proven criteria.

Workflow

1. Identify the Theme

The agent asks: "What is the single most important thing this team needs to change this quarter?" The answer becomes the theme. Every OKR must connect back to this theme.

Validation checkpoint: If the user provides more than one theme, the agent pushes back. One theme per team per quarter; multiple themes mean no focus.

2. Generate 3 Distinct OKR Sets

For each set, the agent produces:

  1. Objective -- One qualitative, inspirational statement (no numbers)
  2. Key Result 1 -- Primary metric proving progress
  3. Key Result 2 -- Secondary metric capturing a different dimension
  4. Key Result 3 -- Counter-metric preventing gaming of KR1 and KR2
  5. Rationale -- 2-3 sentences on why this set matters and how it connects to the theme

Objective quality criteria:

  • Qualitative (numbers belong in key results)
  • Inspirational (team would be excited to achieve it)
  • Time-bound (achievable within one quarter)
  • Actionable (team can directly influence the outcome)

Key result quality criteria:

  • Measurable (has a metric with a number)
  • Outcome-focused (measures results, not activities)
  • Set at 60-70% confidence (not sandbagging, not demoralizing)
  • Limited to 3 per objective
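Two of these criteria (measurable, outcome-focused) can be checked mechanically. The sketch below is illustrative only and is not the logic of okr_validator.py; the verb list and function name are assumptions.

```python
import re

# Illustrative sketch of key-result quality checks -- not the actual
# okr_validator.py implementation. The verb list is an assumption.
OUTPUT_VERBS = {"launch", "build", "implement", "ship", "publish", "create"}

def check_key_result(description: str) -> list[str]:
    """Return quality issues found in one key-result description."""
    issues = []
    # Measurable: the description should contain at least one number.
    if not re.search(r"\d", description):
        issues.append("not measurable: no number found")
    # Outcome-focused: flag descriptions that lead with an output verb.
    words = description.strip().split()
    if words and words[0].lower() in OUTPUT_VERBS:
        issues.append(f"output-framed: starts with '{words[0].lower()}'")
    return issues

print(check_key_result("Launch the mobile app"))
# ['not measurable: no number found', "output-framed: starts with 'launch'"]
print(check_key_result("Reduce monthly churn from 4.2% to 2.5%"))
# []
```

A check this shallow catches only surface symptoms; "Ship 12 releases with 99% uptime" would pass it while still being activity-framed, which is why the validator's score is a starting point, not a verdict.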

3. Apply the Counter-Metric Test

For every pair of key results, the agent asks: "Could we hit these numbers by doing something harmful?" If yes, it adds a counter-metric.

Example: If KR1 is "Increase sign-ups by 40%", a counter-metric is "Maintain activation rate above 60%." Without it, the team could game KR1 by lowering sign-up barriers so far that unqualified users flood in.
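The gaming test itself needs human judgment, but its structural half can be automated: every set should carry at least one key result explicitly marked as a counter-metric. A minimal sketch, assuming a hypothetical is_counter flag that is not part of the validator's input schema:

```python
# Hypothetical structural half of the counter-metric test. The
# "is_counter" flag is an illustrative assumption, not a schema field.
def needs_counter_metric(key_results: list[dict]) -> bool:
    """True if no key result in the set is marked as a counter-metric."""
    return not any(kr.get("is_counter", False) for kr in key_results)

okr_set = [
    {"description": "Increase sign-ups by 40%", "is_counter": False},
    {"description": "Maintain activation rate above 60%", "is_counter": True},
]
print(needs_counter_metric(okr_set))  # False -- the set has a counter-metric
```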

4. Validate with Tool

python scripts/okr_validator.py --input okrs.json

The validator scores each OKR set and surfaces quality issues: disguised tasks, missing metrics, output-framed key results, or missing counter-metrics.

Validation checkpoint: Any OKR set scoring below 70% must be revised before committing.

Example: Quarterly OKR Generation

Input: Theme is "retention" for a SaaS product team.

Output:

OKR Set 1:
  Objective: "Become the product teams can't imagine leaving"
  KR1: Reduce monthly churn from 4.2% to 2.5%
  KR2: Increase 90-day retention cohort from 68% to 82%
  KR3 (counter): Maintain NPS score above 45 (prevent forced lock-in tactics)
  Rationale: Churn is the top revenue leak. Improving retention directly
  increases LTV and reduces pressure on acquisition spend.

OKR Set 2:
  Objective: "Make our onboarding so good that users hit value in their first session"
  KR1: Increase Day-1 activation rate from 34% to 55%
  KR2: Reduce time-to-first-value from 12 minutes to under 4 minutes
  KR3 (counter): Maintain support ticket volume below 200/week (don't hide complexity)
  Rationale: Users who activate on Day 1 retain at 3x the rate. Onboarding
  is the highest-leverage retention lever.

OKR Set 3:
  Objective: "Turn our power users into vocal advocates"
  KR1: Increase referral-sourced signups from 8% to 20% of new users
  KR2: Grow active community members from 500 to 2,000
  KR3 (counter): Maintain power user retention above 95% (don't distract them)
  Rationale: Advocacy compounds. Referred users have 37% higher retention
  than paid-acquisition users.

$ python scripts/okr_validator.py --input okrs.json

OKR Validation Results
======================
Set 1: 92/100 - PASS
  Objective: Qualitative, inspirational, time-bound
  KR1: Measurable, outcome-focused, stretch target
  KR2: Measurable, different dimension from KR1
  KR3: Valid counter-metric for churn reduction

Set 2: 88/100 - PASS
  Objective: Qualitative, inspirational, time-bound
  KR1: Measurable, outcome-focused
  KR2: Measurable, tracks different dimension
  KR3: Valid counter-metric
  Note: "under 4 minutes" - verify baseline measurement exists

Set 3: 85/100 - PASS
  Objective: Qualitative, inspirational
  KR1: Measurable, outcome-focused
  KR2: Measurable, but "active" needs precise definition
  KR3: Valid counter-metric

Common OKR Mistakes

| Mistake | Example | Fix |
|---|---|---|
| Disguised task | "Launch the mobile app" | Ask "why?" -- measure the outcome the launch enables |
| Too many OKRs | 5 objectives per team | Pick 1, maybe 2. More means no focus |
| 100% confidence | Target you know you will hit | Stretch to 60-70% confidence |
| Activity metric | "Publish 12 blog posts" | Measure impact: "Increase organic traffic by 30%" |
| Set and forget | Review only at quarter end | Weekly check-ins with confidence scoring |
| Top-down only | All OKRs from leadership | Combine top-down direction with bottom-up team insight |
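The fix for "set and forget" -- weekly check-ins with confidence scoring -- can be as lightweight as mapping each KR's current confidence to a traffic-light status. A minimal sketch; the 0.4/0.7 thresholds are illustrative assumptions, not part of Radical Focus.

```python
# Minimal weekly check-in sketch. The 0.4/0.7 thresholds are
# illustrative assumptions, not prescribed by the methodology.
def confidence_color(confidence: float) -> str:
    """Map a 0.0-1.0 confidence score to a traffic-light status."""
    if confidence >= 0.7:
        return "green"
    if confidence >= 0.4:
        return "yellow"
    return "red"

weekly_checkin = [
    ("Reduce monthly churn to 2.5%", 0.65),
    ("Increase 90-day retention to 82%", 0.30),
]
for kr, score in weekly_checkin:
    print(f"{kr}: {confidence_color(score)}")
```

A KR that sits at green every week is the sandbagging symptom from the table above; a KR stuck at red for two or three weeks is the trigger for a mid-quarter conversation, not a quarter-end surprise.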

OKRs vs KPIs vs North Star Metric

| Concept | Purpose | Cadence | Example |
|---|---|---|---|
| North Star Metric | Single metric capturing core value delivery | Permanent | Weekly active users completing a workflow |
| KPIs | Health indicators across the business | Ongoing | Revenue, churn rate, response time |
| OKRs | Ambitious quarterly goals that move KPIs | Quarterly | "Become the fastest onboarding in our category" |

Relationship: OKRs are the lever pulled to move KPIs toward the North Star Metric. KPIs indicate business health. The NSM indicates core value delivery. OKRs define what changes this quarter.

Tools

| Tool | Purpose | Command |
|---|---|---|
| okr_validator.py | Validate and score OKR sets | python scripts/okr_validator.py --input okrs.json |
| okr_validator.py | Run demo validation | python scripts/okr_validator.py --demo |

Troubleshooting

| Symptom | Likely Cause | Resolution |
|---|---|---|
| OKR set scores below 70% consistently | Key results framed as tasks/outputs instead of outcomes, or objective contains numbers | Ask "So what?" for each KR until you reach a measurable outcome; remove numbers from objectives |
| Validator flags "output-oriented language" | KR description starts with verbs like "launch", "build", "implement", "ship" | Reframe: "Launch mobile app" becomes "Increase mobile-originated revenue from 0% to 15%" |
| Team sets 5+ objectives per quarter | Lack of strategic focus or inability to say no | Enforce 1 theme per team per quarter; use the Radical Focus constraint: one objective, maybe two |
| Key results hit 100% every quarter | Targets are sandbagged at 100% confidence | Stretch to 60-70% confidence; if you hit every KR, you are not being ambitious enough |
| Counter-metrics missing from OKR sets | Team did not apply the gaming test to KR pairs | For every pair of KRs, ask: "Could we hit these numbers by doing something harmful?" Add a counter-metric if yes |
| OKRs set and forgotten until quarter end | No weekly check-in rhythm established | Implement weekly confidence scoring (red/yellow/green) per KR; teams with weekly check-ins complete 43% more goals |
| Validator rejects input JSON | Schema mismatch: missing okr_sets key or key_results array per set | Ensure the JSON has an okr_sets array, each set with an objective string and a key_results array containing description, metric, target_value, current_value |

Success Criteria

  • Each OKR set scores above 80/100 on the validator before committing to the quarter
  • Maximum 1-2 objectives per team per quarter (focus over breadth)
  • Every objective is qualitative and inspirational (no numbers in the objective itself)
  • Each objective has exactly 3 key results: primary metric, secondary dimension, and counter-metric
  • Key results are set at 60-70% confidence (stretch, not sandbagged)
  • Weekly confidence check-ins are conducted, not just end-of-quarter reviews
  • OKR retrospectives run at quarter end with structured review of what was learned

Scope & Limitations

In Scope:

  • OKR brainstorming using Christina Wodtke's Radical Focus methodology
  • Generating 3 distinct OKR sets per theme with counter-metric testing
  • Automated validation and scoring of OKR quality (output detection, metric presence, structural checks)
  • Guidance on OKR vs. KPI vs. North Star Metric distinctions
  • Common OKR mistake identification and remediation

Out of Scope:

  • OKR tracking and progress monitoring over the quarter (use dedicated OKR platforms)
  • Company-level OKR cascade and alignment across teams (see senior-pm/ for portfolio alignment)
  • Individual performance-linked OKRs (OKRs should be team goals, not performance reviews)
  • Metric instrumentation or analytics setup for measuring key results

Important Caveats:

  • OKRs work best when combined with weekly check-ins. Teams that review OKRs only at quarter end see 30-45% lower completion rates.
  • The validator catches structural issues but cannot assess strategic quality. A perfectly scored OKR can still be the wrong goal.
  • OKRs should be aligned top-down (strategic direction) and bottom-up (team insight). Pure top-down OKRs reduce team ownership.

Integration Points

| Integration | Direction | Description |
|---|---|---|
| scrum-master/ | Receives from | Sprint velocity and capacity data inform realistic KR target-setting |
| senior-pm/ | Receives from | Portfolio strategic priorities shape quarterly OKR themes |
| execution/outcome-roadmap/ | Feeds into | OKR key results become success metrics for roadmap Now/Next items |
| execution/prioritization-frameworks/ | Complements | Prioritized initiatives inform which OKR theme to focus on |
| discovery/identify-assumptions/ | Receives from | Validated assumptions increase confidence in OKR target feasibility |
| discovery/brainstorm-experiments/ | Feeds into | Experiment metrics may become OKR key results when validated |

Tool Reference

okr_validator.py

Validates and scores OKR sets against quality criteria. Checks objectives for qualitative/inspirational language, key results for measurable outcomes, and structural completeness.

| Flag | Type | Default | Description |
|---|---|---|---|
| --input | string | (required; mutually exclusive with --demo) | Path to JSON file containing OKR sets |
| --demo | flag | off | Run validation on built-in demo data (mix of good and bad OKRs) |
| --format | choice | text | Output format: text or json |

Input JSON schema:

{
  "okr_sets": [
    {
      "objective": "string (qualitative, no numbers)",
      "key_results": [
        {
          "description": "string",
          "metric": "string (unit of measurement)",
          "target_value": "number",
          "current_value": "number (baseline)"
        }
      ]
    }
  ]
}
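A structural pre-check against this schema can catch the "Validator rejects input JSON" failure from the troubleshooting table before the validator runs. A sketch under the schema above; the error wording is illustrative, not the validator's actual output.

```python
import json

# Structural pre-check mirroring the input schema above. Error messages
# are illustrative; the real validator may report differently.
REQUIRED_KR_FIELDS = {"description", "metric", "target_value", "current_value"}

def validate_structure(raw: str) -> list[str]:
    """Return structural errors for an okrs.json payload (empty if OK)."""
    data = json.loads(raw)
    sets = data.get("okr_sets")
    if not isinstance(sets, list):
        return ["missing 'okr_sets' array"]
    errors = []
    for i, okr in enumerate(sets):
        if not isinstance(okr.get("objective"), str):
            errors.append(f"set {i}: missing 'objective' string")
        krs = okr.get("key_results")
        if not isinstance(krs, list):
            errors.append(f"set {i}: missing 'key_results' array")
            continue
        for j, kr in enumerate(krs):
            missing = REQUIRED_KR_FIELDS - kr.keys()
            if missing:
                errors.append(f"set {i}, KR {j}: missing {sorted(missing)}")
    return errors

sample = json.dumps({"okr_sets": [{
    "objective": "Become the product teams can't imagine leaving",
    "key_results": [{"description": "Reduce monthly churn",
                     "metric": "% churn",
                     "target_value": 2.5,
                     "current_value": 4.2}],
}]})
print(validate_structure(sample))  # []
```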

References

  • references/okr-best-practices.md -- Detailed OKR guide with examples and anti-patterns
  • assets/okr_template.md -- OKR document template and quarterly review format