Claude-skill-registry kpi-pr-throughput

KPI for measuring and improving PR throughput. Defines metrics, measurement methods, and improvement strategies. Use to optimize how many quality PRs get merged.

install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/kpi-pr-throughput" ~/.claude/skills/majiayu000-claude-skill-registry-kpi-pr-throughput && rm -rf "$T"
manifest: skills/data/kpi-pr-throughput/SKILL.md
source content

KPI: PR Throughput

Definition: The rate at which quality PRs are merged to main.

Why This Matters

PR throughput measures productive output. High throughput with maintained quality indicates efficient development. Low throughput suggests blockers in the development pipeline.

Metrics

Primary Metric

PRs Merged Per Session - Count of PRs successfully merged during an agent session.

Supporting Metrics

Metric                    | What It Measures
--------------------------|---------------------------------------
Time to Merge             | Latency from PR creation to merge
Review Cycles             | Number of review rounds needed
First-Pass Approval Rate  | PRs approved without changes requested
Revert Rate               | PRs reverted due to issues

Measurement

During Session

Track:

# Count merged PRs by this agent
gh pr list --state merged --author @me --json number | jq length
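The supporting metrics can be derived from the same `gh` JSON output. As a hedged sketch, here is the time-to-merge computation with a hard-coded sample payload standing in for the real `gh pr list --state merged --author "@me" --json createdAt,mergedAt` response, so the `jq` step is visible on its own:

```shell
# Average time-to-merge in minutes across merged PRs.
# Sample data stands in for real `gh pr list` output.
sample='[{"createdAt":"2024-05-01T10:00:00Z","mergedAt":"2024-05-01T10:45:00Z"},
         {"createdAt":"2024-05-01T12:00:00Z","mergedAt":"2024-05-01T12:15:00Z"}]'
# For each PR: (mergedAt - createdAt) in minutes, then average.
echo "$sample" | jq \
  '[.[] | ((.mergedAt | fromdateiso8601) - (.createdAt | fromdateiso8601)) / 60]
   | add / length'
```

Piping the live `gh` output through the same filter gives the session's average.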

Quality Gate

A merged PR only counts toward throughput if:

  • PR was not reverted
  • Main branch stayed green
  • No follow-up fix PRs needed

Target Benchmarks

Performance  | PRs/Session
-------------|------------
Struggling   | 0
Normal       | 1-2
High         | 3-5
Exceptional  | 5+

Improvement Strategies

1. Scope Down Aggressively

Large PRs take longer to review and have higher failure rates. Target:

  • <300 lines changed per PR
  • Single logical change per PR
  • Clear, focused description
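The line-count target can be enforced before opening the PR. A sketch of a pre-PR size check; the base branch name and limit are assumptions to adjust per repo:

```shell
# Hypothetical pre-PR size check: fail when the diff against the base
# branch exceeds the line limit. Usage: pr_size_check [base] [limit]
pr_size_check() {
  base=${1:-main}
  limit=${2:-300}
  # Sum added + deleted lines across all changed files.
  changed=$(git diff --numstat "$base...HEAD" | awk '{n += $1 + $2} END {print n + 0}')
  if [ "$changed" -gt "$limit" ]; then
    echo "too large: $changed lines changed (limit $limit); split the PR"
    return 1
  fi
  echo "size OK: $changed lines changed"
}
```

Running `pr_size_check main 300` before `gh pr create` catches oversized PRs while splitting is still cheap.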

2. Front-load Verification

Run

./verify.sh --ui=false

before creating the PR. Catching issues early avoids review cycles.
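The front-loading pattern is a simple gate: only open the PR when verification exits zero. A minimal sketch; the verification command is whatever your project uses:

```shell
# Gate PR creation on a passing verification command.
# $1 is the verification command string (project-specific).
open_pr_if_green() {
  if eval "$1"; then
    echo "verification passed; opening PR"
    # gh pr create ...   # real call, omitted in this sketch
  else
    echo "verification failed; fix before opening a PR"
    return 1
  fi
}
```

For this project that would be `open_pr_if_green "./verify.sh --ui=false"`.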

3. Write Good PR Descriptions

Clear descriptions reduce review time:

  • What changed and why
  • How to test
  • Any risks or trade-offs
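One way to keep descriptions consistent is to generate the body from a template and pass it to `gh`. A sketch; the section wording is an assumption to adapt to your team:

```shell
# Write a three-part PR description to a temp file for gh to consume.
body=$(mktemp)
cat > "$body" <<'EOF'
## What changed and why
Describe the single logical change and its motivation.

## How to test
Exact commands or steps a reviewer can run.

## Risks / trade-offs
Anything that could break, and what was consciously deferred.
EOF
# gh pr create --title "Short, imperative summary" --body-file "$body"
```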

4. Stack PRs When Possible

For dependent changes, create stacked PRs that can be reviewed in parallel.
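A stack is just branches built on each other, with each PR targeting the branch below it instead of main. A sketch in a throwaway repo; branch names are placeholders and the `gh` calls are shown commented:

```shell
# Two-PR stack: part 2 branches off part 1, and its PR would target
# part 1 rather than main, so both can be in review at once.
T=$(mktemp -d) && cd "$T" && git init -q -b main
git config user.email bot@example.com && git config user.name bot
echo base > app.txt && git add app.txt && git commit -qm "base"

git checkout -qb feat-part-1            # first slice; its PR targets main
echo part1 >> app.txt && git commit -aqm "part 1"

git checkout -qb feat-part-2            # second slice, stacked on part 1
echo part2 >> app.txt && git commit -aqm "part 2"

# gh pr create --head feat-part-1 --base main
# gh pr create --head feat-part-2 --base feat-part-1
git log --format=%s main..feat-part-2   # shows both stacked commits
```

When part 1 merges, part 2's PR is retargeted to main (hosting platforms often do this automatically).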

5. Address Feedback Promptly

When review feedback arrives, address it immediately while context is fresh.

Blockers to Watch

Blocker               | Detection                       | Mitigation
----------------------|---------------------------------|--------------------------
Review delay          | PR open >1 hour                 | Request review explicitly
Verification failures | verify.sh fails                 | Fix before PR, not after
Scope creep           | PR grows beyond original intent | Split into multiple PRs
Merge conflicts       | Can't merge cleanly             | Rebase frequently
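Review delay can be detected mechanically. A hedged sketch flagging open PRs older than an hour, with a sample payload and a fixed "now" standing in for real `gh pr list --author "@me" --state open --json number,createdAt` output and the current time:

```shell
# Flag PR numbers whose age exceeds one hour (3600 s).
# Sample data and a fixed clock stand in for live gh output.
prs='[{"number":101,"createdAt":"2024-05-01T09:00:00Z"},
      {"number":102,"createdAt":"2024-05-01T11:30:00Z"}]'
now=$(echo '"2024-05-01T12:00:00Z"' | jq 'fromdateiso8601')
echo "$prs" | jq -c --argjson now "$now" \
  '[.[] | select($now - (.createdAt | fromdateiso8601) > 3600) | .number]'
```

With live data you would use `now` (the jq builtin) instead of a fixed timestamp, then explicitly request review on each flagged PR.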

Anti-Patterns

  • Quantity over quality - Merging broken code destroys throughput
  • Mega-PRs - One 1000-line PR is worse than five 200-line PRs
  • Skipping review - Unreviewed code accumulates debt
  • Ignoring CI - Broken verification means broken code