Claude-skill-registry analyze-pr-performance
Analyze code review pipeline performance for a specific PR. Use when investigating slow PRs, identifying bottlenecks, or debugging performance issues in code reviews.
install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/analyze-pr-performance" ~/.claude/skills/majiayu000-claude-skill-registry-analyze-pr-performance && rm -rf "$T"
manifest:
skills/data/analyze-pr-performance/SKILL.md
safety · automated scan (low risk)
This is a pattern-based risk scan, not a security review. Our crawler flagged:
- references .env files
Always read a skill's source content before installing. Patterns alone don't mean the skill is malicious — but they warrant attention.
source content
Analyze PR Performance
Analyze code review pipeline performance for a specific PR.
Usage
Run the analyze-pr-performance CLI script with the provided arguments:
npx ts-node scripts/analyze-pr-performance.cli.ts $ARGUMENTS
Arguments
prNumber (required): The PR number to analyze
orgId (required): The organization ID
Options
--days=N: Number of days to search back (default: 7)
--legacy: Also search in the legacy collection (observability_logs)
--env=PATH: Path to .env file (e.g., --env=.env.prod)
Examples
# Analyze performance for PR #558 in production
/analyze-pr-performance 558 04bd288b-595a-4ee1-87cd-8bbbdc312b3c --env=.env.prod

# Analyze with extended date range
/analyze-pr-performance 723 97442318-9d2a-496b-a0d2-b45fb --days=14 --env=.env.prod

# Analyze with legacy logs included
/analyze-pr-performance 701 97442318-9d2a-496b-a0d2-b45fb --legacy --env=.env.prod
What it analyzes
- Pipeline identification: Finds the pipelineId and correlationId for the PR
- Stage times: Shows the duration of each pipeline stage:
  - ValidateNewCommitsStage
  - ResolveConfigStage
  - FetchChangedFilesStage
  - PRLevelReviewStage
  - FileAnalysisStage
  - CreateFileCommentsStage
  - UpdateCommentsAndGenerateSummaryStage
  - And all other stages...
- LLM calls: Details of each LLM operation:
  - Operation name (analyzeCodeWithAI, selectReviewMode, kodyRulesAnalyzeCodeWithAI, etc.)
  - Duration
  - Model used
  - Token counts (input/output)
- Summary metrics:
  - Total pipeline duration
  - Total LLM calls count
  - Total tokens (input/output)
  - Slow calls count (> 60s)
  - Models used
- Bottlenecks: Highlights stages and LLM calls taking > 60 seconds
- Pipeline status: Whether the pipeline completed, failed, or is unknown
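To make the stage-time and bottleneck reporting concrete, here is a minimal sketch of how stage durations can be turned into percentages of the total and flagged against the 60-second threshold. The `StageTime` shape and function names are illustrative assumptions, not the script's actual API.

```typescript
// Illustrative sketch (not the script's real API): compute each stage's share
// of the total pipeline duration and flag bottlenecks over 60 seconds.
interface StageTime {
  stage: string;
  durationMs: number;
}

function summarizeStages(stages: StageTime[], slowThresholdMs = 60_000) {
  const totalMs = stages.reduce((sum, s) => sum + s.durationMs, 0);
  return stages.map((s) => ({
    ...s,
    // Percentage of total pipeline duration attributable to this stage.
    percentOfTotal: totalMs === 0 ? 0 : (s.durationMs / totalMs) * 100,
    // Matches the report's "> 60s" bottleneck rule.
    isBottleneck: s.durationMs > slowThresholdMs,
  }));
}

const report = summarizeStages([
  { stage: "FetchChangedFilesStage", durationMs: 4_000 },
  { stage: "FileAnalysisStage", durationMs: 96_000 },
]);
// FileAnalysisStage accounts for 96% of the run and exceeds the threshold.
```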
Output
The script outputs:
- Pipeline and correlation IDs
- Organization and repository info
- Stage times table with duration and percentage of total
- LLM calls table with model, tokens, and duration
- Summary metrics
- Bottleneck list (stages and LLM calls > 60s)
- Final pipeline status
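The summary metrics above are simple aggregations over the per-call records. A hedged sketch, assuming a call record with `model`, token counts, and `durationMs` fields (field names are assumptions for illustration):

```typescript
// Illustrative aggregation of LLM call records into the report's summary
// metrics: call count, total tokens, slow-call count, and distinct models.
interface LlmCallRecord {
  model: string;
  inputTokens: number;
  outputTokens: number;
  durationMs: number;
}

function summarizeLlmCalls(calls: LlmCallRecord[]) {
  return {
    totalCalls: calls.length,
    totalInputTokens: calls.reduce((s, c) => s + c.inputTokens, 0),
    totalOutputTokens: calls.reduce((s, c) => s + c.outputTokens, 0),
    // "Slow calls" uses the same > 60s rule as the bottleneck list.
    slowCalls: calls.filter((c) => c.durationMs > 60_000).length,
    modelsUsed: [...new Set(calls.map((c) => c.model))],
  };
}

// Model names and numbers below are made up for the example.
const metrics = summarizeLlmCalls([
  { model: "model-a", inputTokens: 12_000, outputTokens: 800, durationMs: 25_000 },
  { model: "model-a", inputTokens: 30_000, outputTokens: 2_000, durationMs: 95_000 },
]);
```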
How to Respond
- Identify the slowest stages and explain why they might be slow
- Look for LLM calls that are taking too long (> 2 minutes is concerning)
- Check if multiple slow LLM calls are running sequentially vs in parallel
- Note any patterns (e.g., all selectReviewMode calls are slow = possible model issue)
- Suggest potential optimizations if bottlenecks are clear
- If FileAnalysisStage is slow, it's usually due to many files or large files being analyzed
- If PRLevelReviewStage is slow, check the KodyRules and PR-level analysis calls
- Compare token counts to durations: high token counts with proportional time are expected, while low token counts with high time suggest API latency issues
- Report the pipeline status and check if it completed successfully
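Two of the checks above can be sketched mechanically: deciding whether two LLM calls overlapped in time (sequential vs parallel), and relating tokens to duration via a throughput number. The interval fields (`startMs`, `endMs`) are assumed names for illustration, not guaranteed to match the script's log schema.

```typescript
// Illustrative helpers for the analysis guidance above; field names are
// assumptions, not the script's real log schema.
interface LlmCallSpan {
  operation: string;
  startMs: number;
  endMs: number;
}

// Two calls ran in parallel (at least partly) if their time spans overlap.
function callsOverlap(a: LlmCallSpan, b: LlmCallSpan): boolean {
  return a.startMs < b.endMs && b.startMs < a.endMs;
}

// Rough throughput heuristic: a call with few output tokens but a long
// duration (low tokens/second) points at latency rather than generation cost.
function outputTokensPerSecond(outputTokens: number, durationMs: number): number {
  return durationMs > 0 ? outputTokens / (durationMs / 1000) : 0;
}

const a = { operation: "analyzeCodeWithAI", startMs: 0, endMs: 70_000 };
const b = { operation: "selectReviewMode", startMs: 65_000, endMs: 90_000 };
const c = { operation: "kodyRulesAnalyzeCodeWithAI", startMs: 95_000, endMs: 120_000 };
const abParallel = callsOverlap(a, b); // spans overlap: partly parallel
const acParallel = callsOverlap(a, c); // spans disjoint: strictly sequential
const throughput = outputTokensPerSecond(1_500, 30_000); // 50 tokens/second
```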