# Install

**Source** · Clone the upstream repo:

```bash
git clone https://github.com/jmagly/aiwg
```

**Claude Code** · Install into `~/.claude/skills/`:

```bash
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/jmagly/aiwg "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/.agents/skills/retrospective-analyzer" ~/.claude/skills/jmagly-aiwg-retrospective-analyzer \
  && rm -rf "$T"
```
**Manifest:** `.agents/skills/retrospective-analyzer/SKILL.md` (source content below)
# Retrospective Analyzer

Analyze team retrospectives for insights.
## Instructions

### Retrospective Setup
- Identify sprint to analyze (default: most recent)
- Check Linear MCP connection for sprint data
- Define retrospective format preference
- Set analysis time range
### Sprint Data Collection

#### Quantitative Metrics
From Linear/Project Management:

- Planned vs completed story points
- Sprint velocity and capacity
- Cycle time and lead time
- Escaped defects count
- Unplanned work percentage

From Git/GitHub:

- Commit frequency and distribution
- PR merge time statistics
- Code review turnaround
- Build success rate
- Deployment frequency
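As a sketch of how the delivery numbers above could be derived once issue data is in hand; the issue fields below are illustrative, not a real Linear or GitHub schema:

```python
# Compute core delivery metrics from a list of sprint issues.
# The dict fields ("points", "planned", "done") are illustrative only.

def delivery_metrics(issues):
    planned = sum(i["points"] for i in issues if i["planned"])
    completed = sum(i["points"] for i in issues if i["done"])
    unplanned = sum(i["points"] for i in issues if not i["planned"])
    total = planned + unplanned
    return {
        "planned_points": planned,
        "completed_points": completed,
        "completion_rate": round(100 * completed / planned, 1) if planned else 0.0,
        "unplanned_pct": round(100 * unplanned / total, 1) if total else 0.0,
    }

issues = [
    {"points": 5, "planned": True, "done": True},
    {"points": 3, "planned": True, "done": False},
    {"points": 2, "planned": False, "done": True},  # unplanned work
]
print(delivery_metrics(issues))
# {'planned_points': 8, 'completed_points': 7, 'completion_rate': 87.5, 'unplanned_pct': 20.0}
```

Velocity, cycle time, and the git-side metrics would come from separate queries; the same shape of aggregation applies.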
#### Qualitative Data Sources

1. PR review comments sentiment
2. Commit message patterns
3. Slack conversations (if available)
4. Previous retrospective action items
5. Support ticket trends
### Automated Analysis

#### Sprint Performance Analysis
```markdown
# Sprint [Name] Retrospective Analysis

## Sprint Overview
- Duration: [Start] to [End]
- Team Size: [Number] members
- Sprint Goal: [Description]
- Goal Achievement: [Yes/Partial/No]

## Key Metrics Summary

### Delivery Metrics
| Metric | Target | Actual | Variance |
|--------|--------|--------|----------|
| Velocity | [X] pts | [Y] pts | [+/-Z]% |
| Completion Rate | 90% | [X]% | [+/-Y]% |
| Defect Rate | <5% | [X]% | [+/-Y]% |
| Unplanned Work | <20% | [X]% | [+/-Y]% |

### Process Metrics
| Metric | This Sprint | Previous | Trend |
|--------|-------------|----------|-------|
| Avg PR Review Time | [X] hrs | [Y] hrs | [↑/↓] |
| Avg Cycle Time | [X] days | [Y] days | [↑/↓] |
| CI/CD Success Rate | [X]% | [Y]% | [↑/↓] |
| Team Happiness | [X]/5 | [Y]/5 | [↑/↓] |
```
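The Variance and Trend columns are simple derived values; a minimal sketch of the arithmetic (the helper names are my own, not part of the skill):

```python
# Helpers for the Variance and Trend columns in the metrics tables.

def variance_pct(target, actual):
    """Signed variance of actual vs target, as a percentage."""
    return round(100 * (actual - target) / target, 1)

def trend(current, previous):
    """Arrow for the Trend column; direction reflects the raw change."""
    if current == previous:
        return "→"
    return "↑" if current > previous else "↓"

print(variance_pct(50, 45))  # -10.0 (velocity 10% under target)
print(trend(4.2, 6.0))       # ↓ (avg review time fell)
```

Whether ↓ is good or bad depends on the metric (review time falling is good, happiness falling is not), so the arrow should be interpreted per row.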
#### Pattern Recognition

```markdown
## Identified Patterns

### Positive Patterns 🟢
1. **Improved Code Review Speed**
   - Average review time decreased by 30%
   - Correlation with new review guidelines
   - Recommendation: Document and maintain process
2. **Consistent Daily Progress**
   - Even commit distribution throughout sprint
   - No last-minute rush
   - Indicates good sprint planning

### Concerning Patterns 🔴
1. **Monday Deploy Failures**
   - 60% of failed deployments on Mondays
   - Possible cause: Weekend changes not tested
   - Action: Implement Monday morning checks
2. **Increasing Scope Creep**
   - 35% unplanned work (up from 20%)
   - Source: Urgent customer requests
   - Action: Review sprint commitment process
```
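A pattern like the Monday deploy failures can be surfaced mechanically; a hedged sketch, assuming deployment records carry a date and a success flag (record shape is illustrative):

```python
from collections import Counter
from datetime import date

# Flag weekdays holding a disproportionate share of failed deployments.

def failure_days(deploys, threshold=0.5):
    failures = [d["date"].strftime("%A") for d in deploys if not d["success"]]
    counts = Counter(failures)
    total = len(failures)
    return {day: n / total for day, n in counts.items() if n / total >= threshold}

deploys = [
    {"date": date(2024, 6, 3), "success": False},   # Monday
    {"date": date(2024, 6, 4), "success": True},    # Tuesday
    {"date": date(2024, 6, 10), "success": False},  # Monday
    {"date": date(2024, 6, 12), "success": False},  # Wednesday
]
print(failure_days(deploys))  # Monday flagged (2/3 of failures)
```

The 0.5 threshold is arbitrary; a real check would also require a minimum sample size before flagging a day.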
### Interactive Retrospective Facilitation

#### Pre-Retrospective Report

```markdown
# Pre-Retrospective Insights

## Data-Driven Discussion Topics

### 1. What Went Well
Based on the data, these areas showed improvement:
- ✅ Code review efficiency (+30%)
- ✅ Test coverage increase (+5%)
- ✅ Zero critical bugs in production
- ✅ All team members contributed evenly

**Suggested Discussion Questions:**
- What specific changes led to faster reviews?
- How can we maintain zero critical bugs?
- What made work distribution successful?

### 2. What Didn't Go Well
Data indicates challenges in these areas:
- ❌ Sprint velocity miss (-15%)
- ❌ High unplanned work (35%)
- ❌ 3 rollbacks required
- ❌ Team overtime increased

**Suggested Discussion Questions:**
- What caused the velocity miss?
- How can we better handle unplanned work?
- What led to the rollbacks?

### 3. Action Items from Data
Recommended improvements based on patterns:
1. Implement feature flags for safer deployments
2. Create unplanned work budget in sprint planning
3. Add integration tests for [problem area]
4. Schedule mid-sprint check-ins
```
#### Live Retrospective Support

During the retrospective, I can help with:

1. **Fact Checking**: "Actually, our velocity was 45 points, not 50"
2. **Pattern Context**: "This is the 3rd sprint with Monday deploy issues"
3. **Historical Comparison**: "Last time we had similar issues, we tried X"
4. **Action Item Tracking**: "From last retro, we completed 4/6 action items"
### Retrospective Output Formats

#### Standard Retrospective Summary

```markdown
# Sprint [X] Retrospective Summary

## Participants
[List of attendees]

## What Went Well
- [Categorized list with vote counts]
- Supporting data: [Metrics]

## What Didn't Go Well
- [Categorized list with vote counts]
- Root cause analysis: [Details]

## Action Items
| Action | Owner | Due Date | Success Criteria |
|--------|-------|----------|------------------|
| [Action 1] | [Name] | [Date] | [Measurable outcome] |
| [Action 2] | [Name] | [Date] | [Measurable outcome] |

## Experiments for Next Sprint
1. [Experiment description]
   - Hypothesis: [What we expect]
   - Measurement: [How we'll know]
   - Review date: [When to assess]

## Team Health Pulse
- Energy Level: [Rating]/5
- Clarity: [Rating]/5
- Confidence: [Rating]/5
- Key Quote: "[Notable team sentiment]"
```
#### Trend Analysis Report

```markdown
# Retrospective Trends Analysis

## Recurring Themes (Last 5 Sprints)

### Persistent Challenges
1. **Deployment Issues** (4/5 sprints)
   - Root cause still unresolved
   - Recommended escalation
2. **Estimation Accuracy** (5/5 sprints)
   - Consistent 20% overrun
   - Needs systematic approach

### Improving Areas
1. **Communication** (Improving for 3 sprints)
2. **Code Quality** (Steady improvement)

### Success Patterns
1. **Pair Programming** (Mentioned positively 5/5)
2. **Daily Standups** (Effective format found)
```
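Recurrence counts like "4/5 sprints" can be tallied from per-sprint theme lists; a small sketch with illustrative data:

```python
from collections import Counter

# Count how often each theme recurs across recent retrospectives.
# The theme strings and per-sprint lists are illustrative.

def recurring_themes(retros, min_sprints=3):
    # set() per retro so a theme counts at most once per sprint
    counts = Counter(t for themes in retros for t in set(themes))
    return {t: f"{n}/{len(retros)} sprints" for t, n in counts.items() if n >= min_sprints}

retros = [
    ["deployment issues", "estimation accuracy"],
    ["estimation accuracy"],
    ["deployment issues", "estimation accuracy", "communication"],
    ["deployment issues", "estimation accuracy"],
    ["deployment issues", "estimation accuracy"],
]
print(recurring_themes(retros))
# {'deployment issues': '4/5 sprints', 'estimation accuracy': '5/5 sprints'}
```

Real theme lists would need normalization first (casing, synonyms) before counting is meaningful.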
### Action Item Generation

#### SMART Action Items

Based on retrospective discussion, here are SMART action items:

1. **Reduce Deploy Failures**
   - Specific: Implement smoke tests for Monday deploys
   - Measurable: <5% failure rate
   - Assignable: DevOps team
   - Relevant: Addresses 60% of failures
   - Time-bound: By next sprint
2. **Improve Estimation**
   - Specific: Use planning poker for all stories
   - Measurable: <20% variance from estimates
   - Assignable: Scrum Master facilitates
   - Relevant: Addresses velocity misses
   - Time-bound: Start next sprint planning
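One way to keep action items honest is to model the SMART fields explicitly and reject items with blanks; a sketch (the `ActionItem` class is hypothetical, not part of the skill):

```python
from dataclasses import dataclass, fields

# An action item with one slot per SMART criterion; an item is only
# "SMART" if every slot has non-blank content.

@dataclass
class ActionItem:
    specific: str
    measurable: str
    assignable: str
    relevant: str
    time_bound: str

    def is_smart(self):
        return all(getattr(self, f.name).strip() for f in fields(self))

item = ActionItem(
    specific="Implement smoke tests for Monday deploys",
    measurable="<5% failure rate",
    assignable="DevOps team",
    relevant="Addresses 60% of failures",
    time_bound="By next sprint",
)
print(item.is_smart())  # True
```

This only catches missing fields, not vague ones; "improve process" in the `specific` slot would still pass and needs a human reviewer.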
## Error Handling

### No Linear Data

```text
Linear MCP not connected. Using git data only.

Missing insights:
- Story point analysis
- Task-level metrics
- Team capacity data

Would you like to:
1. Proceed with git data only
2. Manually input sprint metrics
3. Connect Linear and retry
```
### Incomplete Sprint

```text
Sprint appears to be in progress. Current analysis based on:
- [X] days of [Y] total
- [Z]% work completed

Recommendation: Run full analysis after sprint ends

Proceed with partial analysis? [Y/N]
```
## Advanced Features

### Sentiment Analysis

```python
# Analyze PR comments and commit messages
sentiment_indicators = {
    'positive': ['fixed', 'improved', 'resolved', 'great'],
    'negative': ['bug', 'issue', 'broken', 'failed', 'frustrated'],
    'neutral': ['updated', 'changed', 'modified'],
}
```

Generated sentiment report:

```text
Team Sentiment Analysis:
- Positive indicators: 65%
- Negative indicators: 25%
- Neutral: 10%

Trend: Improving from last sprint (was 55% positive)
```
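A runnable sketch of how indicator lists like these could produce the report percentages; the whitespace tokenization and per-word matching are assumptions, not the skill's actual method:

```python
# Keyword-based sentiment tally over commit messages / PR comments.
# Indicator lists mirror the ones above; matching rules are assumptions.

sentiment_indicators = {
    "positive": ["fixed", "improved", "resolved", "great"],
    "negative": ["bug", "issue", "broken", "failed", "frustrated"],
    "neutral": ["updated", "changed", "modified"],
}

def sentiment_breakdown(messages):
    hits = {label: 0 for label in sentiment_indicators}
    for msg in messages:
        words = msg.lower().split()
        for label, indicators in sentiment_indicators.items():
            hits[label] += sum(w in words for w in indicators)
    total = sum(hits.values()) or 1  # avoid division by zero
    return {label: round(100 * n / total) for label, n in hits.items()}

messages = ["fixed login bug", "improved tests resolved flake", "updated docs"]
print(sentiment_breakdown(messages))
# {'positive': 60, 'negative': 20, 'neutral': 20}
```

Plain word matching misses negation ("not fixed") and punctuation; a production version would at least strip punctuation and handle phrase-level context.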
### Predictive Insights

```text
Based on current patterns:

⚠️ Risk Predictions:
- 70% chance of velocity miss if unplanned work continues
- Deploy failures likely to increase without intervention

💡 Opportunity Predictions:
- 15% velocity gain possible with proposed process changes
- Team happiness likely to improve with workload balancing
```
### Experiment Tracking

```text
Previous Experiment Results:
1. 'No Meeting Fridays' (Sprint 12-14)
   - Result: 20% productivity increase
   - Recommendation: Make permanent
2. 'Pair Programming for Complex Tasks' (Sprint 15)
   - Result: 50% fewer defects
   - Recommendation: Continue with guidelines
```
## Integration Options

- Linear: Create action items as tasks
- Slack: Post summary to team channel
- Confluence: Export formatted retrospective page
- GitHub: Create issues for technical debt items
- Calendar: Schedule action item check-ins
## Best Practices

- **Data Before Discussion**: Review metrics first
- **Focus on Patterns**: Look for recurring themes
- **Action-Oriented**: Every insight needs action
- **Time-boxed**: Keep the retrospective focused
- **Follow-up**: Track action item completion
- **Celebrate Wins**: Acknowledge improvements
- **Safe Space**: Encourage honest feedback
## References

- @$AIWG_ROOT/agentic/code/addons/aiwg-utils/rules/research-before-decision.md — Collect sprint data and previous retrospective action items before pattern analysis
- @$AIWG_ROOT/agentic/code/addons/aiwg-utils/rules/vague-discretion.md — Action items must be SMART (specific, measurable); avoid "improve process" without concrete steps
- @$AIWG_ROOT/agentic/code/addons/aiwg-utils/rules/human-authorization.md — Identify patterns and options; await team discussion before assigning retrospective action owners
- @$AIWG_ROOT/agentic/code/frameworks/sdlc-complete/skills/project-health-check/SKILL.md — Health metrics feed into retrospective data collection as quantitative evidence