Learn-skills.dev meta-reviewing-reviewing
Code review patterns, feedback principles. Use when reviewing PRs, implementations, or making approval/rejection decisions. Covers self-correction, progress tracking, feedback principles, severity levels.
git clone https://github.com/NeverSight/learn-skills.dev
T=$(mktemp -d) && git clone --depth=1 https://github.com/NeverSight/learn-skills.dev "$T" && mkdir -p ~/.claude/skills && cp -r "$T/data/skills-md/agents-inc/skills/meta-reviewing-reviewing" ~/.claude/skills/neversight-learn-skills-dev-meta-reviewing-reviewing && rm -rf "$T"
data/skills-md/agents-inc/skills/meta-reviewing-reviewing/SKILL.md
Reviewing Patterns
Quick Guide: Read ALL files completely before commenting. Provide specific file:line references for every issue. Distinguish severity (Must Fix vs Should Fix vs Nice to Have). Explain WHY, not just WHAT. Suggest solutions following existing patterns. Acknowledge good work - positive reinforcement teaches what to repeat.
<critical_requirements>
CRITICAL: Before Any Review
(You MUST read ALL files mentioned in the PR/spec completely before providing feedback)
(You MUST provide specific file:line references for every issue found)
(You MUST distinguish severity: Must Fix vs Should Fix vs Nice to Have)
(You MUST explain WHY something is an issue, not just WHAT is wrong)
(You MUST verify success criteria are met with evidence before approving)
(You MUST acknowledge what was done well - not just issues)
</critical_requirements>
Auto-detection: code review, PR review, pull request, review code, check implementation, verify changes
When to use:
- Reviewing any code changes (PRs, implementations, specs)
- Providing structured feedback on code quality
- Making approval/rejection decisions
- Ensuring codebase standards are maintained
When NOT to use:
- When implementing code (use developer skills instead)
- For automated linting/type-checking (use CI/CD tooling)
- For deep security audits (use dedicated security review)
- For high-level architecture decisions (use planning/PM workflows)
Key patterns covered:
- Self-correction checkpoints for reviewers
- Post-action reflection after reviews
- Progress tracking for multi-file reviews
- Feedback principles (specific, explain why, suggest solutions, severity, acknowledge good)
- Decision framework for approval/rejection
- Review-specific anti-patterns (scope creep, unrequested refactoring, not reusing existing utilities)
Detailed Resources:
- examples/core.md - All examples: progress tracking, feedback patterns, anti-patterns
<philosophy>
Philosophy
Code review is about improving code quality while teaching good patterns. Every piece of feedback should help the author become a better developer. Be direct but constructive.
When reviewing code:
- Always read the full context before commenting
- Base feedback on facts, not assumptions
- Distinguish blocking issues from improvements
- Teach through your feedback - explain the "why"
- Recognize good work to reinforce patterns
When NOT to be harsh:
- Don't nitpick style when code is functionally correct
- Don't request changes for personal preference
- Don't block PRs for minor issues that can be follow-ups
- Don't forget that the author worked hard on this
Core principles:
- Evidence-based: Base all feedback on what you actually read
- Actionable: Every issue should have a clear path to resolution
- Proportional: Match feedback severity to actual impact
- Educational: Help authors understand WHY, not just WHAT
- Balanced: Acknowledge good work alongside issues
</philosophy>
<patterns>
Core Patterns
Pattern 1: Self-Correction Triggers
These checkpoints prevent review drift and ensure thorough analysis. Check yourself throughout the review process.
Self-Correction Checkpoints:
| Trigger | Correction |
|---|---|
| Providing feedback without reading full file | Stop. Read the complete file first. |
| Saying "this needs improvement" without specifics | Stop. Provide file:line references. |
| Approving without checking success criteria | Stop. Verify each criterion with evidence. |
| Focusing only on issues | Stop. Add positive feedback. |
| Making assumptions about code behavior | Stop. Read the actual implementation. |
| Flagging issues without explaining WHY | Stop. Add rationale for each issue. |
| Reviewing code outside your domain | Stop. Defer to specialist reviewer. |
Pattern 2: Post-Action Reflection
After completing your review, verify quality before finalizing.
Reflection Questions:
- Did I read all relevant files completely before commenting?
- Did I check against all success criteria in the spec?
- Are my issues specific (file:line) and actionable?
- Did I distinguish severity correctly (blocker vs improvement)?
- Did I acknowledge what was done well?
- Should any part go to a specialist reviewer?
- Is my recommendation (approve/request changes) justified?
Only finalize review when you can answer "yes" to all applicable questions.
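As a rough sketch, the finalize gate above could be modeled in code. The question list and function name here are illustrative assumptions, not part of any real tooling:

```python
# Hypothetical sketch: gate a review on the reflection questions above.
# The question wording and structure are illustrative only.

REFLECTION_QUESTIONS = [
    "Read all relevant files completely before commenting?",
    "Checked against all success criteria in the spec?",
    "Issues specific (file:line) and actionable?",
    "Severity distinguished correctly (blocker vs improvement)?",
    "Acknowledged what was done well?",
    "Specialist review considered where needed?",
    "Recommendation (approve/request changes) justified?",
]

def ready_to_finalize(answers: dict[str, bool]) -> bool:
    """Only finalize when every applicable question is answered 'yes'."""
    # A question missing from `answers` counts as "no".
    return all(answers.get(q, False) for q in REFLECTION_QUESTIONS)
```

A single unanswered or "no" question blocks finalization, mirroring the "yes to all applicable questions" rule.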
Pattern 3: Progress Tracking
For multi-file reviews, track your progress to maintain orientation.
Track These Elements:
- Files Examined: [list of files read completely]
- Success Criteria Status: [checked/unchecked for each criterion]
- Issues Found: [categorized by severity]
- Positive Patterns Noted: [what was done well]
- Deferred Items: [what needs specialist review]
For tracking examples, see examples/core.md.
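The tracked elements above can be sketched as a simple record; the field names below are an assumption for illustration, not a required schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the tracking elements above.
# Field names are hypothetical, not a standard format.

@dataclass
class ReviewProgress:
    files_examined: list[str] = field(default_factory=list)
    criteria_status: dict[str, bool] = field(default_factory=dict)  # criterion -> met?
    issues_found: dict[str, list[str]] = field(default_factory=dict)  # keyed by severity
    positive_patterns: list[str] = field(default_factory=list)
    deferred_items: list[str] = field(default_factory=list)  # for specialist review

    def unmet_criteria(self) -> list[str]:
        """Criteria still unchecked - these block approval."""
        return [c for c, met in self.criteria_status.items() if not met]
```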
Pattern 4: Feedback Principles
All feedback should follow these principles for maximum effectiveness.
Be Specific
Every issue needs a precise location and actionable detail.
Explain Why
Don't just say what's wrong -- explain the impact so authors learn.
Suggest Solutions
Point to existing patterns when possible.
Distinguish Severity
Use clear markers to communicate priority:
| Marker | Category | Examples |
|---|---|---|
| Must Fix | Blockers - cannot approve until fixed | Security vulnerabilities, breaks functionality, missing required criteria, major convention violations |
| Should Fix | Improvements - strongly recommended | Performance optimizations, minor convention deviations, missing edge case handling, code simplification |
| Nice to Have | Suggestions - optional enhancements | Further refactoring, additional tests, documentation, future enhancements |
Acknowledge Good Work
Always include positive feedback alongside issues.
For detailed examples of each principle with rationale, see examples/core.md.
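A minimal sketch combining the principles above - a comment carries a precise location, a severity marker, a "why", and a suggested solution. All names here are hypothetical:

```python
from enum import Enum

# Hypothetical sketch: one review comment following the feedback
# principles above. Names and format are illustrative assumptions.

class Severity(Enum):
    MUST_FIX = "Must Fix"
    SHOULD_FIX = "Should Fix"
    NICE_TO_HAVE = "Nice to Have"

def format_comment(file: str, line: int, severity: Severity,
                   what: str, why: str, suggestion: str) -> str:
    """Render a comment with location, severity, rationale, and fix."""
    return (f"[{severity.value}] {file}:{line}: {what}\n"
            f"Why: {why}\n"
            f"Suggestion: {suggestion}")
```

For example, a security blocker would render with its `file:line`, a "Why" explaining the impact, and a pointer to an existing pattern as the suggestion.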
</patterns><decision_framework>
Decision Framework
Is this a blocking issue?
├─ YES → Does it affect security, functionality, or required criteria?
│   ├─ Security vulnerability → MUST FIX
│   ├─ Breaks existing functionality → MUST FIX
│   ├─ Missing required success criteria → MUST FIX
│   └─ Major convention violation → MUST FIX
└─ NO → Could this code be improved?
    ├─ YES → Is it worth the author's time?
    │   ├─ Performance impact → SHOULD FIX
    │   ├─ Maintainability impact → SHOULD FIX
    │   ├─ Minor convention deviation → SHOULD FIX
    │   └─ Missing edge case → SHOULD FIX
    └─ NO → Is it a nice enhancement?
        ├─ Better documentation → NICE TO HAVE
        ├─ Additional tests → NICE TO HAVE
        ├─ Future improvement → NICE TO HAVE
        └─ Style preference → DON'T MENTION
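The decision tree above can be sketched as a classification function. The boolean issue flags are hypothetical inputs chosen for illustration, not a standard schema:

```python
# Minimal sketch of the decision tree above.
# The flag names on the issue dict are illustrative assumptions.

def classify(issue: dict) -> str:
    """Map an issue's attributes to a severity, per the tree above."""
    # Blocking branch: security, functionality, or required criteria.
    if any(issue.get(k) for k in ("security", "breaks_functionality",
                                  "missing_required_criteria",
                                  "major_convention_violation")):
        return "MUST FIX"
    # Worth the author's time: performance, maintainability, conventions.
    if any(issue.get(k) for k in ("performance", "maintainability",
                                  "minor_convention_deviation",
                                  "missing_edge_case")):
        return "SHOULD FIX"
    # Optional enhancements.
    if any(issue.get(k) for k in ("documentation", "additional_tests",
                                  "future_improvement")):
        return "NICE TO HAVE"
    # Pure style preference falls through.
    return "DON'T MENTION"
```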
Approval Decisions
APPROVE when:
- All success criteria are met with evidence
- Code follows existing conventions
- No critical security or performance issues
- Tests are adequate and passing
- Changes are within scope
REQUEST CHANGES when:
- Success criteria not fully met
- Convention violations exist
- Quality issues need addressing
- Test coverage inadequate
MAJOR REVISIONS NEEDED when:
- Critical security vulnerabilities
- Breaks existing functionality
- Major convention violations
- Fundamental approach issues
When uncertain: Request changes with specific questions rather than blocking indefinitely.
</decision_framework>
<red_flags>
RED FLAGS
High Priority Issues:
- Providing feedback without reading the full file
- No file:line references in issue descriptions
- Approving without verifying success criteria
- Only negative feedback, no acknowledgment of good work
- Reviewing code outside your domain expertise
- Blocking PRs for personal style preferences
Medium Priority Issues:
- Missing severity distinctions (all issues look equal)
- No suggested solutions for identified issues
- Vague feedback ("this needs improvement")
- Not checking for existing patterns before flagging "new code"
- Incomplete review (not all files examined)
Common Mistakes:
- Assuming code behavior without reading implementation
- Flagging valid patterns as "wrong" because unfamiliar
- Missing obvious issues while focusing on minor ones
- Not acknowledging improvement over previous versions
- Providing contradictory feedback (fix X, but also don't change Y)
Gotchas & Edge Cases:
- Some "duplication" is intentional for clarity - verify before flagging
- Performance optimizations may not be needed for low-traffic code
- "Convention violations" may be new patterns not yet documented
- Test coverage percentages don't guarantee quality tests
- "Out of scope" changes may be necessary dependencies
</red_flags>
<critical_reminders>
CRITICAL REMINDERS
(You MUST read ALL files mentioned in the PR/spec completely before providing feedback)
(You MUST provide specific file:line references for every issue found)
(You MUST distinguish severity: Must Fix vs Should Fix vs Nice to Have)
(You MUST explain WHY something is an issue, not just WHAT is wrong)
(You MUST verify success criteria are met with evidence before approving)
(You MUST acknowledge what was done well - not just issues)
Failure to follow these rules will produce low-quality reviews that waste author time and miss important issues.
</critical_reminders>