Babysitter code-review
Structured code-quality assessment with Conventional Comments format, scaled review depth, and soft-gating verdicts that preserve user autonomy.
Install
Source · Clone the upstream repo
git clone https://github.com/a5c-ai/babysitter
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/a5c-ai/babysitter "$T" && mkdir -p ~/.claude/skills && cp -r "$T/library/methodologies/rpikit/skills/code-review" ~/.claude/skills/a5c-ai-babysitter-code-review-9ed438 && rm -rf "$T"
manifest:
library/methodologies/rpikit/skills/code-review/SKILL.md
Code Review
Overview
Assess code quality, design, correctness, and maintainability through a structured 9-step review workflow. Uses Conventional Comments format with file-specific references.
When to Use
- After implementation phase completes
- When reviewing code changes before merge
- As part of the /review-code command
Process
- Identify modified files via git
- Assess change magnitude for review depth
- Execute the 9-step review across context, correctness, design, testing, security flags, operations, and maintainability
- Synthesize findings in standardized report
- Deliver verdict with rationale
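The first two steps above can be sketched with plain git commands. The snippet below builds a throwaway repository so it runs anywhere; the file names and commit are illustrative, not part of the skill:

```shell
#!/bin/sh
# Sketch of steps 1-2: identify modified files and size the change with git.
set -e
T=$(mktemp -d) && cd "$T"
git init -q
git config user.email "reviewer@example.com"   # illustrative identity
git config user.name "reviewer"
printf 'line one\n' > a.txt
git add a.txt && git commit -qm "init"
printf 'line two\n' >> a.txt                   # modify a tracked file
printf 'new file\n' > b.txt && git add b.txt   # stage a new file

# Step 1: files the review must cover (working tree + index vs HEAD).
CHANGED_FILES=$(git diff --name-only HEAD)
echo "$CHANGED_FILES"

# Step 2: a rough change magnitude, later used to pick review depth.
CHANGED_LINES=$(git diff HEAD --shortstat | grep -o '[0-9]* insertion' | cut -d' ' -f1)
echo "${CHANGED_LINES:-0} insertions"
```

In a real review the diff base would be the merge target (for example `main...HEAD`) rather than `HEAD`.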
Review Depth Scaling
- Under 200 lines: full detail review
- 200-1000 lines: focused review on critical areas
- Over 1000 lines: architectural-level review only
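Assuming the three thresholds above, depth selection reduces to a small shell function (the name `review_depth` is ours, not the skill's):

```shell
#!/bin/sh
# Map a diff's changed-line count to a review depth tier,
# using the thresholds listed above. Function name is illustrative.
review_depth() {
  lines="$1"
  if [ "$lines" -lt 200 ]; then
    echo "full"          # under 200 lines: full detail review
  elif [ "$lines" -le 1000 ]; then
    echo "focused"       # 200-1000 lines: focus on critical areas
  else
    echo "architectural" # over 1000 lines: architecture only
  fi
}

review_depth 150   # prints "full"
review_depth 800   # prints "focused"
review_depth 2400  # prints "architectural"
```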
Verdicts
- APPROVE: Ready for security review
- APPROVE WITH NITS: Non-blocking suggestions only
- REQUEST CHANGES: Blocking issues exist (user may override)
Key Rules
- Provide specific file paths and line numbers
- Include at least one positive comment per review
- Use Conventional Comments format with decorations
- Explain reasoning, not just observations
- Limit critical issues to top 5 per category
- Reviews are soft gates preserving user autonomy
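For reference, comments following the rules above might look like this in Conventional Comments format, one with a decoration (the file path and wording are invented for illustration):

```
praise: Clear separation of parsing from validation in src/parser.ts.

suggestion (non-blocking): In src/parser.ts:42, the nested ternary is hard
to follow. Extracting it into a named helper would make the intent clearer.
```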
Tool Use
Invoke via babysitter process:
methodologies/rpikit/rpikit-review