Claude-skill-registry ai-code-reviewer
git clone https://github.com/majiayu000/claude-skill-registry
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/ai-code-reviewer" ~/.claude/skills/majiayu000-claude-skill-registry-ai-code-reviewer && rm -rf "$T"
skills/data/ai-code-reviewer/SKILL.md

AI Code Reviewer Skill
Purpose
Leverages external AI models (OpenAI Codex CLI and Google Gemini CLI) for deep code analysis beyond Claude's built-in capabilities. Provides multi-perspective code reviews with result aggregation and consensus scoring.
Prerequisites
At least one of the following must be installed and authenticated:
- Codex CLI: Run `codex auth` to authenticate
- Gemini CLI: Run `gemini auth login` to authenticate
When to Use
- Deep security analysis requiring external AI perspective
- Performance optimization requests needing specialized analysis
- Multi-model code review for high-confidence findings
- Large codebase analysis with result caching
- Secret and credential detection in code
Available MCP Tools
analyze_code_with_codex
Uses OpenAI Codex for comprehensive code analysis.
- Best for: General code review, bug detection, logical errors
- Input: Code snippet with optional context (project type, language, focus areas)
- Output: Structured findings with severity levels and suggestions
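To make the input/output contract concrete, here is a hypothetical request/response pair for this tool. The field names (`code`, `context`, `findings`, and so on) are illustrative assumptions based on the description above, not the tool's documented schema.

```python
# Illustrative shapes only; field names are assumptions, not the
# tool's actual schema.
request = {
    "code": "def add(a, b): return a - b",
    "context": {
        "language": "python",
        "project_type": "library",
        "focus_areas": ["bugs"],
    },
}

response = {
    "findings": [
        {
            "title": "Subtraction used where addition is expected",
            "severity": "high",
            "line": 1,
            "suggestion": "Return a + b instead of a - b",
        }
    ]
}
```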
analyze_code_with_gemini
Uses Google Gemini for code analysis.
- Best for: Performance analysis, architectural review, style consistency
- Input: Code snippet with optional context
- Output: Structured findings with code examples
analyze_code_combined
Aggregates results from both Codex and Gemini with deduplication.
- Best for: High-stakes reviews requiring consensus
- Features:
  - Parallel execution for speed
  - Result deduplication with similarity threshold
  - Confidence scoring based on agreement
- Output: Merged findings with source attribution
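The dedup-and-consensus step can be sketched as follows. This is a minimal illustration of the idea, not the skill's actual implementation; the function name, the 0.8 similarity threshold, and the confidence values are all assumptions.

```python
from difflib import SequenceMatcher

def merge_findings(codex, gemini, threshold=0.8):
    """Merge two finding lists, deduplicating near-identical titles.

    Findings reported by both models get a higher confidence score
    (cross-model agreement) and dual source attribution. The 0.8
    threshold and 0.6/0.9 confidence values are illustrative.
    """
    merged = [dict(f, sources=["codex"], confidence=0.6) for f in codex]
    for g in gemini:
        for m in merged:
            sim = SequenceMatcher(None, m["title"], g["title"]).ratio()
            if sim >= threshold:
                # Same finding from both models: attribute and boost.
                m["sources"].append("gemini")
                m["confidence"] = 0.9
                break
        else:
            # Unique to Gemini: keep with single-source confidence.
            merged.append(dict(g, sources=["gemini"], confidence=0.6))
    return merged
```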
scan_secrets
Detects hardcoded secrets, API keys, credentials, and sensitive data.
- Best for: Pre-commit security checks
- Patterns: AWS, GCP, Azure, GitHub, database credentials, private keys
- Excludes: Test files, mock files by default
- Output: Secret findings with severity and remediation suggestions
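A pattern-based scanner of this kind can be sketched like so. The two patterns and the exclusion rule are illustrative; the real scanner covers a much broader pattern set (AWS, GCP, Azure, GitHub, database credentials, private keys).

```python
import re

# Illustrative patterns only; the actual scanner's set is broader.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token":   re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

# Test and mock files are excluded by default.
SKIP_MARKERS = ("test", "mock")

def scan_secrets(path, text):
    """Return findings for hardcoded secrets in `text`."""
    if any(marker in path.lower() for marker in SKIP_MARKERS):
        return []
    findings = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append(
                    {"type": name, "line": lineno, "severity": "critical"}
                )
    return findings
```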
get_analysis_status
Retrieves status of async analysis operations.
- Input: Analysis ID from previous tool call
- Output: Status (pending/in_progress/completed/failed), result or error
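A typical way to consume this tool is a poll-until-done loop. In this sketch, `get_analysis_status` stands in for the MCP tool call and is passed as a callable so the example stays self-contained; the state names mirror the list above, but the helper itself is an assumption.

```python
import time

def wait_for_analysis(get_analysis_status, analysis_id,
                      interval=1.0, timeout=60.0):
    """Poll an async analysis until it completes, fails, or times out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_analysis_status(analysis_id)
        if status["state"] == "completed":
            return status["result"]
        if status["state"] == "failed":
            raise RuntimeError(status["error"])
        time.sleep(interval)  # still pending / in_progress
    raise TimeoutError(f"analysis {analysis_id} did not finish in {timeout}s")
```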
Workflow
Step 1: Determine Analysis Type
Ask user for analysis preference:
Question: "What type of AI analysis do you need?"
Options:
- Quick Review - Single model, faster (Codex OR Gemini)
- Deep Review - Combined models with consensus scoring
- Security Scan - Secret detection only
- Performance Focus - Optimization-focused review
- Full Audit - Combined + secret scan
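The mapping from the user's choice to the tool(s) invoked can be summarized in a small dispatch table. The tool names match the list above; the table itself is a sketch of the routing, not code from the skill.

```python
# Hypothetical routing from user selection to MCP tool(s).
DISPATCH = {
    "quick":       ["analyze_code_with_codex"],   # or analyze_code_with_gemini
    "deep":        ["analyze_code_combined"],
    "security":    ["scan_secrets"],
    "performance": ["analyze_code_with_gemini"],  # optimization-focused
    "full_audit":  ["analyze_code_combined", "scan_secrets"],
}
```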
Step 2: Model Selection (if Quick Review)
Question: "Which AI model should be used?"
Options:
- Codex (OpenAI) - Better for bug detection, logical errors
- Gemini (Google) - Better for architectural patterns, style
Step 3: Set Context (Optional)
Question: "What's the project context?"
Options:
- Auto-detect - Infer from code
- Web App (React/Vue) - Frontend focus
- API (Node/Express) - Backend focus
- MCP Server - Protocol focus
- CLI Tool - User tool focus
- Library - Reusability focus
Step 4: Execute Analysis
Call the appropriate MCP tool based on selections.
Step 5: Present Results
Format findings in structured markdown with:
- Overall assessment
- Summary statistics
- Grouped findings by severity
- Actionable recommendations
Response Template
## AI Code Review Results

**Analysis ID**: [id]
**Models Used**: [codex/gemini/combined]
**Cache Status**: [hit/miss]
**Duration**: [Xms]

### Overall Assessment

[AI-generated overall assessment of code quality]

### Summary

| Severity | Count |
|----------|-------|
| Critical | X |
| High | X |
| Medium | X |
| Low | X |
| **Total** | **X** |

### Findings

#### Critical Issues

1. **[Title]** (Line X)
   - **Description**: [...]
   - **Suggestion**: [...]
   - **Code**: `[snippet]`

#### High Priority

[...]

### Recommendations

1. [Prioritized action item 1]
2. [Prioritized action item 2]
3. [...]

---

*Analysis by [model(s)] | Confidence: [X]% | Duration: [X]ms*
Integration Notes
- Works alongside `code-reviewer` for comprehensive analysis
- Complements `security-scanner` with external AI perspective
- Results are cached (1 hour TTL) for repeated queries
- Secret scanning runs locally, no external API calls
- Triggered by `/cr` express command
Error Handling
- CLI Not Found: Gracefully reports missing CLI, suggests installation
- Authentication Failed: Guides user through auth process
- Timeout: Returns partial results with a warning
- Rate Limited: Queues requests with exponential backoff
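The rate-limit strategy can be sketched as retry-with-exponential-backoff. The `RateLimitError` type, retry count, and delay constants here are assumptions for illustration, not the skill's actual values.

```python
import random
import time

class RateLimitError(Exception):
    """Raised when the external CLI reports rate limiting (illustrative)."""

def call_with_backoff(fn, retries=5, base=1.0, cap=30.0, sleep=time.sleep):
    """Retry `fn` on rate limits with exponentially growing, jittered delays."""
    for attempt in range(retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == retries - 1:
                raise  # out of retries, surface the error
            # Full jitter: random delay up to min(cap, base * 2^attempt).
            sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```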
Performance Notes
- Combined analysis runs in parallel by default
- Cache reduces repeated analysis costs
- Large files are truncated with a warning
- SQLite storage for persistent cache
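A persistent cache with these properties (SQLite-backed, 1 hour TTL, keyed on code plus model) could look like the following minimal sketch. The class name, schema, and hashing scheme are assumptions, not the skill's actual implementation.

```python
import hashlib
import json
import sqlite3
import time

class AnalysisCache:
    """SQLite-backed result cache with a TTL (default 1 hour)."""

    def __init__(self, path=":memory:", ttl=3600):
        self.ttl = ttl
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS cache "
            "(key TEXT PRIMARY KEY, result TEXT, created REAL)"
        )

    def _key(self, code, model):
        # Key on model + code so each model's results cache independently.
        return hashlib.sha256(f"{model}:{code}".encode()).hexdigest()

    def get(self, code, model):
        row = self.db.execute(
            "SELECT result, created FROM cache WHERE key = ?",
            (self._key(code, model),),
        ).fetchone()
        if row and time.time() - row[1] < self.ttl:
            return json.loads(row[0])  # cache hit
        return None                    # miss or expired

    def put(self, code, model, result):
        self.db.execute(
            "INSERT OR REPLACE INTO cache VALUES (?, ?, ?)",
            (self._key(code, model), json.dumps(result), time.time()),
        )
        self.db.commit()
```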