# Meta-Critic

Audit conversation alignment. Use when: You need to validate quality, check standards compliance, or detect drift. Not for: Creating content, executing tasks, or simple status checks.

Source: [majiayu000/claude-skill-registry-data](https://github.com/majiayu000/claude-skill-registry-data), `data/meta-critic/SKILL.md`

To install, clone the full registry:

```sh
git clone https://github.com/majiayu000/claude-skill-registry-data
```

Or copy only this skill into `~/.claude/skills`:

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry-data "$T" && mkdir -p ~/.claude/skills && cp -r "$T/data/meta-critic" ~/.claude/skills/majiayu000-claude-skill-registry-data-meta-critic && rm -rf "$T"
```
Think of Meta-Critic as a quality assurance inspector—examining the alignment between what was requested, what was delivered, and what standards should apply. Like a surgical checklist, it catches issues before they become problems.
## Quick Navigation

| If you are auditing... | MANDATORY READ WHEN... | Meta-Skill |
|---|---|---|
| Skills | AUDITING SKILLS | skill-development |
| Commands | AUDITING COMMANDS | command-development |
| Agents | AUDITING AGENTS | agent-development |
| Hooks | AUDITING HOOKS | hook-development |
| MCPs | AUDITING MCP SERVERS | mcp-development |
CRITICAL: You MUST understand the meta-skill standards for the component type you're auditing. Without this, your audit will miss critical quality issues.
## Core Role
Execution Mode: Manually invoked for quality validation.
Your job:
- Autonomously investigate conversation history and execution outcomes
- Perform three-way comparison: Request vs Delivery vs Standards
- Intelligently determine which questions to ask and when
- Provide specific, actionable feedback for improvement
You are NOT a content creator - audit only, don't execute.
## The Loop
Execute this iterative process:
### Phase 1: Autonomous Investigation

1. **Scan Context**
   - Review conversation history
   - Examine the user's request
   - Analyze agent actions and outputs
2. **Extract Request**
   - What was explicitly asked for?
   - What constraints were specified?
   - What goals were implied?
3. **Analyze Delivery**
   - What was implemented?
   - How was it executed?
   - What deviations occurred?
4. **Compare with Standards**
   - Check against the applicable meta-skills: skill-development, command-development, hook-development, mcp-development, agent-development
5. **Identify Gaps**
   - Intent misalignment
   - Standards violations
   - Completeness issues
   - Quality concerns
### Phase 2: Iterative Clarification

Ask questions when:
- Investigation reveals multiple possible interpretations
- You need the user's perspective on priorities or severity
- You want confirmation of an issue classification

Skip questions when:
- Investigation provides complete clarity
- Standards violations are unambiguous
- The user explicitly requested an autonomous audit

Question Strategy:
- Use the AskUserQuestions tool
- Ask one question at a time
- Build on previous answers
### Phase 3: Feedback Formulation

Rule: Recommendations must be SPECIFIC and ACTIONABLE.

Each recommendation must include:
- The specific file or section to modify
- The actual text to insert or change
- A reference to the applicable meta-skill standard

Format:

```markdown
## Meta-Critic Review

### Critical Issues (Blocking)
[Specific issues with exact file locations and fixes]

### High Priority Issues
[Specific issues with actionable recommendations]

### Medium Priority Issues
[Specific issues with improvement suggestions]

### Low Priority Issues
[Minor improvements or optimizations]
```

Contrast:
- ✅ Good: "SKILL.md line 5: Change description to follow What-When-Not format"
- ❌ Bad: "Fix description"

Why the good example works: specific recommendations enable immediate action.

Anti-pattern: Do NOT use abstract labels like "Fix description" or "Improve structure".
### Phase 4: Confirmation & Exit
- Present findings with clear severity classification
- Offer to apply changes (Edit tool or TaskList for comprehensive fixes)
- Verify changes are correct
- Exit when user confirms review complete
## Analysis Framework

### Three-Way Comparison
- Request - What user asked for
- Delivery - What agent implemented
- Standards - What knowledge-skills specify
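To keep the three sides explicit, it can help to capture each audit as a small structured record. Below is a minimal Python sketch; the class and method names are illustrative choices, not part of this skill, and the gap check is deliberately naive:

```python
from dataclasses import dataclass, field

@dataclass
class ThreeWayComparison:
    """One audit record: what was asked, what was built, what the standards say."""
    request: str                                         # what the user asked for
    delivery: str                                        # what the agent implemented
    standards: list[str] = field(default_factory=list)  # applicable meta-skill rules

    def gaps(self) -> list[str]:
        """Naive check: flag any standard keyword absent from the delivery."""
        return [rule for rule in self.standards
                if rule.lower() not in self.delivery.lower()]

# The REST-vs-GraphQL mismatch from Example 3 below, as a record:
audit = ThreeWayComparison(
    request="RESTful API endpoints for user management",
    delivery="GraphQL endpoints for user management",
    standards=["RESTful"],
)
print(audit.gaps())  # ['RESTful'] -> intent misalignment
```

In practice the gap analysis is judgment, not string matching; the record is just a way to keep the three sides of the comparison explicit.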
### Issue Classification
Critical (Blocking):
- Security vulnerabilities
- Complete misalignment
- Missing core requirements
High Priority:
- Significant standards drift
- Incomplete implementation
- Quality issues affecting reliability
Medium Priority:
- Minor standard deviations
- Documentation gaps
Low Priority:
- Cosmetic issues
- Nice-to-have enhancements
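If the severity tiers need to be handled programmatically, they map naturally onto an ordered enum, and each finding can carry the three fields Phase 3 requires (location, fix, standard). A sketch with invented names:

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    """Ordered so a higher value means more urgent; only CRITICAL blocks."""
    LOW = 1       # cosmetic issues, nice-to-have enhancements
    MEDIUM = 2    # minor standard deviations, documentation gaps
    HIGH = 3      # standards drift, incomplete implementation, reliability risks
    CRITICAL = 4  # security vulnerabilities, misalignment, missing core requirements

@dataclass
class Finding:
    """One review finding with the Phase 3 essentials: location, fix, standard."""
    severity: Severity
    location: str  # specific file or section to modify
    fix: str       # actual text to insert or change
    standard: str  # applicable meta-skill standard

    @property
    def blocking(self) -> bool:
        return self.severity is Severity.CRITICAL

finding = Finding(
    severity=Severity.HIGH,
    location=".mcp.json",
    fix="Switch to streamable-http with a URL configuration",
    standard="knowledge-mcp: use streamable-http for cloud/production",
)
print(finding.blocking)  # False -> report it, but don't block the review
```

Sorting findings by `severity` in descending order reproduces the report order used in the Phase 3 format.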
## Critical Rules
- Investigate thoroughly - Scan conversation and standards before asking
- Compare three ways - Request vs Delivery vs Standards
- Be specific - Identify exact files and lines, not abstract categories
- Reference standards - Cite applicable knowledge-skills
- Trust judgment - Know when to ask questions and when to proceed
Recognition: "Does this review provide specific, actionable feedback?" → Must include exact file locations and reference standards.
## Validation Framework

Load the appropriate meta-development skill for validation standards.

### Component Type → Meta-Skill Mapping
| Component Type | Load This Skill | Reference |
|---|---|---|
| Skills | skill-development | Validation sections in SKILL.md |
| Commands | command-development | Validation sections in SKILL.md |
| Agents | agent-development | Validation sections in SKILL.md |
| Hooks | hook-development | Validation sections in SKILL.md |
| MCPs | mcp-development | Validation sections in SKILL.md |
Rule: Never hardcode validation rules. The meta-development skills are the single source of truth.
Binary test: "Am I duplicating validation logic?" → If yes, remove and reference the appropriate meta-development skill instead.
## Formal Evaluation Methods
For quantitative validation and Success Criteria design, use Eval-Driven Development (EDD) principles:
See: `references/eval-driven-development.md`
EDD provides:
- Code-based graders: Deterministic checks (file exists, tests pass, build succeeds)
- Model-based graders: Subjective quality assessment (code review, design)
- pass@k metrics: Reliability measurement (pass@1, pass@3, pass@5); see the sketch after this list
- Regression detection: Ensuring existing functionality preserved
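As a concrete reading of the bullets above: a code-based grader is just a deterministic pass/fail function, and pass@k can be computed with the standard unbiased estimator (n independent trials, c of which passed). The sketch below is illustrative; the `pytest` command and the SKILL.md existence check are placeholder choices, not requirements of this skill:

```python
import math
import subprocess
from pathlib import Path

def code_based_grader(project: Path) -> bool:
    """Deterministic checks only: required file exists and the test suite passes."""
    if not (project / "SKILL.md").exists():
        return False
    result = subprocess.run(["pytest", "-q"], cwd=project)
    return result.returncode == 0

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples passes, given c successes observed over n trials."""
    if n - c < k:
        return 1.0  # too few failures to fill a k-sample without a success
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))

# Example: the grader passed on 3 of 5 trials
print(round(pass_at_k(n=5, c=3, k=1), 2))  # 0.6 -> pass@1
print(pass_at_k(n=5, c=3, k=3))            # 1.0 -> pass@3 (any 3-sample hits a success)
```

Tracking pass@1 across runs gives the regression signal mentioned above: a drop means previously reliable behavior now fails the grader.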
When to use EDD:
- Component requires measurable quality gates
- Success Criteria need objective verification
- Tracking reliability over time
- Regression prevention for critical components
## Examples

### Example 1: Missing Implementation
Request: "I need a skill to scan Docker logs and alert on critical errors."
Delivered: Skill created with fork context and `scripts/scan_logs.py`, but the alert mechanism was not implemented.
Meta-Critic Review:
```markdown
### High Priority Issues

**Missing Alert Implementation**
- **File**: docker-log-scanner/SKILL.md
- **Issue**: Alert mechanism mentioned in description but not implemented
- **Fix**: Either remove "alert on critical errors" from the description, OR add an alert configuration and delivery mechanism
```
### Example 2: Standards Violation
Request: "Add an MCP server for web search."
Delivered: `.mcp.json` with the exa MCP using stdio transport.
Meta-Critic Review:
```markdown
### High Priority Issues

**Suboptimal Transport Choice**
- **File**: .mcp.json
- **Issue**: stdio transport used for a cloud service
- **Standard**: knowledge-mcp specifies "Use streamable-http for cloud/production"
- **Fix**: Change to streamable-http with URL configuration
```
### Example 3: Intent Misalignment
Request: "Create RESTful API endpoints for user management."
Delivered: GraphQL endpoints instead (agent decided it was "more modern").
Meta-Critic Review:
```markdown
### Critical Issues (Blocking)

**Technology Mismatch**
- **Issue**: GraphQL implemented when a RESTful API was requested
- **Root Cause**: Agent made an architectural decision without consultation
- **Required Action**: Rebuild as a RESTful API per the original request
- **Process Improvement**: When architectural alternatives exist, ask the user before deviating
```
Recognition: "Does this audit reveal actionable insights?" → Check: 1) Specific file locations, 2) Reference to standards, 3) Clear severity classification.