Claude-skill-registry-data marketplace-analysis
Use when reviewing plugin quality, auditing plugins, analyzing the marketplace, checking plugins against Anthropic standards, or evaluating plugin architecture - provides systematic analysis methodology with validation framework
install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry-data
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry-data "$T" && mkdir -p ~/.claude/skills && cp -r "$T/data/marketplace-analysis" ~/.claude/skills/majiayu000-claude-skill-registry-data-marketplace-analysis && rm -rf "$T"
manifest:
data/marketplace-analysis/SKILL.md
source content
Marketplace Analysis
Analyze Claude Code plugins to achieve Anthropic-level quality standards.
Core Philosophy
Anthropic Quality Bar: Same or more functionality with leaner, more efficient implementation.
Principles:
- Systems thinking over point fixes
- Elegant simplicity over feature accumulation
- Proven improvements over assumptions
- Deletion over addition
Analysis Process
1. Quick Scan
- Count plugins and components
- Note obvious issues (large files, naming inconsistencies)
- Flag files >500 lines
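The quick scan can be sketched in shell; the one-plugin-per-directory layout and the file extensions are assumptions, not part of the skill:

```shell
# Quick-scan sketch: count plugins and flag oversized files.
# Assumes one plugin per top-level directory under $ROOT.
ROOT="${1:-.}"

# 1. Count plugins (top-level directories).
find "$ROOT" -mindepth 1 -maxdepth 1 -type d | wc -l

# 2. Flag files longer than 500 lines.
find "$ROOT" -type f \( -name '*.md' -o -name '*.sh' -o -name '*.py' \) \
  -exec awk 'END { if (NR > 500) print FILENAME ": " NR " lines" }' {} \;
```

Running `awk` once per file keeps `NR` scoped to that file, so the `END` block sees its line count.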
2. Deep Analysis (per plugin)
- Read SKILL.md files - check trigger phrases, writing style
- Read agent descriptions - check triggering examples
- Read commands - check argument handling
- Check hooks - validate event usage
- Map interactions - how components work together
3. Cross-Plugin Analysis
- Find redundancy across plugins
- Check consistency (naming, patterns, styles)
- Identify gaps and conflicts
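One redundancy signal can be checked mechanically; the `<plugin>/commands/*.md` layout here is an assumption about the marketplace structure:

```shell
# Cross-plugin redundancy sketch: list command names that appear
# in more than one plugin (assumes commands live at <plugin>/commands/*.md).
ROOT="${1:-.}"
find "$ROOT" -path '*/commands/*.md' -exec basename {} \; | sort | uniq -d
```

Any name printed exists in at least two plugins and is a candidate for consolidation review.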
4. Reference Validation
For each skill, verify bundled references exist:
- Extract paths from SKILL.md:
  - Mentions of references/*.md or scripts/*.sh
  - Mentions of scripts/*.py
  - Markdown links: [text](relative/path)
- Validate each path:
  - Resolve relative to the skill directory
  - Check the file exists with Glob
  - Flag missing files as "broken reference"
- Report:
  - Missing references = Priority 1 errors
  - Orphaned files (exist but not referenced) = Priority 3 notes
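A minimal shell approximation of the extract-and-validate steps; the grep pattern is an assumption and catches both plain mentions and markdown link targets under references/ and scripts/:

```shell
# Reference-validation sketch for one skill directory.
# Extracts references/* and scripts/* paths from SKILL.md and
# flags any that do not exist on disk.
SKILL_DIR="${1:-.}"
grep -oE '(references|scripts)/[A-Za-z0-9._/-]+' "$SKILL_DIR/SKILL.md" | sort -u |
while read -r ref; do
  [ -e "$SKILL_DIR/$ref" ] || echo "broken reference: $ref"
done
```

Each "broken reference" line maps to a Priority 1 error in the report; the orphaned-file check (files on disk never mentioned) would invert this by walking references/ and scripts/ instead.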
Anti-Overengineering Checks
Before proposing ANY change:
- Is this simpler than the original?
- Does this solve a real problem?
- Would a new user understand this?
- Can I remove instead of add?
Red flags:
- Adding abstraction for one use case
- "Might need this later" reasoning
- Recommending deletion based on filename alone
Output Format
## Priority 1: High Impact, Low Effort
- [ ] [Change] - [Why] - [Expected impact] - [How to validate]

## Priority 2: Medium Impact
...

## Priority 3: Consider Later
...
Each recommendation includes a validation approach.
References
For detailed guidance:
- Official Anthropic skill-creator guide (authoritative source for skill structure, frontmatter, progressive disclosure): references/skill-design-standards.md
- Quality criteria checklist and anti-patterns (includes a summary of the official standards): references/quality-standards.md
- Metrics, user testing, and validation templates: references/measuring-improvements.md
- Template and example patterns for consistent output: references/output-patterns.md
- Sequential and conditional workflow patterns: references/workflows.md
Use scripts/analyze-metrics.sh for consistent metric collection.
Consulting Documentation
Verify best practices via the claude-code-guide subagent before claiming something is "wrong."
Applying Changes
When implementing improvements:
- Before any changes: Create TodoWrite items for each improvement
- Apply changes: Use Edit tool, one logical change at a time
- MANDATORY verification: Use the core:verification skill before claiming complete
- Evidence required: Run validation commands, report actual output
Never claim "improved" or "fixed" without verification evidence.