# claude-skill-registry / advanced-skill-creator

Clone the whole registry, or copy just this skill into `~/.claude/skills`:

```shell
git clone https://github.com/majiayu000/claude-skill-registry
```

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/advanced-skill-creator" ~/.claude/skills/majiayu000-claude-skill-registry-advanced-skill-creator && rm -rf "$T"
```
`skills/data/advanced-skill-creator/SKILL.md`

# Advanced Skill Creator

Generate domain-specific skills and workflows using advanced reasoning techniques.
## Purpose
This meta-skill orchestrates skill generation by:
- Analyzing task requirements
- Selecting appropriate patterns from the library
- Instantiating templates with domain parameters
- Generating executable SKILL.md files or portable prompts
## Core Libraries

This skill has access to (load on demand):

- `@core/technique-taxonomy.yaml` — 200+ techniques across 6 cognitive categories
- `@core/artifact-contracts.yaml` — standardized I/O schemas
- `@core/scoring-rubrics.yaml` — pluggable evaluation algorithms
- `@core/skill-patterns.yaml` — parameterized workflow patterns
- `@core/checkpoint-patterns.yaml` — AskUserQuestion checkpoint patterns
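The document only names these libraries without showing their contents. As an illustration, one entry in `core/skill-patterns.yaml` might take a shape like the following — the field names are hypothetical; only the pattern name, use case, and contract name appear elsewhere in this document:

```yaml
# Hypothetical excerpt of core/skill-patterns.yaml (illustrative schema only)
patterns:
  MOE-EVALUATE:
    use_when: "2-8 options, need expert perspectives"
    parameters: [domain, panel_size, goal]
    output_contract: RANKED-SOLUTION-LIST
```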
## Checkpoints

This skill uses interactive checkpoints (see `references/checkpoints.yaml`) to resolve ambiguity:
- task_type_classification — When request matches multiple patterns
- domain_selection — When domain not specified
- stakeholder_selection — When multiple stakeholder types could apply
- preset_conflict — When user params conflict with presets
- anti_pattern_composition — When composition matches known anti-pattern
- output_mode — When output format not specified
- plugin_routing — When skill could belong to multiple plugins
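For illustration, a single checkpoint definition in `references/checkpoints.yaml` could look like this — the field names are assumptions; only the checkpoint name, trigger condition, and example question come from this document:

```yaml
# Hypothetical shape of one entry in references/checkpoints.yaml
task_type_classification:
  trigger: "request matches multiple patterns (confidence < 0.7)"
  tool: AskUserQuestion
  present: "top 2-3 matching patterns with descriptions"
  example: "Should this skill evaluate existing options or generate new ones?"
```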
## Available Templates

The template names were lost in this listing; the remaining columns are:

| Use Case | Key Parameters |
|---|---|
| Multi-perspective evaluation | domain, panel_size, goal |
| Pairwise comparison | items, criteria, algorithm |
| Red/blue team analysis | proposition, attack_vectors |
| Multi-LLM research design | domain, models, depth |
| Completeness assessment | document, standards |
| Job story generation | features, persona |
| Journey documentation | process, touchpoints |
| Identify features | context, granularity |
| Generate docs | doc_type, audience |
## Workflow

### Phase 1: Requirements Analysis

When asked to create a skill or workflow:

1. Identify the core task type:
   - Evaluation/Comparison → use MOE-EVALUATE or TOURNAMENT patterns
   - Generation/Ideation → use MOE-GENERATE pattern
   - Validation/Testing → use ADVERSARIAL-VALIDATE pattern
   - Research/Discovery → use RESEARCH-SYNTHESIZE pattern
   - Documentation → use DIATAXIS pattern
   - Extraction → use ELICIT-EXTRACT pattern

2. CHECKPOINT: task_type_classification
   - If the request matches multiple patterns (confidence < 0.7): AskUserQuestion
   - Present the top 2-3 matching patterns with descriptions
   - Example: "Should this skill evaluate existing options or generate new ones?"

3. Determine the parameters needed:
   - Domain/context
   - Stakeholders
   - Quality dimensions
   - Output format

4. CHECKPOINT: domain_selection
   - If the domain is not specified and not inferable: AskUserQuestion
   - Options: Architecture, Product, Strategy, Research, Custom
   - Example: "What domain is this skill for?"

5. Check for existing skills that might compose:
   - Can existing skills be chained?
   - What is missing that requires a new skill?
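The classification-plus-checkpoint logic of steps 1-2 can be sketched in Python. The function name, score format, and return shape are illustrative and not part of the skill's actual interface; only the 0.7 threshold and the "present the top 2-3 options" behavior come from the text above:

```python
def classify_task(scores: dict[str, float], threshold: float = 0.7) -> dict:
    """Pick the best-scoring pattern, or fall back to asking the user.

    `scores` maps pattern names to hypothetical confidence scores.
    """
    best, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        # Low confidence: surface the top 2-3 candidates via AskUserQuestion
        top = sorted(scores, key=scores.get, reverse=True)[:3]
        return {"action": "ask_user", "options": top}
    return {"action": "proceed", "pattern": best}
```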
### Phase 2: Pattern Selection

Load `@core/skill-patterns.yaml` and select a pattern based on:
| Task Type | Pattern | When to Use |
|---|---|---|
| Compare options | MOE-EVALUATE | 2-8 options, need expert perspectives |
| Rank many items | TOURNAMENT-RANK | >5 items, need statistical ranking |
| Generate ideas | MOE-GENERATE | Need diverse, expert-informed ideas |
| Stress-test decision | ADVERSARIAL-VALIDATE | High-stakes, need to find weaknesses |
| Synthesize research | RESEARCH-SYNTHESIZE | Multiple sources to reconcile |
| Check completeness | GAP-AUDIT | Document against standards |
| Extract requirements | ELICIT-EXTRACT | Unstructured → structured |
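The table above is effectively a lookup; a minimal sketch follows. The function name and the checkpoint fallback string are hypothetical — only the task-type/pattern pairs come from the table:

```python
# Task type → pattern mapping, mirroring the Phase 2 table above
PATTERN_BY_TASK = {
    "compare options": "MOE-EVALUATE",
    "rank many items": "TOURNAMENT-RANK",
    "generate ideas": "MOE-GENERATE",
    "stress-test decision": "ADVERSARIAL-VALIDATE",
    "synthesize research": "RESEARCH-SYNTHESIZE",
    "check completeness": "GAP-AUDIT",
    "extract requirements": "ELICIT-EXTRACT",
}

def select_pattern(task_type: str) -> str:
    """Return the matching pattern, or route to the classification checkpoint."""
    return PATTERN_BY_TASK.get(task_type.lower(), "CHECKPOINT:task_type_classification")
```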
### Phase 3: Template Instantiation

Based on the selected pattern, configure the template:

```yaml
skill_instantiation:
  pattern: "[selected pattern]"
  domain: "[user's domain]"
  parameters:       # Pattern-specific parameters
  techniques:       # Selected from technique-taxonomy
  output_contract:  # From artifact-contracts
```
1. CHECKPOINT: preset_conflict
   - If user params conflict with the domain preset: AskUserQuestion
   - Example: "You specified 6 experts, but the 'product' preset uses 4. Which should we use?"

2. CHECKPOINT: anti_pattern_composition
   - If the composition matches a known anti-pattern: AskUserQuestion
   - Warn about the issue and offer alternatives
   - Example: "Chaining ADVERSARIAL-VALIDATE → ADVERSARIAL-VALIDATE creates infinite loops"
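The anti_pattern_composition check can be approximated as a scan over a proposed skill chain. This sketch encodes only the one anti-pattern named above (self-chained ADVERSARIAL-VALIDATE) and is not the skill's real detection logic:

```python
def find_anti_patterns(chain: list[str]) -> list[str]:
    """Flag consecutive duplicate adversarial stages, which risk critique loops."""
    issues = []
    for a, b in zip(chain, chain[1:]):
        if a == b == "ADVERSARIAL-VALIDATE":
            issues.append(
                "ADVERSARIAL-VALIDATE chained with itself: risk of infinite loop"
            )
    return issues
```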
### Phase 4: Output Generation

- CHECKPOINT: output_mode
  - If the output format is not specified: AskUserQuestion
  - Options: Executable SKILL.md, Portable prompt, Both
  - Example: "Should this be an executable skill or a portable prompt?"
Generate one of:

**A. Executable Skill (SKILL.md)**

- Complete frontmatter with triggers
- Full workflow instructions
- Quality gates
- Output templates

**B. Portable Prompt**

- Standalone prompt for other contexts
- Self-contained instructions
- No skill dependencies
→ Proceed to Phase 5 to save the skill in the correct location.
### Phase 5: Skill Placement

Generated skills MUST be placed in the correct plugin directory for marketplace visibility:

1. Determine the target plugin based on skill type:

   | Skill Type | Target Plugin |
   |---|---|
   | Evaluation/Comparison | evaluation-tools |
   | Research/Discovery | research-tools |
   | Documentation | documentation-tools |
   | Prompt optimization | prompt-tools |
   | Skill generation/Meta | meta-tools |
2. CHECKPOINT: plugin_routing
   - If the skill matches criteria for 2+ plugins: AskUserQuestion
   - Example: "This skill audits documentation. Should it go in evaluation-tools or documentation-tools?"
3. Create the skill directory structure:

   ```
   plugins/<target-plugin>/skills/<skill-name>/
   ├── SKILL.md      ← Main skill definition (required)
   ├── references/   ← Supporting reference docs (optional)
   └── templates/    ← Output templates (optional)
   ```
4. File naming conventions:
   - Skill directory: kebab-case (e.g., `architecture-evaluator`)
   - SKILL.md: required, exact filename
   - References: descriptive kebab-case (e.g., `scoring-criteria.md`)
**Important:** Skills saved outside `plugins/<plugin>/skills/` will NOT appear in the marketplace.
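A scaffolding one-liner following the layout above might look like this; the plugin and skill names are examples only:

```shell
# Example: scaffold a new skill in the marketplace-visible location.
PLUGIN=evaluation-tools
SKILL=architecture-evaluator
mkdir -p "plugins/$PLUGIN/skills/$SKILL/references" \
         "plugins/$PLUGIN/skills/$SKILL/templates"
touch "plugins/$PLUGIN/skills/$SKILL/SKILL.md"
```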
## Output Templates

### Skill Output Template

```markdown
---
name: [skill-name]
description: >
  [Description with PROACTIVELY activate for: (1)..., (2)..., (3)...]
  Triggers: "[trigger phrases]"
---

# [Skill Name]

[One-line purpose]

## When to Use

[Ideal use cases and anti-patterns]

## Workflow

### Step 1: [First Step]
[Instructions]

### Step 2: [Second Step]
[Instructions]

## Parameters

| Parameter | Default | Options | Description |
|-----------|---------|---------|-------------|

## Output Format

[Expected output structure]

## Quality Gates

- [ ] [Gate 1]
- [ ] [Gate 2]

## Examples

[Example invocations and outputs]
```
## Quality Gates

Before completing skill generation:

- [ ] Pattern selection rationale documented
- [ ] All required parameters configured
- [ ] Technique provenance traced (which techniques from the taxonomy)
- [ ] Output contract specified (from artifact-contracts)
- [ ] Triggers are specific and actionable
- [ ] Description includes numbered use cases
- [ ] Target plugin identified and skill saved to `plugins/<plugin>/skills/<skill-name>/`
- [ ] All applicable checkpoints evaluated (ambiguity resolved via AskUserQuestion)
## Examples

### Example 1: Create an evaluation skill

User: Create a skill for evaluating architecture proposals

Advanced Skill Creator:
- → Task type: Evaluation/Comparison
- → Pattern: MOE-EVALUATE
- → Domain: Architecture
- → Generate: `architecture-evaluator` skill with:
  - Technical Authority, Quality Guardian, and Risk Specialist experts
  - Criteria: scalability, maintainability, cost, security
  - Output: RANKED-SOLUTION-LIST contract

### Example 2: Create a research workflow

User: Build a workflow for competitive analysis

Advanced Skill Creator:
- → Task type: Research/Discovery
- → Pattern: RESEARCH-SYNTHESIZE
- → Domain: Competitive intelligence
- → Generate: `competitive-analyzer` skill with:
  - Research brief generation (Claude + Gemini)
  - Consolidation workflow
  - Gap identification