Claude-skill-registry claude-artifact-creator

Creates, improves, and validates Claude Code artifacts (skills, agents, commands, hooks). Use when creating domain expertise skills, specialized task agents, user-invoked commands, extending Claude Code capabilities, improving existing artifacts, reviewing artifact quality, analyzing code for automation opportunities, or consolidating duplicate artifacts.

install

source · Clone the upstream repo:

```shell
git clone https://github.com/majiayu000/claude-skill-registry
```

Claude Code · Install into ~/.claude/skills/:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/claude-artifact-creator" ~/.claude/skills/majiayu000-claude-skill-registry-claude-artifact-creator && rm -rf "$T"
```

manifest: `skills/data/claude-artifact-creator/SKILL.md`
source content

Claude Artifact Creator

Creates, improves, and maintains Claude Code extensions following official best practices.

FIRST: Read Meta-Knowledge

Before creating any artifact, READ `.claude/GUIDELINES.md` for:

  • Decision framework (Skill vs Agent vs Command vs Hook)
  • Tool permissions strategy by role
  • Hook events and configuration
  • Agent/Skill/Command file formats
  • Quality checklists and anti-patterns

This skill provides quick patterns; GUIDELINES.md provides authoritative rules.

When to Use This Skill

  • Creating a new skill for domain expertise or file processing
  • Creating an agent for specialized tasks with context isolation
  • Creating a command for user-invoked shortcuts
  • Improving or refactoring existing artifacts
  • Reviewing artifacts against quality standards
  • Analyzing staged changes for automation opportunities
  • Consolidating duplicate or overlapping artifacts

Core Principle: Concise is Key

The context window is a shared resource. Before adding content, ask:

  • "Does Claude really need this?" - Claude is already smart
  • "Can this be in a reference file?" - Progressive disclosure
  • "Does this justify its token cost?" - Every line has a cost

Core Capabilities

  1. Create - Generate artifacts from templates with proper structure
  2. Improve - Enhance based on official best practices and patterns
  3. Review - Audit against quality checklist and anti-patterns
  4. Consolidate - Merge duplicate artifacts into focused ones

Three-Layer Knowledge Architecture

CRITICAL: Before creating any knowledge artifact, determine the correct layer.

See GUIDELINES.md § Three-Layer Knowledge Architecture for full details.

Quick Decision Flow

Is it a design principle that applies to ANY language?
│
├── YES → Create CONCEPT (knowledge/concepts/)
│         NO CODE, just principles
│
└── NO → Is it specific to a language/framework?
         │
         ├── YES → Create IMPLEMENTATION (knowledge/implementations/{lang}/)
         │         CODE EXAMPLES, links to concept
         │
         └── NO → Create SKILL (skills/)
                  ORCHESTRATION, references both
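The flow above can be condensed into a tiny helper; `choose_layer` is a hypothetical name used only for illustration and is not part of the registry's scripts:

```python
def choose_layer(language_agnostic: bool, language_specific: bool) -> str:
    """Map the two questions in the decision flow to a target location.

    Hypothetical helper for illustration; not part of the registry's scripts.
    """
    if language_agnostic:
        # Design principle that applies to ANY language: concept, no code.
        return "knowledge/concepts/"
    if language_specific:
        # Language/framework-specific: implementation with code examples.
        return "knowledge/implementations/{lang}/"
    # Otherwise it is orchestration that references both layers.
    return "skills/"
```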

Creating Concepts

Location:

knowledge/concepts/{topic}/{concept}.md

Rules:

  • ✅ Framework-independent principles only
  • ✅ Why it matters (rationale)
  • ✅ How to detect violations (criteria, not code)
  • ❌ NO code examples
  • ❌ NO language-specific syntax

Template:

```markdown
---
name: {Concept Name}
category: {topic}
implementations:
  dotnet: ../implementations/dotnet/{file}.md#{anchor}
  react: ../implementations/react/{file}.md#{anchor}
used_by_skills: []
---

# {Concept Name}

> "{Quote or one-liner definition}"

## The Principle

{Framework-independent explanation}

## Why It Matters

{Business/technical rationale}

## How to Detect Violations

- {Detection criteria - NO code}
- {Observable symptoms}

## Related Concepts

- [{Related Concept}](../{related}/concept.md)

## Implementations

| Language | Guide |
|----------|-------|
| C#/.NET | [Link](../../implementations/dotnet/{file}.md) |
| React | [Link](../../implementations/react/{file}.md) |
```

Creating Implementations

Location:

knowledge/implementations/{lang}/{file}.md

Rules:

  • ✅ Links to concept(s) it implements
  • ✅ ❌ Bad / ✅ Good code examples
  • ✅ Framework-specific notes
  • ❌ NO principle definitions (link instead)
  • ❌ NO "why" explanations (concept layer)

Template:

````markdown
---
implements_concepts:
  - concepts/{topic}/{concept}
language: {csharp|typescript|python}
framework: [{dotnet|react|abp}]
---

# {Concept Name} in {Language}

## {Concept} {#anchor}

> **Concept**: [{Concept Name}](../../concepts/{topic}/{concept}.md)

### ❌ Violation

```{lang}
// Code showing anti-pattern
```

### ✅ Correct

```{lang}
// Code showing correct implementation
```

### Framework-Specific Notes

{Any framework-specific considerations}
````


### Updating Skills for Three-Layer

When creating or updating skills, add these front matter fields:

```yaml
---
name: skill-name
applies_concepts:
  - knowledge/concepts/{topic}/{concept}
uses_implementations:
  - knowledge/implementations/{lang}/{file}
---
```

Governance Checklist

Before creating ANY knowledge artifact:

  • Concept exists? If creating an implementation, verify the concept exists first
  • No code in concept? Concepts must be code-free
  • Links bidirectional? Concept → Implementation, Implementation → Concept
  • INDEX updated? Add to `knowledge/concepts/INDEX.md` or `knowledge/implementations/INDEX.md`
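The bidirectional-link check can be sketched as a stem-matching heuristic; matching by file stem is an assumption about link style, and a real checker would parse actual markdown links:

```python
def links_bidirectional(concept_text: str, concept_stem: str,
                        impl_text: str, impl_stem: str) -> bool:
    """Heuristic check that a concept and an implementation reference each other.

    Matches by file stem (e.g. 'srp' for srp.md); looser than parsing links.
    """
    concept_mentions_impl = impl_stem in concept_text
    impl_mentions_concept = concept_stem in impl_text
    return concept_mentions_impl and impl_mentions_concept
```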

Quick Start

Create a skill:

```shell
python scripts/init_skill.py pdf-processor --path .claude/skills --template tool
```

Create an agent:

```shell
python scripts/init_agent.py abp-code-reviewer --path .claude/agents --template reviewer --category reviewers
```

Create a command:

```shell
python scripts/init_command.py run-tests --path .claude/commands --template workflow --category tdd
```

Key Patterns

1. Decision Pattern

User triggers explicitly → COMMAND
Claude auto-detects → SKILL (no isolation) or AGENT (with isolation)
Deterministic on events → HOOK

2. Progressive Disclosure

Level 1: description (~100 tokens) → Trigger matching
Level 2: SKILL.md body (<5k tokens) → When activated
Level 3: references/ → On-demand deep dives

3. Artifact Limits

See GUIDELINES.md § Size Limits for authoritative limits.

Quick reference: Skills <500 lines, Agents <150 lines, references one level deep.
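As a sketch, those quick-reference numbers can be enforced with a few lines of Python; the limits below are copied from this section, and GUIDELINES.md stays authoritative:

```python
# Quick-reference line limits from this section; GUIDELINES.md is authoritative.
LINE_LIMITS = {"skills": 500, "agents": 150}

def over_limit(body: str, kind: str) -> bool:
    """Return True if an artifact body exceeds its line limit (sketch)."""
    line_count = body.count("\n") + 1
    return line_count > LINE_LIMITS[kind]
```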

4. Description Pattern

```yaml
# Good: Third-person with triggers
description: Processes PDF files for text extraction and form filling.
  Use when working with PDFs, extracting text, or filling forms.

# Bad: First-person or vague
description: I can help you with documents
```

YAML Validation Rules

| Field | Requirements |
|-------|--------------|
| `name` | Max 64 chars, lowercase, hyphens only, no reserved words (`anthropic`, `claude`) |
| `description` | Max 1024 chars, third-person voice, 3+ trigger scenarios, no XML tags |
| `tools` | Comma-separated; omitting grants ALL tools (including MCP) |
| `model` | `haiku` (fast), `sonnet` (balanced), `opus` (powerful) |
| `permissionMode` | `default`, `acceptEdits`, `bypassPermissions` |
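A minimal validator for the `name` and `description` rules could look like the sketch below; the exact regex and reserved-word handling are assumptions, and `scripts/validate.py` remains the real tool:

```python
import re

RESERVED_WORDS = ("anthropic", "claude")

def validate_name(name: str) -> list[str]:
    """Check a `name` field against the rules above (sketch)."""
    errors = []
    if len(name) > 64:
        errors.append("name exceeds 64 chars")
    if not re.fullmatch(r"[a-z0-9]+(?:-[a-z0-9]+)*", name):
        errors.append("name must be lowercase with hyphens only")
    if any(word in name.split("-") for word in RESERVED_WORDS):
        errors.append("name contains a reserved word")
    return errors

def validate_description(description: str) -> list[str]:
    """Check a `description` field against the rules above (sketch)."""
    errors = []
    if len(description) > 1024:
        errors.append("description exceeds 1024 chars")
    if re.search(r"<[^>]+>", description):
        errors.append("description must not contain XML tags")
    if description.strip().lower().startswith(("i ", "i'")):
        errors.append("description must be third-person")
    return errors
```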

Decision Flowchart

See GUIDELINES.md § Choosing the Right Tool for the full decision matrix.

Quick reference: Command (user triggers) → Skill (auto, no isolation) → Agent (auto, isolated) → Hook (deterministic events)

Creation Workflow

Step 1: Identify Type

Type Decision Checklist:
- [ ] User invokes with /command? → Command
- [ ] Needs separate context window? → Agent
- [ ] Auto-triggered domain knowledge? → Skill
- [ ] Shell action on tool events? → Hook

Step 2: Gather Requirements

  • Skills: Trigger scenarios (3+), resources needed, primary workflow
  • Agents: Team role, tools needed (least privilege), permission mode
  • Commands: Arguments, phases, expected output

Step 3: Initialize

| Type | Command |
|------|---------|
| Skill | `python scripts/init_skill.py <name> --template <type>` |
| Agent | `python scripts/init_agent.py <name> --template <type> --category <cat>` |
| Command | `python scripts/init_command.py <name> --template <type> --category <cat>` |

Templates:

  • Skills: `default`, `tool`, `workflow`, `domain`, `analysis`, `integration`, `generator`, `pattern`
  • Agents: `architect`, `reviewer`, `developer`, `coordinator`, `specialist`
  • Commands: `review`, `generate`, `debug`, `workflow`, `git`, `refactor`

Step 4: Test

Testing Checklist:
- [ ] Customize all placeholders ([DOMAIN], [TARGET])
- [ ] Test with Haiku - enough guidance?
- [ ] Test with Sonnet - clear and efficient?
- [ ] Test with Opus - not over-explained?
- [ ] Verify triggers activate correctly
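The placeholder item in the checklist above can be automated with a one-line scan; the token names `[DOMAIN]` and `[TARGET]` come from the checklist, and the function name is hypothetical:

```python
import re

# Placeholder tokens named in the testing checklist above.
PLACEHOLDER = re.compile(r"\[(?:DOMAIN|TARGET)\]")

def find_placeholders(artifact_text: str) -> list[str]:
    """Return any template placeholders left uncustomized (sketch)."""
    return PLACEHOLDER.findall(artifact_text)
```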

Built-in Subagents

Claude Code includes built-in agents (cannot be modified):

| Agent | Purpose | Mode |
|-------|---------|------|
| Plan | Research before presenting plan | Read-only, plan mode |
| Explore | Fast codebase search | Read-only (`ls`, `find`, `cat`, `head`, `tail`) |

Note: Subagents cannot spawn other subagents.

Tool Permissions & Hook Events

⚠️ Warning: Omitting `tools` grants ALL tools, including MCP. Always whitelist explicitly.

Full reference: See GUIDELINES.md § Agents for tool permissions by role, and GUIDELINES.md § Hooks for hook events.

Best Practices

  1. Concise over comprehensive - Claude is smart; add only what it doesn't know
  2. Show, don't tell - Examples beat descriptions
  3. Third-person descriptions - Required for system prompt injection
  4. 3+ trigger scenarios - Specific scenarios in description ensure activation
  5. Least privilege tools - Only grant necessary tools
  6. One level deep references - No nested references (causes partial reads)
  7. Test all models - What works for Opus may need more detail for Haiku
  8. Validate before shipping - Run `scripts/validate.py --strict`

Common Pitfalls

| Pitfall | Detection | Fix |
|---------|-----------|-----|
| Vague triggers | Description <100 chars | Add 3+ specific scenarios |
| Abstract only | No code blocks | Add before/after examples |
| Monolithic | >500 lines | Move to references/ |
| Kitchen sink | Lists 5+ domains | Create specialized artifacts |
| Embedded code in agent | Code blocks in agent | Extract to skill |
| First-person description | "I can help" | Use third-person |

See references/anti-patterns.md for comprehensive list.
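Several of these detections are mechanical enough to script; the sketch below is a heuristic, not the registry's `scripts/validate.py`:

```python
def detect_pitfalls(description: str, body_line_count: int) -> list[str]:
    """Flag a few pitfalls from the table above (heuristic sketch)."""
    findings = []
    if len(description) < 100:
        findings.append("Vague triggers: add 3+ specific scenarios")
    if body_line_count > 500:
        findings.append("Monolithic: move detail to references/")
    if description.strip().lower().startswith("i "):
        findings.append("First-person description: use third-person")
    return findings
```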

Agent Refactoring

When agents grow >150 lines, extract embedded content to skills, commands, or docs.

Full guide: See agent-refactoring-guide.md

Agent Optimization Patterns

For comprehensive agent optimization, see agent-optimization-patterns.md.

Key techniques:

| Technique | Purpose |
|-----------|---------|
| Semantic skill categorization | Organize skills by METHODOLOGY, DOMAIN, LENS, OUTPUT |
| Explicit workflow pipeline | Declarative `GATHER → ANALYZE → REPORT` in frontmatter |
| Skill invocation guidance | Phase-to-skill mapping with fallback checks |
| Project-agnostic design | Dynamic context loading from docs/ |
| Output externalization | Dedicated format skill for report templates |

Quick optimization checklist:

- [ ] Skills categorized semantically (not alphabetically)
- [ ] Workflow defined as pipeline with phase-to-skill mapping
- [ ] Fallback checks provided for skill failures
- [ ] No hardcoded project names/paths
- [ ] Output template in dedicated skill
- [ ] Quality self-check categorized by concern
- [ ] Agent size <150 lines

Agent Knowledge Profiles

Agents declare their knowledge profile in YAML front matter to enable validation and discoverability.

See GUIDELINES.md § Artifact Knowledge Rules for full governance.

Required Front Matter Fields

| Field | Purpose | Format |
|-------|---------|--------|
| `understands` | Concepts the agent knows | List of paths relative to `knowledge/concepts/` |
| `applies` | Implementations the agent uses | List of paths relative to `knowledge/implementations/` |

Example

```yaml
---
name: abp-developer
understands:
  - solid/srp
  - solid/dip
  - clean-code/naming
  - clean-architecture/layers
applies:
  - dotnet/solid
  - dotnet/clean-code
---
```

Validation Rules

  1. Path validation: All `understands` paths must exist in `knowledge/concepts/INDEX.md`
  2. Path validation: All `applies` paths must exist in `knowledge/implementations/INDEX.md`
  3. Wildcard support: Use `solid/*` to include all concepts in a category
  4. Role requirements: Agents must meet minimum counts for their category
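Wildcard expansion against an INDEX can be sketched with `fnmatch`, assuming the INDEX yields plain relative paths such as `solid/srp`:

```python
import fnmatch

def expand_wildcards(declared: list[str], index_paths: list[str]) -> list[str]:
    """Expand entries like 'solid/*' against paths listed in an INDEX (sketch)."""
    expanded = []
    for entry in declared:
        if "*" in entry:
            # Keep every indexed path matching the wildcard pattern.
            expanded.extend(p for p in index_paths if fnmatch.fnmatch(p, entry))
        else:
            expanded.append(entry)
    return expanded
```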

Role Requirements

| Role (folder) | `understands` (min) | `applies` (min) |
|---------------|---------------------|-----------------|
| reviewers/ | 3 | 1 |
| engineers/ | 2 | 2 |
| architects/ | 3 | 0 |
| specialists/ | 1 | 0 |
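Checking an agent against its role minimums then reduces to a comparison; the `minimums` mapping would be populated from the role requirements above, and the helper is a sketch only:

```python
def meets_role_minimums(minimums: dict[str, tuple[int, int]], role: str,
                        understands: list[str], applies: list[str]) -> bool:
    """True if the agent's knowledge profile meets its role's minimum counts.

    `minimums` maps a role folder to (min understands, min applies); sketch only.
    """
    min_understands, min_applies = minimums[role]
    return len(understands) >= min_understands and len(applies) >= min_applies
```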

Validation Checklist

When creating or updating agents:

  • All `understands` paths exist in concepts INDEX
  • All `applies` paths exist in implementations INDEX
  • Agent meets minimum requirements for its role category
  • Skills align with knowledge profile (related domains)

Cross-reference: See ARTIFACT-KNOWLEDGE-MATRIX.md for full coverage.

Success Metrics

Track these for artifact quality:

| Metric | Target |
|--------|--------|
| Trigger accuracy | Activates on relevant requests |
| Output consistency | Same quality across similar inputs |
| Model compatibility | Works with Haiku, Sonnet, Opus |
| Line count | Skills <500, Agents <150 |
| Description length | 100-1024 chars with triggers |

Quality Checklist

See GUIDELINES.md § Quality Checklists for authoritative checklists.

Quick validation:

  • Description: third-person, 100-1024 chars, 3+ triggers
  • Name: lowercase, hyphens, max 64 chars
  • Under line limits (per GUIDELINES.md)
  • Tested with Haiku, Sonnet, and Opus

Integration Patterns

Command → Agent → Skill

/add-feature (command)
  └─ Uses backend-architect (agent)
       └─ Applies api-design-principles (skill)

Skill as Knowledge, Command as Action

Rule: "Knowing" = Skill, "Doing" = Command
      "Doing with Knowledge" = Command referencing Skills

References

Primary (READ FIRST):

Skill-Specific:

Project Context:

  • CLAUDE.md
    - Project-specific values and quick references