Creating Subagents

From majiayu000/claude-skill-registry. Manifest: skills/data/creating-subagents/SKILL.md

Install

Clone the upstream repo:

```shell
git clone https://github.com/majiayu000/claude-skill-registry
```

Or install just this skill into ~/.claude/skills/ for Claude Code:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/creating-subagents" ~/.claude/skills/majiayu000-claude-skill-registry-creating-subagents && rm -rf "$T"
```
Subagents are specialized AI instances with focused expertise, isolated context, and scoped tool access.

When to Create

  • Task repeats across projects (code review, security audit)
  • Domain expertise needed (data science, API design)
  • Pipeline step requires isolation (PM → Architect → Implementer)
  • Tool access must be restricted

Skip if: one-off task, no specialization needed.

Quick Reference

| Field | Required | Purpose |
|---|---|---|
| name | Yes | Lowercase, hyphens only |
| description | Yes | When to delegate (triggers) |
| tools | No | Allowlist; omit = inherit all |
| disallowedTools | No | Denylist; removed from inherited set |
| permissionMode | No | default, acceptEdits, plan, etc. |
| skills | No | Preload skill content into context |
| hooks | No | Lifecycle hooks (PreToolUse, Stop) |
| color | No | Background color for UI identification |
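Combining several of the optional fields, a frontmatter sketch might look like the following. The agent name, skill name, and field values are illustrative assumptions; only the field names come from the reference table.

```markdown
---
name: migration-runner
description: Database migration runner. Use proactively when schema files change.
tools: Read, Edit, Bash
disallowedTools: WebFetch
permissionMode: acceptEdits
skills: sql-style-guide
color: green
---

You are a database migration runner. Apply pending migrations, verify them, and report results.
```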

Design Process

1. Define the Problem

Before writing code, answer:

  • What task does this agent solve?
  • What does success look like?
  • What tools are absolutely required?
  • What should the agent NOT do?

2. Write Description First

The description determines when the agent is invoked. Write it before the prompt.

Pattern:

```markdown
[Role]. [Trigger condition]. Use proactively [when].
```

Examples:

```markdown
# ❌ BAD: vague, no trigger
description: Helps with code

# ❌ BAD: describes process, not trigger
description: Reviews code and provides feedback

# ✅ GOOD: clear role + trigger + proactive hint
description: Code quality reviewer. Use proactively after code changes.
```
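Applied to another role, the same pattern might yield something like this (the role and trigger here are illustrative, not from the source):

```markdown
description: Security auditor. Use proactively before merging changes that touch auth or input handling.
```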

3. Minimal Viable Prompt

Start with the smallest prompt that works:

```markdown
---
name: code-reviewer
description: Code quality reviewer. Use proactively after code changes.
tools: Read, Glob, Grep, Bash
color: blue
---

You are a code reviewer. When invoked:

1. Run git diff to see changes
2. Review modified files
3. Report: Critical → Warnings → Suggestions

Include fix examples.
```

4. Test Before Deploy

Run the agent on real tasks. Document:

  • Did it activate when expected?
  • Did it have the right tools?
  • Was output useful and structured?
  • Did it stay in scope?

5. Iterate Based on Failures

When the agent fails, identify the gap:

| Symptom | Fix |
|---|---|
| Doesn't activate | Improve description triggers |
| Wrong scope | Add constraints to prompt |
| Missing context | Add required tools or skills |
| Unstructured output | Define output format in prompt |
| Does too much | Split into focused agents |

Tool Access by Role

| Role | Tools |
|---|---|
| Read-only | Read, Grep, Glob |
| Research | Read, Grep, Glob, WebFetch, WebSearch |
| Code modification | Read, Write, Edit, Bash, Glob, Grep |
| Full access | (omit field — inherits all) |
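As a sketch, a research-role agent could declare exactly the tools listed for that row (the name and description below are illustrative):

```markdown
---
name: docs-researcher
description: API documentation researcher. Use proactively when unfamiliar libraries appear.
tools: Read, Grep, Glob, WebFetch, WebSearch
color: purple
---

You are a documentation researcher. Locate authoritative references, quote relevant passages, and cite sources. Never modify files.
```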

Common Mistakes

| Mistake | Fix |
|---|---|
| Vague description | Specific triggers: "after code changes" |
| Tool inheritance by default | Explicit tools field for security |
| No structured output | Define format in prompt |
| Monolithic mega-agent | Split into single-purpose agents |
| Missing "Use proactively" | Add to description for auto-delegation |
| Prompt before description | Write description first — it's the API |

Creation Checklist

  • Problem clearly defined (what, success criteria, non-goals)
  • Description written first with trigger conditions
  • Tools explicitly listed (not inherited)
  • Prompt is minimal and focused
  • Output format defined
  • Tested on real task
  • Iterated based on failures

Pipeline Pattern

Chain agents for multi-stage workflows:

pm-spec → architect-review → implementer-tester

Each agent:

  • Has single responsibility
  • Updates status on completion
  • Hooks trigger next stage
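One possible shape for the hand-off between stages, assuming the hooks field maps lifecycle events to shell commands; the exact hook schema and the notify-next-stage.sh script are assumptions, not confirmed syntax:

```markdown
---
name: pm-spec
description: Product spec writer. First stage of the feature pipeline. Use proactively for new feature requests.
tools: Read, Write, Glob
hooks:
  Stop: ./scripts/notify-next-stage.sh architect-review
---

You are a product spec writer. Draft the spec, mark the status complete, then stop; the Stop hook hands off to the next stage.
```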