some_claude_skills / skill-creator
Use this skill when creating a new Claude skill from scratch, editing or improving an existing skill, or measuring skill performance with evals and benchmarks. Invoke whenever the user says things like "make a skill for X", "turn this workflow into a skill", "test my skill", "improve my skill", "run evals", "benchmark this", or "optimize my skill description". Also use proactively when the conversation has produced a repeatable workflow that would benefit from being captured as a skill. Covers the full lifecycle: capture intent → draft SKILL.md → run evals → review with user → iterate → optimize description → package. NOT for general coding help, debugging runtime errors, building MCP servers, writing Claude hooks, or creating plugins — use domain-specific skills for those.
git clone https://github.com/curiositech/some_claude_skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/curiositech/some_claude_skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/corpus/meta-skills-experiment/cross-improved/skill-creator" ~/.claude/skills/curiositech-some-claude-skills-skill-creator-099adc && rm -rf "$T"
corpus/meta-skills-experiment/cross-improved/skill-creator/SKILL.md
Skill Creator
A skill for creating new skills and iteratively improving them.
At a high level, the process of creating a skill goes like this:
- Decide what you want the skill to do and roughly how it should do it
- Write a draft of the skill
- Create a few test prompts and run claude-with-access-to-the-skill on them
- Help the user evaluate the results both qualitatively and quantitatively
- While the runs happen in the background, draft some quantitative evals if there aren't any (if some already exist, use them as-is or modify them if something needs to change). Then explain them to the user
- Use the eval-viewer/generate_review.py script to show the user the results for them to look at, and also let them look at the quantitative metrics
- Rewrite the skill based on feedback from the user's evaluation of the results (and also if there are any glaring flaws that become apparent from the quantitative benchmarks)
- Repeat until you're satisfied
- Expand the test set and try again at larger scale
Your job when using this skill is to figure out where the user is in this process and then jump in and help them progress through these stages. So for instance, maybe they're like "I want to make a skill for X". You can help narrow down what they mean, write a draft, write the test cases, figure out how they want to evaluate, run all the prompts, and repeat.
On the other hand, maybe they already have a draft of the skill. In this case you can go straight to the eval/iterate part of the loop.
Of course, you should always be flexible and if the user is like "I don't need to run a bunch of evaluations, just vibe with me", you can do that instead.
Then after the skill is done (but again, the order is flexible), you can also run the skill description improver, which we have a whole separate script for, to optimize the triggering of the skill.
Cool? Cool.
    flowchart LR
      A[Understand intent] --> B[Draft SKILL.md]
      B --> C[Run test cases]
      C --> D[Grade & review with user]
      D --> E{Satisfied?}
      E -->|No| F[Improve skill]
      F --> C
      E -->|Yes| G[Optimize description]
      G --> H[Package & ship]
Communicating with the user
The skill creator is liable to be used by people across a wide range of familiarity with coding jargon. If you haven't heard (and how could you, it's only very recently that it started), there's a trend now where the power of Claude is inspiring plumbers to open up their terminals, parents and grandparents to google "how to install npm". On the other hand, the bulk of users are probably fairly computer-literate.
So please pay attention to context cues to understand how to phrase your communication! In the default case, just to give you some idea:
- "evaluation" and "benchmark" are borderline, but OK
- for "JSON" and "assertion" you want to see serious cues from the user that they know what those things are before using them without explaining them
If you're unsure whether the user will get a term, it's fine to clarify it briefly with a short definition.
Creating a skill
Capture Intent
Start by understanding the user's intent. The current conversation might already contain a workflow the user wants to capture (e.g., they say "turn this into a skill"). If so, extract answers from the conversation history first — the tools used, the sequence of steps, corrections the user made, input/output formats observed. Ask the user to fill any gaps and to confirm before you proceed to the next step. Questions to answer:
- What should this skill enable Claude to do?
- When should this skill trigger? (what user phrases/contexts)
- What's the expected output format?
- Should we set up test cases to verify the skill works? Skills with objectively verifiable outputs (file transforms, data extraction, code generation, fixed workflow steps) benefit from test cases. Skills with subjective outputs (writing style, art) often don't need them. Suggest the appropriate default based on the skill type, but let the user decide.
Interview and Research
Proactively ask questions about edge cases, input/output formats, example files, success criteria, and dependencies. Wait to write test prompts until you've got this part ironed out.
Check available MCPs - if useful for research (searching docs, finding similar skills, looking up best practices), research in parallel via subagents if available, otherwise inline. Come prepared with context to reduce burden on the user.
Write the SKILL.md
Based on the user interview, fill in these components:
- name: Skill identifier
- description: When to trigger, what it does. This is the primary triggering mechanism - include both what the skill does AND specific contexts for when to use it. All "when to use" info goes here, not in the body. Note: currently Claude has a tendency to "undertrigger" skills -- to not use them when they'd be useful. To combat this, please make the skill descriptions a little bit "pushy". So for instance, instead of "How to build a simple fast dashboard to display internal Anthropic data.", you might write "How to build a simple fast dashboard to display internal Anthropic data. Make sure to use this skill whenever the user mentions dashboards, data visualization, internal metrics, or wants to display any kind of company data, even if they don't explicitly ask for a 'dashboard.'"
- compatibility: Required tools, dependencies (optional, rarely needed)
- the rest of the skill :)
Skill Writing Guide
Anatomy of a Skill
    skill-name/
    ├── SKILL.md (required)
    │   ├── YAML frontmatter (name, description required)
    │   └── Markdown instructions
    └── Bundled Resources (optional)
        ├── scripts/     - Executable code for deterministic/repetitive tasks
        ├── references/  - Docs loaded into context as needed
        └── assets/      - Files used in output (templates, icons, fonts)
Progressive Disclosure
Skills use a three-level loading system:
- Metadata (name + description) - Always in context (~100 words)
- SKILL.md body - In context whenever skill triggers (<500 lines ideal)
- Bundled resources - As needed (unlimited, scripts can execute without loading)
These limits are approximate; feel free to go longer if needed.
Key patterns:
- Keep SKILL.md under 500 lines; if you're approaching this limit, add an additional layer of hierarchy along with clear pointers about where the model using the skill should go next to follow up.
- Reference files clearly from SKILL.md with guidance on when to read them
- For large reference files (>300 lines), include a table of contents
Domain organization: When a skill supports multiple domains/frameworks, organize by variant:
    cloud-deploy/
    ├── SKILL.md (workflow + selection)
    └── references/
        ├── aws.md
        ├── gcp.md
        └── azure.md
Claude reads only the relevant reference file.
Principle of Lack of Surprise
This goes without saying, but skills must not contain malware, exploit code, or any content that could compromise system security. A skill's contents should match what its description says it does; nothing inside should surprise a user who reads that description. Don't go along with requests to create misleading skills or skills designed to facilitate unauthorized access, data exfiltration, or other malicious activities. Things like a "roleplay as an XYZ" are OK though.
Writing Patterns
Prefer using the imperative form in instructions.
Defining output formats - You can do it like this:
    ## Report structure
    ALWAYS use this exact template:

    # [Title]
    ## Executive summary
    ## Key findings
    ## Recommendations
Examples pattern - It's useful to include examples. You can format them like this (though if the examples themselves contain the words "Input" and "Output", you may want to deviate a little):
    ## Commit message format

    **Example 1:**
    Input: Added user authentication with JWT tokens
    Output: feat(auth): implement JWT-based authentication
Writing Style
Try to explain to the model why things are important in lieu of heavy-handed musty MUSTs. Use theory of mind and try to make the skill general and not super-narrow to specific examples. Start by writing a draft and then look at it with fresh eyes and improve it.
Test Cases
After writing the skill draft, come up with 2-3 realistic test prompts — the kind of thing a real user would actually say. Share them with the user: [you don't have to use this exact language] "Here are a few test cases I'd like to try. Do these look right, or do you want to add more?" Then run them.
Save test cases to evals/evals.json. Don't write expectations yet — just the prompts. You'll draft expectations (verifiable assertions about what the output should contain) in the next step while the runs are in progress.
{ "skill_name": "example-skill", "evals": [ { "id": 1, "prompt": "User's task prompt", "expected_output": "Description of expected result", "files": [] } ] }
See references/schemas.md for the full schema (including the expectations field, which you'll add later).
Running and evaluating test cases
This section is one continuous sequence — don't stop partway through. Do NOT use /skill-test or any other testing skill.
Put results in <skill-name>-workspace/ as a sibling to the skill directory. Within the workspace, organize results by iteration, then by eval name, then by configuration, then by run. Don't create all of this upfront — just create directories as you go.
    <skill-name>-workspace/
    └── iteration-1/
        ├── eval-<name>/              ← one per test case (descriptive name)
        │   ├── eval_metadata.json    ← eval ID, prompt, assertions
        │   ├── with_skill/
        │   │   └── run-1/
        │   │       ├── outputs/      ← files the subagent produced
        │   │       │   └── metrics.json
        │   │       ├── timing.json   ← capture from task notification
        │   │       └── grading.json  ← written by grader agent
        │   └── without_skill/        ← or old_skill/ when improving
        │       └── run-1/
        │           ├── outputs/
        │           ├── timing.json
        │           └── grading.json
        ├── benchmark.json            ← written by aggregate_benchmark.py
        └── benchmark.md
Step 1: Spawn all runs (with-skill AND baseline) in the same turn
For each test case, spawn two subagents in the same turn — one with the skill, one without. This is important: don't spawn the with-skill runs first and then come back for baselines later. Launch everything at once so it all finishes around the same time.
With-skill run:
    Execute this task:
    - Skill path: <path-to-skill>
    - Task: <eval prompt>
    - Input files: <eval files if any, or "none">
    - Save outputs to: <workspace>/iteration-<N>/eval-<name>/with_skill/run-1/outputs/
    - Outputs to save: <what the user cares about — e.g., "the .docx file", "the final CSV">
Baseline run (same prompt, but the baseline depends on context):
- Creating a new skill: no skill at all. Same prompt, no skill path, save to without_skill/run-1/outputs/.
- Improving an existing skill: the old version. Before editing, snapshot the skill (cp -r <skill-path> <workspace>/skill-snapshot/), then point the baseline subagent at the snapshot. Save to old_skill/run-1/outputs/.
Write an eval_metadata.json for each test case (expectations can be empty for now). Give each eval a descriptive name based on what it's testing — not just "eval-0". Use this name for the directory too. If this iteration uses new or modified eval prompts, create these files for each new eval directory — don't assume they carry over from previous iterations.
{ "eval_id": 0, "eval_name": "descriptive-name-here", "prompt": "The user's task prompt", "expectations": [] }
Step 2: While runs are in progress, draft expectations
Don't just wait for the runs to finish — you can use this time productively. Draft quantitative expectations for each test case and explain them to the user. If expectations already exist in evals/evals.json, review them and explain what they check.
Good expectations are objectively verifiable and have descriptive names — they should read clearly in the benchmark viewer so someone glancing at the results immediately understands what each one checks. Subjective skills (writing style, design quality) are better evaluated qualitatively — don't force expectations onto things that need human judgment.
Update the eval_metadata.json files and evals/evals.json with the expectations once drafted. Also explain to the user what they'll see in the viewer — both the qualitative outputs and the quantitative benchmark.
Step 3: As runs complete, capture timing data
When each subagent task completes, you receive a notification containing total_tokens and duration_ms. Save this data immediately to timing.json in the run directory:
{ "total_tokens": 84852, "duration_ms": 23332, "total_duration_seconds": 23.3 }
This is the only opportunity to capture this data — it comes through the task notification and isn't persisted elsewhere. Process each notification as it arrives rather than trying to batch them.
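For instance, a minimal sketch of persisting those values as soon as a notification arrives (the run directory here is illustrative):

```python
import json
from pathlib import Path

def save_timing(run_dir: str, total_tokens: int, duration_ms: int) -> None:
    """Write timing.json from the task-notification metrics before they are lost."""
    timing = {
        "total_tokens": total_tokens,
        "duration_ms": duration_ms,
        # Convenience field derived from duration_ms
        "total_duration_seconds": round(duration_ms / 1000, 1),
    }
    Path(run_dir, "timing.json").write_text(json.dumps(timing, indent=2))

# Example values from a completed subagent notification (illustrative path)
save_timing("iteration-1/eval-pdf-form/with_skill/run-1", 84852, 23332)
```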
Step 4: Grade, aggregate, and launch the viewer
Once all runs are done:
- Grade each run — spawn a grader subagent (or grade inline) that reads agents/grader.md and evaluates each expectation against the outputs. Save results to grading.json in each run directory. The grading.json expectations array must use the fields text, passed, and evidence (not name/met/details or other variants) — the viewer depends on these exact field names. For expectations that can be checked programmatically, write and run a script rather than eyeballing it — scripts are faster, more reliable, and can be reused across iterations (a sketch of such a check follows after this list). Use a prompt like this for each grader subagent:

      You are a grader agent. Read agents/grader.md from the skill-creator directory to load your full instructions, then grade this eval run:
      - Expectations: ["The output file is a valid PDF", "All form fields are filled"]
      - Transcript path: <workspace>/iteration-N/eval-<name>/with_skill/run-1/transcript.md
      - Outputs dir: <workspace>/iteration-N/eval-<name>/with_skill/run-1/outputs/
      Save grading.json to <workspace>/iteration-N/eval-<name>/with_skill/run-1/grading.json.

  Spawn one grader per run (with_skill and without_skill/old_skill each get their own grader).
- Aggregate into benchmark — run the aggregation script from the skill-creator directory:

      python -m scripts.aggregate_benchmark <workspace>/iteration-N --skill-name <name>

  This produces benchmark.json and benchmark.md with pass_rate, time, and tokens for each configuration, with mean ± stddev and the delta. If generating benchmark.json manually, see references/schemas.md for the exact schema the viewer expects. Put each with_skill version before its baseline counterpart.
- Do an analyst pass — read the benchmark data and surface patterns the aggregate stats might hide. See agents/analyzer.md (the "Analyzing Benchmark Results" section) for what to look for — things like assertions that always pass regardless of skill (non-discriminating), high-variance evals (possibly flaky), and time/token tradeoffs.
- Launch the viewer with both qualitative outputs and quantitative data:

      nohup python <skill-creator-path>/eval-viewer/generate_review.py \
        <workspace>/iteration-N \
        --skill-name "my-skill" \
        --benchmark <workspace>/iteration-N/benchmark.json \
        > /dev/null 2>&1 &
      VIEWER_PID=$!

  For iteration 2+, also pass --previous-workspace <workspace>/iteration-<N-1>.
  Cowork / headless environments: If webbrowser.open() is not available or the environment has no display, use --static <output_path> to write a standalone HTML file instead of starting a server. Feedback will be downloaded as a feedback.json file when the user clicks "Submit All Reviews". After download, copy feedback.json into the workspace directory for the next iteration to pick up.
Note: please use generate_review.py to create the viewer; there's no need to write custom HTML.
- Tell the user something like: "I've opened the results in your browser. There are two tabs — 'Outputs' lets you click through each test case and leave feedback, 'Benchmark' shows the quantitative comparison. When you're done, come back here and let me know."
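For the programmatic checks mentioned in the grading step above, a minimal sketch might look like this (the run path, output filename, and expectation texts are hypothetical; only the text/passed/evidence field names inside the expectations array are fixed, and the top-level summary fields are illustrative; see references/schemas.md for the exact schema):

```python
import json
from pathlib import Path

# Hypothetical run directory and output file; adjust to the eval being graded.
run_dir = Path("iteration-1/eval-report/with_skill/run-1")
outputs_dir = run_dir / "outputs"
report_path = outputs_dir / "report.md"
report = report_path.read_text() if report_path.exists() else ""

def expectation(text: str, passed: bool, evidence: str) -> dict:
    # The viewer requires exactly these field names: text, passed, evidence.
    return {"text": text, "passed": bool(passed), "evidence": evidence}

expectations = [
    expectation(
        "The report contains an Executive summary section",
        "## Executive summary" in report,
        f"searched {report_path} for the heading",
    ),
    expectation(
        "At least one output file was produced",
        outputs_dir.exists() and any(outputs_dir.iterdir()),
        f"listed {outputs_dir}",
    ),
]

grading = {
    "expectations": expectations,
    # Top-level summary fields are illustrative; see references/schemas.md.
    "passed": sum(e["passed"] for e in expectations),
    "total": len(expectations),
}
(run_dir / "grading.json").write_text(json.dumps(grading, indent=2))
```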
What the user sees in the viewer
The "Outputs" tab shows one test case at a time:
- Prompt: the task that was given
- Output: the files the skill produced, rendered inline where possible
- Previous Output (iteration 2+): collapsed section showing last iteration's output
- Formal Grades (if grading was run): collapsed section showing assertion pass/fail
- Feedback: a textbox that auto-saves as they type
- Previous Feedback (iteration 2+): their comments from last time, shown below the textbox
The "Benchmark" tab shows the stats summary: pass rates, timing, and token usage for each configuration, with per-eval breakdowns and analyst observations.
Navigation is via prev/next buttons or arrow keys. When done, they click "Submit All Reviews" which saves all feedback to feedback.json.
Step 5: Read the feedback
When the user tells you they're done, read feedback.json:
{ "reviews": [ {"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."}, {"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."}, {"run_id": "eval-2-with_skill", "feedback": "perfect, love this", "timestamp": "..."} ], "status": "complete" }
Empty feedback means the user thought it was fine. Focus your improvements on the test cases where the user had specific complaints.
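A minimal sketch of pulling out the runs that need attention (field names as in the example above):

```python
import json

with open("feedback.json") as f:
    feedback = json.load(f)

# Runs with non-empty feedback are the ones to focus improvements on.
needs_work = [r for r in feedback["reviews"] if r["feedback"].strip()]
for review in needs_work:
    print(f"{review['run_id']}: {review['feedback']}")

print(f"{len(needs_work)} of {len(feedback['reviews'])} runs have specific complaints "
      f"(status: {feedback['status']})")
```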
Kill the viewer server when you're done with it:
kill $VIEWER_PID 2>/dev/null
Improving the skill
This is the heart of the loop. You've run the test cases, the user has reviewed the results, and now you need to make the skill better based on their feedback.
How to think about improvements
- Generalize from the feedback. The big-picture goal here is to create skills that can be used a million times (maybe literally, maybe even more, who knows) across many different prompts. You and the user are iterating on only a few examples over and over because it helps you move faster: the user knows these examples inside and out and can quickly assess new outputs. But if the skill you're codeveloping works only for those examples, it's useless. So if there's some stubborn issue, rather than piling on fiddly, overfitted changes or oppressively constrictive MUSTs, try branching out: use different metaphors or recommend different patterns of working. It's relatively cheap to try, and maybe you'll land on something great.
- Keep the prompt lean. Remove things that aren't pulling their weight. Make sure to read the transcripts, not just the final outputs — if it looks like the skill is making the model waste time on unproductive work, try removing the parts of the skill that cause it and see what happens.
- Explain the why. Try hard to explain the why behind everything you're asking the model to do. Today's LLMs are smart. They have good theory of mind and, when given a good harness, can go beyond rote instructions and really make things happen. Even if the feedback from the user is terse or frustrated, try to actually understand the task, what the user actually wrote, and why they wrote it, and then transmit this understanding into the instructions. If you find yourself writing ALWAYS or NEVER in all caps, or using super rigid structures, that's a yellow flag — if possible, reframe and explain the reasoning so that the model understands why the thing you're asking for is important. That's a more humane, powerful, and effective approach.
- Look for repeated work across test cases. Read the transcripts from the test runs and notice if the subagents all independently wrote similar helper scripts or took the same multi-step approach to something. If all 3 test cases resulted in the subagent writing a create_docx.py or a build_chart.py, that's a strong signal the skill should bundle that script. Write it once, put it in scripts/, and tell the skill to use it. This saves every future invocation from reinventing the wheel. A hypothetical sketch follows below.
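For example, if every run hand-rolled its own .docx-building code, the bundled helper might look like this (hypothetical script; assumes the python-docx package is available):

```python
# scripts/create_docx.py: hypothetical bundled helper
import sys
from docx import Document  # requires the python-docx package

def create_docx(title: str, paragraphs: list[str], out_path: str) -> None:
    """Build a simple .docx so each skill invocation doesn't reinvent this step."""
    doc = Document()
    doc.add_heading(title, level=1)
    for text in paragraphs:
        doc.add_paragraph(text)
    doc.save(out_path)

if __name__ == "__main__":
    # Usage: python scripts/create_docx.py "Report title" out.docx < paragraphs.txt
    title, out_path = sys.argv[1], sys.argv[2]
    create_docx(title, [line.strip() for line in sys.stdin if line.strip()], out_path)
```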
This task is pretty important (we are trying to create billions a year in economic value here!) and your thinking time is not the blocker; take your time and really mull things over. I'd suggest writing a draft revision and then looking at it anew and making improvements. Really do your best to get into the head of the user and understand what they want and need.
The iteration loop
After improving the skill:
- Apply your improvements to the skill
- Rerun all test cases into a new iteration-<N+1>/ directory, including baseline runs. If you're creating a new skill, the baseline is always without_skill (no skill) — that stays the same across iterations. If you're improving an existing skill, use your judgment on what makes sense as the baseline: the original version the user came in with, or the previous iteration.
- Launch the reviewer with --previous-workspace pointing at the previous iteration
- Wait for the user to review and tell you they're done
- Read the new feedback, improve again, repeat
Keep going until:
- The user says they're happy
- The feedback is all empty (everything looks good)
- You're not making meaningful progress
Advanced: Blind comparison
For situations where you want a more rigorous comparison between two versions of a skill, read agents/comparator.md and agents/analyzer.md for the details. The basic idea: give two outputs to an independent agent without telling it which is which, and let it judge quality.
This is optional, requires subagents, and most users won't need it. The human review loop is usually sufficient.
Description Optimization
The description field in SKILL.md frontmatter is the primary mechanism that determines whether Claude invokes a skill. After creating or improving a skill, offer to optimize the description for better triggering accuracy.
Read references/description-optimization.md for detailed guidance on writing high-quality eval queries and how the triggering mechanism works. The high-level steps are:
Step 1: Generate trigger eval queries
Create 20 eval queries — a mix of should-trigger (8-10) and should-not-trigger (8-10). Focus on realistic, edge-case queries with enough backstory to be believable. The most valuable negative cases are near-misses — adjacent domains or ambiguous phrasing where a naive match would trigger but shouldn't. See references/description-optimization.md for query writing guidance and examples.
Save as JSON:
    [
      {"query": "the user prompt", "should_trigger": true},
      {"query": "another prompt", "should_trigger": false}
    ]
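A quick sketch for sanity-checking the balance before showing the set to the user (the file path is illustrative; the JSON shape is as above):

```python
import json

with open("trigger_eval.json") as f:  # illustrative path
    eval_set = json.load(f)

positives = [q for q in eval_set if q["should_trigger"]]
negatives = [q for q in eval_set if not q["should_trigger"]]
print(f"{len(positives)} should-trigger, {len(negatives)} should-not-trigger")

# Aim for roughly 8-10 of each; flag very short queries that may need more backstory.
for q in eval_set:
    if len(q["query"].split()) < 8:
        print("Possibly too thin:", q["query"])
```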
Step 2: Review with user
Present the eval set to the user using assets/eval_review.html: replace __EVAL_DATA_PLACEHOLDER__, __SKILL_NAME_PLACEHOLDER__, and __SKILL_DESCRIPTION_PLACEHOLDER__, write to /tmp/eval_review_<skill-name>.html, and open it. The user edits and exports eval_set.json.
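A minimal sketch of that substitution (the skill name, description, and paths are illustrative; the placeholder names are the ones listed above):

```python
import json
import webbrowser
from pathlib import Path

skill_name = "my-skill"                                   # illustrative
skill_description = "Builds simple internal dashboards."  # illustrative
skill_creator_path = Path("~/.claude/skills/skill-creator").expanduser()  # adjust to the real location

with open("trigger_eval.json") as f:                      # the eval set from Step 1
    eval_set = json.load(f)

html = (skill_creator_path / "assets" / "eval_review.html").read_text()
html = (html
        .replace("__EVAL_DATA_PLACEHOLDER__", json.dumps(eval_set))
        .replace("__SKILL_NAME_PLACEHOLDER__", skill_name)
        .replace("__SKILL_DESCRIPTION_PLACEHOLDER__", skill_description))

out = Path(f"/tmp/eval_review_{skill_name}.html")
out.write_text(html)
webbrowser.open(out.as_uri())  # in headless environments, share the file path instead
```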
Step 3: Run the optimization loop
    python -m scripts.run_loop \
      --eval-set <path-to-trigger-eval.json> \
      --skill-path <path-to-skill> \
      --model <model-id-powering-this-session> \
      --max-iterations 5 \
      --verbose
This splits the eval set 60/40 train/test, evaluates each description 3 times per query for reliability, and selects best_description by test score (not train score) to avoid overfitting.
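To illustrate the selection logic (this is not the script's actual code, just a sketch of the idea):

```python
import random

def split_eval_set(queries, train_frac=0.6, seed=0):
    """Illustrative 60/40 train/test split of the trigger queries."""
    shuffled = queries[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def accuracy(results):
    """results: booleans, one per (query, attempt) pair; 3 attempts per query."""
    return sum(results) / len(results)

# Candidate descriptions are compared on the held-out test split, so a description
# that merely memorizes the training queries doesn't win:
# best_description = max(candidates, key=lambda d: accuracy(test_results[d]))
```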
Step 4: Apply the result
Take best_description from the JSON output and update the skill's SKILL.md frontmatter. Show the user before/after and report the scores.
Package and Present (only if the present_files tool is available)
Check whether you have access to the present_files tool. If you don't, skip this step. If you do:
python -m scripts.package_skill <path/to/skill-folder>
Direct the user to the resulting .skill file path so they can install it.
Claude.ai-specific instructions
In Claude.ai, the core workflow is the same (draft → test → review → improve → repeat), but because Claude.ai doesn't have subagents, some mechanics change. Here's what to adapt:
Running test cases: No subagents means no parallel execution. Read the skill's SKILL.md, then follow its instructions to accomplish the test prompt yourself, one at a time. Skip baseline runs — just use the skill.
Reviewing results: If you can't open a browser, skip the browser reviewer. Present results directly in the conversation; ask for feedback inline.
Benchmarking: Skip quantitative benchmarking — it relies on baseline comparisons which aren't meaningful without subagents.
Description optimization: Requires the claude CLI (claude -p), only available in Claude Code. Skip on Claude.ai.
Blind comparison: Requires subagents. Skip it.
Packaging: The package_skill.py script works anywhere with Python and a filesystem.
Updating an existing skill:
- Preserve the original name. Use the existing directory name and name frontmatter field unchanged.
- Copy to a writeable location before editing. The installed skill path may be read-only. Copy to /tmp/skill-name/, edit there, and package from the copy.
Cowork-Specific Instructions
If you're in Cowork, the main things to know are:
- You have subagents, so the main workflow works. (If severe timeouts occur, run test prompts in series rather than parallel.)
- No browser/display — use --static <output_path> to write a standalone HTML file, then share the link.
- GENERATE THE EVAL VIEWER BEFORE evaluating inputs yourself. Whether in Cowork or Claude Code, always generate the viewer for the human to review before you revise.
- Feedback: the "Submit All Reviews" button downloads feedback.json to the browser's download folder. You may need to request access.
- Description optimization works fine since it uses claude -p, not a browser. Save it until the skill is fully done.
- Updating an existing skill: Follow the update guidance in the Claude.ai section above.
Reference files
Read only the files relevant to your current step.
| File | Consult when |
|---|---|
| agents/grader.md | Spawning a grader subagent to evaluate expectations |
| agents/comparator.md | Running blind A/B comparison between two outputs |
| agents/analyzer.md | Analyzing benchmark results or why one version beat another |
| references/schemas.md | Looking up exact JSON structure for evals.json, grading.json, benchmark.json |
| | Drafting expectations — patterns by task type, anti-patterns, scripted grading |
| | Common errors: server startup, claude -p failures, validation, path issues |
| references/description-optimization.md | Writing trigger eval queries; how skill triggering works |
| eval-viewer/generate_review.py | Generating review HTML (serve locally or write static with --static) |
Note: This skill uses agents/ for subagent prompt files and eval-viewer/ for the review UI, in addition to the standard scripts/, references/, and assets/ directories.