ai-coding-project-boilerplate / skill-optimization

Evaluates and optimizes skill file quality using 8 content patterns and 9 editing principles. Use when creating skills, refining skill content, or auditing skill quality.

Install

Source · Clone the upstream repo:

git clone https://github.com/shinpr/ai-coding-project-boilerplate

Claude Code · Install into ~/.claude/skills/:

T=$(mktemp -d) && git clone --depth=1 https://github.com/shinpr/ai-coding-project-boilerplate "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills-en/skill-optimization" ~/.claude/skills/shinpr-ai-coding-project-boilerplate-skill-optimization && rm -rf "$T"

Manifest: .claude/skills-en/skill-optimization/SKILL.md

Source content

Skill Content Optimization

Core Philosophy

  1. Evidence-Based: Grounded in prompt engineering research, applied to skill authoring
  2. Concrete: Each pattern provides detection criteria and transform methods
  3. Structure-Focused: Optimizes expression and organization; domain knowledge remains unchanged

Content Optimization Patterns

P1: Critical (Must Fix)

Issues that directly reduce LLM execution accuracy when consuming the skill.

BP-001: Negative Instructions → Positive Form

| Detection | Transform |
| --- | --- |
| "don't", "do not", "never", "avoid" in skill instructions | Reframe as positive directive with equivalent constraint. Exception: Negative form is permitted only when ALL 4 conditions are met: (1) violation destroys state in a single step, (2) caller or subsequent steps cannot normally recover, (3) the constraint is operational/procedural, not a quality policy or role boundary, (4) positive rewording would expand or blur the target scope. If any condition is not met, rewrite in positive form. |

Exception boundary examples:

  • Permitted: "Do not modify the command", "Do not add flags", "Do not execute destructive operations"
  • Rewrite in positive form: "Do not invent issues" → "Base every issue on BP patterns or 9 principles", "Do not skip P1 issues" → "Evaluate all P1 issues in every review mode", "Do not give grade A when P1 exists" → "Assign grade A only when P1 count is zero"

Quality policies, role boundaries, scoring criteria, and general work rules always use positive form. Outputs that the caller validates, overwrites, or discards are never irreversible.

Skill example:

  • Before: "Don't use generic variable names"
  • After: "Use descriptive variable names that reflect purpose (e.g., `userId` not `x`)"

Why critical for skills: LLM attention mechanisms focus on negated content. Skill instructions with "don't" increase probability of the forbidden behavior.
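The BP-001 detection rule can be sketched as a simple line scan. This is a minimal illustration, not part of the skill; the function name and the marker list are assumptions taken from the detection cell above:

```python
import re

# Negation markers from the BP-001 detection criteria.
NEGATION_PATTERN = re.compile(r"\b(don't|do not|never|avoid)\b", re.IGNORECASE)

def find_negative_instructions(skill_text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that contain a negation marker."""
    return [
        (n, line)
        for n, line in enumerate(skill_text.splitlines(), start=1)
        if NEGATION_PATTERN.search(line)
    ]
```

Flagged lines are candidates only; applying the four-condition exception still requires judgment.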

BP-002: Vague Instructions → Specific Criteria

| Detection | Transform |
| --- | --- |
| "appropriate", "good", "proper", "best", "should be clear" | Replace with measurable if-then criteria or concrete thresholds. Skill exception: Expressions that the LLM can resolve unambiguously from input context (e.g., "where the user left gaps" when the user's prompt is available for comparison) are not vague; they describe a deterministic operation, not a subjective judgment. |
| Missing output format, scope, or success criteria | Add explicit constraints |

Skill example:

  • Before: "Handle errors appropriately"
  • After: "Error handling criteria: 1. try-catch for external API calls, file I/O, JSON.parse 2. Log: error.name, error.stack, timestamp 3. Re-throw with context if caller needs to handle"

Why critical for skills: Accounts for ~40% of execution variance. Every vague instruction forces LLM to guess.
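BP-002's vague-qualifier detection can likewise be mechanized as a heuristic scan. A sketch, assuming the term list from the detection cell; prefix matching is used so inflected forms like "appropriately" are caught:

```python
import re

# Vague qualifiers from the BP-002 detection criteria.
VAGUE_TERMS = ("appropriate", "good", "proper", "best", "should be clear")
# Prefix match (\w* suffix) so "appropriately", "properly", etc. also hit.
VAGUE_PATTERN = re.compile(
    "|".join(rf"\b{re.escape(t)}\w*" for t in VAGUE_TERMS), re.IGNORECASE
)

def find_vague_instructions(skill_text: str) -> list[int]:
    """Return line numbers whose text contains a vague qualifier."""
    return [
        n for n, line in enumerate(skill_text.splitlines(), start=1)
        if VAGUE_PATTERN.search(line)
    ]
```

Hits still need the skill exception applied by hand: context-resolvable expressions are not vague.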

BP-003: Missing Output Format → Structured Output

| Detection | Transform |
| --- | --- |
| Skill describes what to do but not the expected deliverable format | Add output section with structure, fields, and example |

Skill example:

  • Before: "Analyze the code for issues"
  • After: "Output format: `## Issues Found` with table: | Severity | Location | Description | Suggested Fix |"

Why critical for skills: Structured output constraints reduce hallucination and make skill results consistent.
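As a rough heuristic, BP-003 detection might just check whether a skill declares any output structure at all. The patterns below are assumptions about typical markdown skill files (an "Output" heading or a pipe-table row), not rules from the skill itself:

```python
import re

def has_output_format(skill_text: str) -> bool:
    """Heuristic BP-003 check: does the skill declare any output structure?"""
    heading = re.search(r"(?im)^#+\s*output", skill_text)   # e.g. "## Output format"
    table_row = re.search(r"(?m)^\s*\|.*\|", skill_text)    # e.g. "| Severity | ... |"
    return bool(heading or table_row)
```

A negative result is a prompt to add an output section, not proof the skill is broken.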

P2: High Impact (Should Fix)

Issues that reduce skill effectiveness if left unaddressed.

BP-004: Unstructured Content → Organized Format

| Detection | Transform |
| --- | --- |
| Wall of text without headings | Apply standard section order (see below) |
| Multiple topics mixed in one section | Split into distinct headed sections |
| No tables for reference data | Convert lists of criteria/patterns to tables |

Standard skill section order:

  1. Context/Prerequisites
  2. Core concepts (definitions, patterns)
  3. Process/Methodology (step-by-step)
  4. Output format/Examples
  5. Quality checklist
  6. References

Conditional: Skip restructuring if skill is under 30 lines and covers a single topic.
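The standard section order above lends itself to a mechanical ordering check. A sketch under stated assumptions: headings have already been extracted as strings, and the keyword list is a simplification of the six sections:

```python
# Simplified keywords for the standard skill section order (BP-004).
STANDARD_ORDER = [
    "prerequisites", "core concepts", "process", "output", "checklist", "references",
]

def section_order_violations(headings: list[str]) -> list[str]:
    """Return headings that appear earlier than a standard section already seen."""
    seen = -1
    violations = []
    for heading in headings:
        lowered = heading.lower()
        rank = next(
            (i for i, key in enumerate(STANDARD_ORDER) if key in lowered), None
        )
        if rank is None:
            continue  # heading outside the standard set: ignore
        if rank < seen:
            violations.append(heading)
        seen = max(seen, rank)
    return violations
```

For example, a skill that puts "Output format" before "Prerequisites" would have the latter flagged as out of order.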

BP-005: Missing Context → Explicit Prerequisites

| Detection | Transform |
| --- | --- |
| Skill assumes knowledge not stated | Add Prerequisites section listing required context |
| Domain terms used without definition | Add definitions inline or in a glossary table. Skill exception: Terms within the LLM's baseline knowledge (widely-used technical terminology, standard domain vocabulary) require no definition. Only project-specific terms, internal naming conventions, or domain jargon outside common LLM training data need explicit definition. |
| No "when to use" guidance | Add trigger conditions with concrete scenarios |

Skill example:

  • Before: "Apply the strangler pattern for migration"
  • After: "Prerequisite: Existing monolith with identifiable module boundaries. When to use: Replacing legacy module while maintaining production traffic."

BP-006: Complex Content → Decomposed Steps

| Detection | Transform |
| --- | --- |
| 3+ objectives in one instruction | Break into numbered steps with checkpoints |
| Sequential dependencies not explicit | Add dependency markers between steps |
| No intermediate verification | Insert checkpoint after each step |

Conditional: Skip decomposition for simple reference tables or single-criteria rules.

Key insight: Goal is evaluable granularity with quality checkpoints, not decomposition for its own sake.

P3: Enhancement (Could Fix)

Incremental improvements for specific contexts.

BP-007: Biased Examples → Diverse Coverage

| Detection | Transform |
| --- | --- |
| All examples share same pattern/structure | Add edge cases and exceptions |
| Only happy-path examples | Add error cases, boundary conditions |
| Examples all same complexity | Include simple, moderate, and complex |

BP-008: No Uncertainty Permission → Explicit Escalation

| Detection | Transform |
| --- | --- |
| Skill always demands definitive answers | Add escalation criteria for ambiguous cases |
| No "when to stop" guidance | Add explicit stopping conditions |

Skill example:

  • Before: "Determine the root cause"
  • After: "Determine the root cause. If root cause is uncertain after 3 investigation cycles, report top 3 hypotheses with confidence levels and evidence for each."
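A minimal BP-008 detector might just look for uncertainty vocabulary anywhere in the skill. The cue list below is a guess at useful markers, not part of the skill, and absence of cues is only a hint that escalation guidance may be missing:

```python
# Hypothetical uncertainty/escalation cues; tune per skill corpus.
ESCALATION_CUES = ("uncertain", "escalate", "stop when", "confidence", "hypotheses")

def has_escalation_guidance(skill_text: str) -> bool:
    """Heuristic BP-008 check: does the skill permit uncertainty anywhere?"""
    lowered = skill_text.lower()
    return any(cue in lowered for cue in ESCALATION_CUES)
```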

9 Skill Editing Principles

Measurable quality criteria for skill content. Each principle includes a pass/fail test.

| # | Principle | Pass Criteria | Fail Example |
| --- | --- | --- | --- |
| 1 | Context efficiency | Every sentence contributes to LLM decision-making. No filler. | "This is an important skill that helps with..." |
| 2 | Deduplication | No concept explained twice at the same abstraction level within the skill or across skills. Mentions at different structural roles (e.g., classification framework vs execution detail) are not duplicates, provided the re-mention adds new constraints or criteria. | Same error handling rules restated at the same abstraction level in multiple related skills |
| 3 | Grouping | Related criteria in single section (minimize read operations) | Scattered error handling rules across 4 sections |
| 4 | Measurability | All criteria use if-then format or concrete thresholds | "Write clean code" without definition of clean |
| 5 | Positive form | Instructions state what to do (BP-001 applied) | "Don't use any" instead of "Use only X" |
| 6 | Consistent notation | Uniform heading levels, list styles, table formats | Mix of `-`, `*`, `1.` in same context |
| 7 | Explicit prerequisites | All assumed knowledge stated | Uses "DI" without defining Dependency Injection |
| 8 | Priority ordering | Most important items first, exceptions last | Edge cases before common patterns |
| 9 | Scope boundaries | Explicit coverage: what this skill addresses vs references to other skills | Overlapping guidance with no cross-reference |
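Several of these principles feed a review grade, and BP-001's example fixes imply one hard rule: grade A only when the P1 count is zero. A toy sketch of that rule; the B/C split on P2 issues is an illustrative assumption, not stated by the skill:

```python
def assign_grade(p1_count: int, p2_count: int) -> str:
    """Toy review grading: A is reserved for zero P1 (Critical) issues.

    The B/C threshold on P2 (High Impact) issues is an assumed split
    for illustration only.
    """
    if p1_count > 0:
        return "C"
    return "A" if p2_count == 0 else "B"
```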
