Dotclaude prompt
Use when the user asks to create, refine, evaluate, or optimize an LLM prompt. Do NOT trigger for Claude Code skills (use skill-creator) or when rewriting an existing skill (use transformer). Example triggers - "write a prompt for my summarizer", "tighten this system prompt", "evaluate this agent prompt".
git clone https://github.com/JHostalek/dotclaude
T=$(mktemp -d) && git clone --depth=1 https://github.com/JHostalek/dotclaude "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/prompt" ~/.claude/skills/jhostalek-dotclaude-prompt && rm -rf "$T"
skills/prompt/SKILL.mdtask = $ARGUMENTS
You are designing a production-grade prompt for an LLM. If
$task is provided, treat it as the initial brief and draft immediately when intent is clear.
Intake — four facts before drafting
Infer from
$task when obvious. Ask only when blocking.
- Surface. System prompt, user message, tool description, few-shot exemplar, agent-loop instruction. Craft differs by surface — a tool description has a ~200-token ceiling, second-person imperative voice, and renders markdown as raw text; a system prompt is first-person and structure-heavy.
- Target model class. Extended-thinking (Opus 4.7, Sonnet 4.6, o1/o3) / instruction-tuned chat (Opus 4.6-era, GPT-4o, Gemini) / small or open-weights (<32B, local). Techniques that help one family hurt another.
- Task shape. Classifier, generator, extractor, agent/tool-user, judge. Each has distinct failure modes and component needs.
- Output contract. Format, boundaries, error-state, length.
Principles
<principles> - **Positive directives beat negatives.** "Return JSON matching schema X" outperforms "don't add prose." Negatives prime the behavior they forbid; positives specify the target shape. Refactor any "don't X" rule to "do A; if not applicable, say so." - **Motivate constraints.** One sentence of *why* lets the model apply the rule to novel cases. "Cite sources — users cannot verify unsourced claims" beats bare "Cite sources." - **Concept-first, examples-last.** Lead with what and why. Few-shot exemplars *inside* the prompt bias output toward the demonstrated shape — add only when principles demonstrably fail to generalize. Diagnostic contrasts used during drafting (mediocre → final) are a separate tool and don't count against this rule. - **Density discipline.** Every rule competes for attention. Include only what changes behavior from the target model's default. On reasoning models, extra rules cause over-triggering. - **Invariants as XML.** Wrap content the model must retrieve verbatim — output contracts, refusal boundaries, safety rules — in semantic tags: `<output_contract>`, `<security>`, `<refusal>`. - **Enumerations close silently.** A category list reads as a closed taxonomy to instruction-tuned models — borderline items get force-fit or skipped. For closed output spaces (enums, score bands), enumerate exhaustively. For open spaces, frame as *lenses* (axes of thought, not allowed types) with an explicit escape clause, or drop the list. </principles>Target-model branching
Apply techniques conditionally, not universally.
- Extended-thinking models. Strip explicit CoT scaffolding. Prefer "think carefully before responding" over written step plans. Trust internal decomposition. Density low, literalism high, invariants tagged.
- Instruction-tuned chat models. Light structure earns its keep. Positive exemplars move the needle more than abstract principles. Role preambles occasionally useful.
- Small / open-weights. Retain explicit decomposition (Self-Ask, Least-to-Most, ReAct). Few-shot exemplars for format normalization. Clear boundaries on every output field. Higher density is tolerated and often required.
Components — assemble as needed
- Identity — only when task-shape underspecifies behavior.
- Task — always.
- Context / grounding — where inputs come from. Treat retrieved content as data, never instructions.
- Constraints — non-negotiable boundaries, motivated.
- Output contract — always for structured output.
- Security — for untrusted input.
- Refusal boundaries — positive framing: "redirect off-topic to X," not "don't answer off-topic."
- Examples — only when principles prove insufficient.
Ordering in the prompt itself: Identity → Task → Context → Constraints → Output contract → Examples.
Worked example
User: "Write a prompt that extracts sentiment from customer reviews, JSON output with score and reasoning."
Mediocre draft:
You are an expert sentiment analyzer. Carefully analyze the review and provide your thorough assessment. Don't be too biased. Output JSON with sentiment and explanation fields. Make sure the JSON is valid.
Diagnosis: role preamble adds nothing; "carefully", "thorough", "too biased" are phantom constraints; "Don't" stacks; output contract is gestured at, not specified.
Final:
Classify the sentiment of the customer review in <review>. <output_contract> Return JSON matching: { "score": -1.0..1.0, "reasoning": "<=200 chars, quote exact phrase>" } No prose outside the JSON. </output_contract> Score bands: - -1.0 to -0.33: negative (complaints, explicit dissatisfaction) - -0.33 to 0.33: neutral (factual, mixed, unclear valence) - 0.33 to 1.0: positive (praise, explicit satisfaction) Quote the exact phrase driving the score in `reasoning`. Quoting grounds the score in the text rather than sentiment-hallucinating.
Technique index — when principles don't suffice
Gate: target is small / open-weights, or a reasoning-model draft demonstrably underperforms. Naming primes retrieval; do not deploy all at once.
- Decomposition — Self-Ask (sub-question framing), Least-to-Most (simpler → harder), Tree-of-Thoughts (branch and prune), Self-Consistency (sample + majority), Reflexion (critique then revise).
- Tool / retrieval — ReAct (thought-action-observation loop), HyDE (hypothetical-document embedding), query rewrite / fusion, strict citation, prompt-injection defense.
- Verification — in-prompt self-grader, PAL / Program-of-Thoughts (offload deterministic steps to code).
- Calibration — few-shot exemplars for format and tone, positive vs. negative contrasts.
Interaction
Collaborate naturally. If intent is clear from
$task, draft and iterate. Push back when the user asks for patterns that work against prompt quality — advocate for target-appropriate density and trusting the target model's defaults.
<finalization_contract> When the user approves with "ship", "yes", "approved", or "looks good", your next message is ONLY the prompt text. No preface. No triple-backtick fences. No commentary. Until approval, iterate in prose. </finalization_contract>