Learn-skills.dev prompt-engineer
Expert in designing, optimizing, and evaluating prompts for Large Language Models. Specializes in Chain-of-Thought, ReAct, few-shot learning, and production prompt management. Use when crafting prompts, optimizing LLM outputs, or building prompt systems. Triggers include "prompt engineering", "prompt optimization", "chain of thought", "few-shot", "prompt template", "LLM prompting".
install
source · Clone the upstream repo
git clone https://github.com/NeverSight/learn-skills.dev
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/NeverSight/learn-skills.dev "$T" && mkdir -p ~/.claude/skills && cp -r "$T/data/skills-md/404kidwiz/claude-supercode-skills/prompt-engineer" ~/.claude/skills/neversight-learn-skills-dev-prompt-engineer && rm -rf "$T"
manifest:
data/skills-md/404kidwiz/claude-supercode-skills/prompt-engineer/SKILL.md
Prompt Engineer
Purpose
Provides expertise in designing, optimizing, and evaluating prompts for Large Language Models. Specializes in prompting techniques like Chain-of-Thought, ReAct, and few-shot learning, as well as production prompt management and evaluation.
When to Use
- Designing prompts for LLM applications
- Optimizing prompt performance
- Implementing Chain-of-Thought reasoning
- Creating few-shot examples
- Building prompt templates
- Evaluating prompt effectiveness
- Managing prompts in production
- Reducing hallucinations through prompting
Quick Start
Invoke this skill when:
- Crafting prompts for LLM applications
- Optimizing existing prompts
- Implementing advanced prompting techniques
- Building prompt management systems
- Evaluating prompt quality
Do NOT invoke when:
- LLM system architecture → use /llm-architect
- RAG implementation → use /ai-engineer
- NLP model training → use /nlp-engineer
- Agent performance monitoring → use /performance-monitor
Decision Framework
Prompting Technique?
├── Reasoning Tasks
│   ├── Step-by-step → Chain-of-Thought
│   └── Tool use → ReAct
├── Classification/Extraction
│   ├── Clear categories → Zero-shot + examples
│   └── Complex → Few-shot with edge cases
├── Generation
│   └── Structured output → JSON mode + schema
└── Consistency
    └── System prompt + temperature tuning
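Taking the few-shot branch as a concrete example, here is a minimal classification prompt; the ticket categories and examples are illustrative placeholders, including one edge case that straddles two categories:

```python
# Few-shot classification prompt: clear instructions, labeled examples,
# and an edge case showing how ambiguity should be resolved.
FEW_SHOT_PROMPT = """Classify the support ticket into exactly one category:
billing, bug, feature_request, or other.

Ticket: "I was charged twice this month."
Category: billing

Ticket: "The export button crashes the app."
Category: bug

Ticket: "Love the product, but dark mode would be great."
Category: feature_request

Ticket: "I was billed after the app crashed during checkout."
Category: billing

Ticket: "{ticket}"
Category:"""

def build_prompt(ticket: str) -> str:
    return FEW_SHOT_PROMPT.format(ticket=ticket)
```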
Core Workflows
1. Prompt Design
- Define task clearly
- Choose prompting technique
- Write system prompt with context
- Add examples if using few-shot
- Specify output format
- Test with diverse inputs (see the sketch below)
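A minimal sketch of this workflow, assuming the OpenAI Python SDK (v1+); the model name, task, and output schema are placeholders, and any chat-completion client would do:

```python
from openai import OpenAI  # assumed dependency: the official openai package

client = OpenAI()

# System prompt: task definition, context, and an explicit output contract.
SYSTEM_PROMPT = """You are a product-review analyst.
Task: extract sentiment and key complaints from a customer review.
Output: JSON only, with keys "sentiment" ("positive" | "negative" | "mixed")
and "complaints" (a list of short strings)."""

def analyze_review(review: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # placeholder model name
        temperature=0.2,                          # low temperature for consistency
        response_format={"type": "json_object"},  # JSON mode for parseable output
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": review},
        ],
    )
    return response.choices[0].message.content
```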
2. Chain-of-Thought Implementation
- Identify reasoning requirements
- Add "Let's think step by step" or equivalent
- Provide reasoning examples
- Structure expected reasoning steps
- Test reasoning quality
- Iterate on step guidance (a minimal sketch follows)
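A provider-agnostic sketch of steps 2–4: the template requests explicit reasoning, shows one worked example, and the parser separates the final answer from the reasoning trace (the example problem is invented):

```python
COT_PROMPT = """Answer the question. Think step by step, then give the
final answer on its own line, prefixed with "Answer:".

Example:
Q: A store sells pens at 3 for $2. How much do 12 pens cost?
Reasoning: 12 pens is 12 / 3 = 4 groups of 3. Each group costs $2,
so the total is 4 * $2 = $8.
Answer: $8

Q: {question}
Reasoning:"""

def parse_answer(completion: str) -> str:
    # Keep only the final answer; log or discard the reasoning so
    # downstream code receives a clean value.
    for line in completion.splitlines():
        if line.startswith("Answer:"):
            return line.removeprefix("Answer:").strip()
    raise ValueError('no "Answer:" line found in completion')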
3. Prompt Optimization
- Establish baseline metrics
- Identify failure patterns
- Adjust instructions for clarity
- Add/modify examples
- Tune output constraints
- Measure improvement (illustrated below)
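One way to establish a baseline and surface failure patterns, reusing the few-shot template above; `call_model` stands in for whatever client code sends the prompt, and the test cases are illustrative:

```python
from typing import Callable

# Hypothetical labeled test set; in practice, load it from a versioned file.
TEST_CASES = [
    {"input": "I was charged twice this month.", "expected": "billing"},
    {"input": "The export button crashes the app.", "expected": "bug"},
    {"input": "Please add dark mode.", "expected": "feature_request"},
]

def evaluate(prompt_template: str, call_model: Callable[[str], str]) -> float:
    """Return accuracy over the test set; print failures for inspection."""
    failures = []
    for case in TEST_CASES:
        output = call_model(prompt_template.format(ticket=case["input"])).strip()
        if output != case["expected"]:
            failures.append((case["input"], output))
    accuracy = 1 - len(failures) / len(TEST_CASES)
    print(f"accuracy={accuracy:.0%}, failures={failures}")
    return accuracy
```

Rerun the same test set after each prompt change so improvement is measured rather than guessed.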
Best Practices
- Be specific and explicit in instructions
- Use structured output formats (JSON, XML)
- Include examples for complex tasks
- Test with edge cases and adversarial inputs
- Version control prompts
- Measure and track prompt performance
Anti-Patterns
| Anti-Pattern | Problem | Correct Approach |
|---|---|---|
| Vague instructions | Inconsistent output | Be specific and explicit |
| No examples | Poor performance on complex tasks | Add few-shot examples |
| Unstructured output | Hard to parse | Specify format clearly |
| No testing | Unknown failure modes | Test diverse inputs |
| Prompt in code | Hard to iterate | Separate prompt management (see below) |
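For that last row, separating prompts from application code can be as simple as a versioned template directory; the layout and helpers below are one hypothetical arrangement, not a prescribed one:

```python
import json
from pathlib import Path

# Assumed layout: prompts/<name>/v01.txt, v02.txt, ... tracked in git,
# plus an evals.jsonl per prompt tying each version to its measurements.
PROMPT_DIR = Path("prompts")

def load_prompt(name: str, version: str = "latest") -> str:
    """Load a prompt template by name; zero-padded versions sort correctly."""
    if version == "latest":
        version = sorted(p.stem for p in (PROMPT_DIR / name).glob("v*.txt"))[-1]
    return (PROMPT_DIR / name / f"{version}.txt").read_text()

def record_eval(name: str, version: str, accuracy: float) -> None:
    """Append an eval result so every prompt change stays tied to a metric."""
    with (PROMPT_DIR / name / "evals.jsonl").open("a") as f:
        f.write(json.dumps({"version": version, "accuracy": accuracy}) + "\n")
```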