Awesome-omni-skills prompt-engineer
prompt-engineer workflow skill. Use this skill when the user needs raw prompts transformed into optimized prompts using established frameworks (RTF, RISEN, Chain of Thought, RODES, Chain of Density, RACE, RISE, STAR, SOAP, CLEAR, GROW) and the operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.
Clone the full repository:
git clone https://github.com/diegosouzapw/awesome-omni-skills
Or copy only this skill into your local Claude skills directory:
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/prompt-engineer" ~/.claude/skills/diegosouzapw-awesome-omni-skills-prompt-engineer && rm -rf "$T"
Skill entry point: skills/prompt-engineer/SKILL.md
Overview
This public intake copy packages
plugins/antigravity-awesome-skills-claude/skills/prompt-engineer from https://github.com/sickn33/antigravity-awesome-skills into the native Omni Skills editorial shape without hiding its origin.
Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.
This intake keeps the copied upstream files intact and uses
metadata.json plus ORIGIN.md as the provenance anchor for review.
Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Purpose, Notes, Limitations.
When to Use This Skill
Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.
- User provides a vague or generic prompt (e.g., "help me code Python")
- User has a complex idea but struggles to articulate it clearly
- User's prompt lacks structure, context, or specific requirements
- Task requires step-by-step reasoning (debugging, analysis, design)
- User needs a prompt for a specific AI task but doesn't know prompting frameworks
- User wants to improve an existing prompt's effectiveness
Operating Table
| Situation | Start here | Why it matters |
|---|---|---|
| First-time use | | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | | Starts with the smallest copied file that materially changes execution |
| Supporting context | | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | | Helps the operator switch to a stronger native skill when the task drifts |
Workflow
This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.
The operative steps are preserved verbatim under the Imported Workflow Notes below; start there rather than reconstructing them.
Imported Workflow Notes
Imported: Workflow
Step 1: Analyze Intent
Objective: Understand what the user truly wants to accomplish.
Actions:
- Read the raw prompt provided by the user
- Detect task characteristics:
- Type: coding, writing, analysis, design, learning, planning, decision-making, creative, etc.
- Complexity: simple (one-step), moderate (multi-step), complex (requires reasoning/design)
- Clarity: clear intention vs. ambiguous/vague
- Domain: technical, business, creative, academic, personal, etc.
- Identify implicit requirements:
- Does user need examples?
- Is output format specified?
- Are there constraints (time, resources, scope)?
- Is this exploratory or execution-focused?
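To make the Step 1 output concrete, here is a minimal sketch of a data structure an implementation might use to hold the analysis result. The field names mirror the characteristics listed above but are illustrative assumptions; the upstream skill does not prescribe a concrete structure.

```python
from dataclasses import dataclass, field

@dataclass
class TaskAnalysis:
    """Holds the Step 1 characteristics detected from a raw prompt.

    Field names are illustrative; the upstream skill describes these
    characteristics in prose only.
    """
    task_type: str                  # e.g. "coding", "writing", "analysis"
    complexity: str                 # "simple", "moderate", or "complex"
    clarity: str                    # "clear" or "ambiguous"
    domain: str                     # e.g. "technical", "business", "creative"
    needs_examples: bool = False    # does the user need examples?
    format_specified: bool = False  # is an output format given?
    constraints: list[str] = field(default_factory=list)  # time, resources, scope
    exploratory: bool = False       # exploratory vs. execution-focused
```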
Detection Patterns:
- Simple tasks: Short prompts (<50 chars), single verb, no context
- Complex tasks: Long prompts (>200 chars), multiple requirements, conditional logic
- Ambiguous tasks: Generic verbs ("help", "improve"), missing object/context
- Structured tasks: Mentions steps, phases, deliverables, stakeholders
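The detection patterns above translate directly into cheap heuristics. The sketch below is one possible reading, using the character thresholds and keywords from the bullets; the GENERIC_VERBS set extends the two verbs the upstream text names ("help", "improve") with assumed neighbors, and none of this is the upstream implementation.

```python
# Assumed verb list; upstream names only "help" and "improve".
GENERIC_VERBS = {"help", "improve", "fix", "optimize"}

def classify_prompt(prompt: str) -> str:
    """Bucket a raw prompt using the detection patterns above."""
    text = prompt.strip()
    words = text.lower().split()
    # Structured tasks: mentions steps, phases, deliverables, stakeholders
    if any(k in text.lower() for k in ("step", "phase", "deliverable", "stakeholder")):
        return "structured"
    # Ambiguous tasks: generic leading verb with little surrounding context
    if words and words[0] in GENERIC_VERBS and len(words) < 6:
        return "ambiguous"
    # Simple tasks: short prompts (<50 chars)
    if len(text) < 50:
        return "simple"
    # Complex tasks: long prompts (>200 chars), multiple requirements
    if len(text) > 200:
        return "complex"
    return "moderate"

print(classify_prompt("help me code Python"))  # -> "ambiguous"
```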
Step 2: Clarify (Conditional)
Objective: Resolve ambiguity only when critical information is missing; skip this step when the prompt is self-sufficient, and ask at most 3 clarifying questions.
Step 3: Select Framework(s)
Objective: Map task characteristics to optimal prompting framework(s).
Framework Mapping Logic:
| Task Type | Recommended Framework(s) | Rationale |
|---|---|---|
| Role-based tasks (act as expert, consultant) | RTF (Role-Task-Format) | Clear role definition + task + output format |
| Step-by-step reasoning (debugging, proof, logic) | Chain of Thought | Encourages explicit reasoning steps |
| Structured projects (multi-phase, deliverables) | RISEN (Role, Instructions, Steps, End goal, Narrowing) | Comprehensive structure for complex work |
| Complex design/analysis (systems, architecture) | RODES (Role, Objective, Details, Examples, Sense check) | Balances detail with validation |
| Summarization (compress, synthesize) | Chain of Density | Iterative refinement to essential info |
| Communication (reports, presentations, storytelling) | RACE (Role, Audience, Context, Expectation) | Audience-aware messaging |
| Investigation/analysis (research, diagnosis) | RISE (Research, Investigate, Synthesize, Evaluate) | Systematic analytical approach |
| Contextual situations (problem-solving with background) | STAR (Situation, Task, Action, Result) | Context-rich problem framing |
| Documentation (medical, technical, records) | SOAP (Subjective, Objective, Assessment, Plan) | Structured information capture |
| Goal-setting (OKRs, objectives, targets) | CLEAR (Collaborative, Limited, Emotional, Appreciable, Refinable) | Goal clarity and actionability |
| Coaching/development (mentoring, growth) | GROW (Goal, Reality, Options, Will) | Developmental conversation structure |
Blending Strategy:
- Combine 2-3 frameworks when task spans multiple types
- Example: Complex technical project → RODES + Chain of Thought (structure + reasoning)
- Example: Leadership decision → CLEAR + GROW (goal clarity + development)
Selection Criteria:
- Primary framework = best match to core task type
- Secondary framework(s) = address additional complexity dimensions
- Avoid over-engineering: simple tasks get simple frameworks
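For review purposes, the mapping table and selection criteria above can be read as a lookup plus a blending rule. The sketch below encodes that reading: the dictionary keys paraphrase the table's task types, and the function is a hypothetical illustration of the primary/secondary selection with a cap of three frameworks, not the upstream implementation.

```python
from typing import Sequence

# Primary mapping distilled from the framework table above.
FRAMEWORK_BY_TASK = {
    "role-based": ["RTF"],
    "step-by-step reasoning": ["Chain of Thought"],
    "structured project": ["RISEN"],
    "complex design/analysis": ["RODES"],
    "summarization": ["Chain of Density"],
    "communication": ["RACE"],
    "investigation/analysis": ["RISE"],
    "contextual problem-solving": ["STAR"],
    "documentation": ["SOAP"],
    "goal-setting": ["CLEAR"],
    "coaching/development": ["GROW"],
}

def select_frameworks(primary_task: str, secondary_tasks: Sequence[str] = ()) -> list[str]:
    """Primary framework = best match; secondaries cover extra complexity.

    Caps the blend at three frameworks, mirroring the 2-3 framework
    blending strategy described above.
    """
    chosen = list(FRAMEWORK_BY_TASK.get(primary_task, []))
    for task in secondary_tasks:
        for fw in FRAMEWORK_BY_TASK.get(task, []):
            if fw not in chosen and len(chosen) < 3:
                chosen.append(fw)
    return chosen

# Example from the blending strategy: complex technical project
# -> RODES + Chain of Thought (structure + reasoning).
print(select_frameworks("complex design/analysis", ["step-by-step reasoning"]))
```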
Critical Rule: This selection happens silently - do not explain framework choice to the user.
Blended example (RODES + Chain of Thought + RTF):
Role: You are a senior software architect. [RTF - Role]
Objective: Design a microservices architecture for [system]. [RODES - Objective]
Approach this step-by-step: [Chain of Thought]
- Analyze current monolithic constraints
- Identify service boundaries
- Design inter-service communication
- Plan data consistency strategy
Details: [RODES - Details]
- Expected traffic: [X]
- Data volume: [Y]
- Team size: [Z]
Output Format: [RTF - Format] Provide architecture diagram description, service definitions, and migration roadmap.
Sense Check: [RODES - Sense check] Validate that services are loosely coupled, independently deployable, and aligned with business domains.
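The annotated example above interleaves three frameworks by section. As a purely hypothetical sketch of how such a blend could be assembled, the helper below joins labeled sections in a fixed order; the function name and the section ordering are assumptions, since the upstream skill describes the blend in prose only.

```python
def build_blended_prompt(sections: dict[str, str]) -> str:
    """Assemble a blended prompt from labeled framework sections.

    Hypothetical helper: section labels follow the annotations in the
    blended example above; absent sections are simply skipped.
    """
    order = ["Role", "Objective", "Approach", "Details", "Output Format", "Sense Check"]
    return "\n\n".join(f"{name}: {sections[name]}" for name in order if name in sections)

print(build_blended_prompt({
    "Role": "You are a senior software architect.",
    "Objective": "Design a microservices architecture for <system>.",
}))
```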
**4.5. Language Adaptation**
- If original prompt is in Portuguese, generate prompt in Portuguese
- If original prompt is in English, generate prompt in English
- If mixed, default to English (more universal for AI models)
**4.6. Quality Checks**
Before finalizing, verify:
- [ ] Prompt is self-contained (no external context needed)
- [ ] Task is specific and measurable
- [ ] Output format is clear
- [ ] No ambiguous language
- [ ] Appropriate level of detail for task complexity
Imported: Purpose
This skill transforms raw, unstructured user prompts into highly optimized prompts using established prompting frameworks. It analyzes user intent, identifies task complexity, and intelligently selects the most appropriate framework(s) to maximize Claude/ChatGPT output quality.
The skill operates in "magic mode" - it works silently behind the scenes, only interacting with users when clarification is critically needed. Users receive polished, ready-to-use prompts without technical explanations or framework jargon.
This is a **universal skill** that works in any terminal context, not limited to Obsidian vaults or specific project structures.
Examples
Example 1: Ask for the upstream workflow directly
```text
Use @prompt-engineer to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.
```
Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.
Example 2: Ask for a provenance-grounded review
Review @prompt-engineer against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.
Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.
Example 3: Narrow the copied support files before execution
Use @prompt-engineer for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.
Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.
Example 4: Build a reviewer packet
Review @prompt-engineer using the copied upstream files plus provenance, then summarize any gaps before merge.
Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.
Best Practices
Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution.
The binding operating rules are preserved verbatim under the Imported Operating Notes below; the NEVER and ALWAYS lists there apply as written.
Imported Operating Notes
Imported: Critical Rules
NEVER:
- ❌ Assume information that wasn't provided - ALWAYS ask if critical details are missing
- ❌ Explain which framework was selected or why (magic mode - keep it invisible)
- ❌ Generate generic, one-size-fits-all prompts - always customize to context
- ❌ Use technical jargon in the final prompt (unless user's domain is technical)
- ❌ Ask more than 3 clarifying questions (avoid user fatigue)
- ❌ Include meta-commentary in the output ("This prompt uses...", "Note that...")
- ❌ Present output without code block formatting
- ❌ Mix languages inconsistently (if user writes in PT, respond in PT)
ALWAYS:
- ✅ Analyze intent before generating (Step 1 is mandatory)
- ✅ Ask clarifying questions if critical information is ambiguous (Step 2 conditional)
- ✅ Select framework(s) based on task type and complexity (Step 3 mapping)
- ✅ Blend multiple frameworks when it improves prompt quality
- ✅ Adapt prompt length to original input complexity (simple → short, complex → detailed)
- ✅ Include output format specification in generated prompts
- ✅ Present final prompt in clean Markdown code block
- ✅ Make prompts self-contained (no dependency on external context)
- ✅ Use examples in complex prompts to illustrate expected output
- ✅ Validate prompt completeness before presenting (quality checks in Step 4.6)
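The final ALWAYS item points at the Step 4.6 quality checks. Below is a minimal sketch of how those checks could be approximated in code; the keyword tests are loose assumptions standing in for human review, since the upstream skill frames the checklist as a manual pre-flight step.

```python
def quality_check(prompt: str) -> dict[str, bool]:
    """Approximate the Step 4.6 checklist with cheap textual tests.

    Illustrative heuristics only; they do not replace reviewing the
    checklist by hand.
    """
    lowered = prompt.lower()
    return {
        # Self-contained: no references to context outside the prompt
        "self_contained": "see above" not in lowered and "as discussed" not in lowered,
        # Output format is clear: some format specification is present
        "output_format_clear": "output format" in lowered or "format:" in lowered,
        # No ambiguous language: crude filler-word screen
        "no_ambiguous_language": not any(w in lowered.split() for w in ("something", "stuff", "things")),
        # Task is specific: a task or objective is stated
        "has_task": "task" in lowered or "objective" in lowered,
    }

# A prompt passes when every check is True.
assert all(quality_check("Task: summarize X. Output format: bullet list.").values())
```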
Example 2: Complex Clear Prompt → RODES + Chain of Thought (No Clarification)
User Input:
copilot> create prompt for: I need to build a product recommendation system considering purchase history, user preferences, and market trends
Skill Internal Process:
- Analyze: Clear intent, complex task (ML system design), technical domain
- Clarify: Skip (sufficient information)
- Select: RODES (complex design) + Chain of Thought (reasoning needed)
Skill Output:
```
Role: You are a senior machine learning engineer specializing in recommendation systems.

Objective: Design a product recommendation system that combines user purchase history, preferences, and market trends.

Details:
- Input data sources: user transaction logs, preference surveys, market trend APIs
- Algorithm considerations: collaborative filtering, content-based filtering, hybrid approaches
- Performance metrics: precision, recall, diversity, novelty
- Scalability: system must handle 100k+ users
- Real-time constraints: recommendations updated within 500ms

Approach step-by-step:
1. Analyze the three data sources and their relationships
2. Propose 2-3 algorithm approaches with trade-offs analysis
3. Design data pipeline architecture (ingestion, processing, serving)
4. Define evaluation metrics and A/B testing strategy
5. Outline implementation roadmap with phases

Examples:
- Show pseudocode for core recommendation logic
- Illustrate data flow diagram
- Provide sample API response structure

Sense Check: Validate that the proposed solution balances personalization with discovery, scales to production load, and incorporates real-time market trends effectively.

Output format: Technical design document with architecture diagrams, algorithm justification, and implementation roadmap.
```
Example 4: Code Debugging → Chain of Thought
User Input:
copilot> optimize prompt: my Python code isn't working, needs debugging
Skill Internal Process:
- Analyze: Debugging task, needs step-by-step reasoning
- Clarify: Skip (user will provide code context later)
- Select: Chain of Thought
Skill Output:
```
You are an expert Python debugger.

Task: Debug the Python code provided by the user by analyzing it step-by-step.

Debugging approach:
1. Read and understand the code's intended purpose
2. Identify the error or unexpected behavior
3. Trace variable states at each execution step
4. Locate the root cause of the issue
5. Propose fix with explanation
6. Suggest preventive measures to avoid similar bugs

For each step, show your reasoning:
- What you're checking
- What you found
- Why it matters

Output format:
- **Issue identified:** [Describe the bug]
- **Root cause:** [Why it's happening]
- **Fix:** [Corrected code with comments]
- **Prevention:** [Best practices to avoid this in the future]

Include a working example to verify the fix.
```
Troubleshooting
Problem: The operator skipped the imported context and answered too generically
Symptoms: The result ignores the upstream workflow in
plugins/antigravity-awesome-skills-claude/skills/prompt-engineer, fails to mention provenance, or does not use any copied source files at all.
Solution: Re-open metadata.json, ORIGIN.md, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.
Problem: The imported workflow feels incomplete during review
Symptoms: Reviewers can see the generated
SKILL.md, but they cannot quickly tell which references, examples, or scripts matter for the current task.
Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.
Problem: The task drifted into a different specialization
Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.
Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.
Related Skills
Each skill below is a better destination once this imported skill has established context and the work drifts into its specialization:
- @prompt-engineering
- @prompt-engineering-patterns
- @prompt-library
- @protect-mcp-governance
Additional Resources
Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.
| Resource family | What it gives the reviewer | Example path |
|---|---|---|
| References | copied reference notes, guides, or background material from upstream | |
| Examples | worked examples or reusable prompts copied from upstream | |
| Scripts | upstream helper scripts that change execution or validation | |
| Agents | routing or delegation notes that are genuinely part of the imported package | |
| Assets | supporting assets or schemas copied from the source package | |
Imported Reference Notes
Imported: Notes
This skill is platform-agnostic and works in any terminal context where GitHub Copilot CLI is available. It does not depend on:
- Obsidian vault structure
- Specific project configurations
- External files or templates
The skill is entirely self-contained, operating purely on user input and framework knowledge.
Imported: Limitations
- Use this skill only when the task clearly matches the scope described above.
- Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
- Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.