# Awesome-omni-skill prompt-executor

Execute prompts from the `./prompts/` directory with various AI models. Use when the user asks to run a prompt, execute a task, delegate work to an AI model, run prompts in worktrees/tmux, or run prompts with verification loops.

## Install

**Source** · Clone the upstream repo:

```bash
git clone https://github.com/diegosouzapw/awesome-omni-skill
```

**Claude Code** · Install into `~/.claude/skills/`:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data-ai/prompt-executor" ~/.claude/skills/diegosouzapw-awesome-omni-skill-prompt-executor && rm -rf "$T"
```

Manifest: `skills/data-ai/prompt-executor/SKILL.md`

## Source content

# Prompt Executor

## Auto-Approval Setup

If the user has to manually confirm the executor bash command, suggest they add this rule to `~/.claude/settings.json` under `permissions.allow`:

```json
"Bash(PLUGIN_ROOT=$(jq -r '.plugins.\"daplug@cruzanstx\"[0].installPath' ~/.claude/plugins/installed_plugins.json):*)"
```

Quick command to add it:

```bash
# Add auto-approval rule for prompt executor
jq '.permissions.allow += ["Bash(PLUGIN_ROOT=$(jq -r '"'"'.plugins.\"daplug@cruzanstx\"[0].installPath'"'"' ~/.claude/plugins/installed_plugins.json):*)"]' ~/.claude/settings.json > /tmp/settings.json && mv /tmp/settings.json ~/.claude/settings.json
```
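Before editing the real file, the append can be rehearsed on a scratch copy; a minimal sketch (passing the rule via `--arg` avoids the nested-quote escaping shown above, and the scratch path is throwaway):

```shell
# Rehearse the permissions append on a scratch settings file
SCRATCH=$(mktemp)
printf '%s\n' '{"permissions":{"allow":[]}}' > "$SCRATCH"

# Quoted heredoc: no shell expansion, so the rule needs no escaping
RULE=$(cat <<'EOF'
Bash(PLUGIN_ROOT=$(jq -r '.plugins."daplug@cruzanstx"[0].installPath' ~/.claude/plugins/installed_plugins.json):*)
EOF
)

# --arg injects the rule as a JSON string, sidestepping quote mangling
jq --arg rule "$RULE" '.permissions.allow += [$rule]' "$SCRATCH" > "$SCRATCH.new"
cat "$SCRATCH.new"
```

If the output looks right, the same `jq` filter can be pointed at `~/.claude/settings.json`.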

Execute prompts from `./prompts/` (including subfolders) using various AI models (Claude, Codex, Gemini, ZAI, etc.).

## When to Use This Skill

- User says "run prompt 123" or "execute prompt 123"
- User says "run that prompt with codex/gemini/zai"
- User wants to "run a prompt in a worktree"
- User wants to "run prompts in parallel"
- User asks to "delegate this to codex/gemini"
- User wants to "run with verification loop" or "keep retrying until complete"
- User asks to "check loop status" for a running prompt

## Executor Script

**IMPORTANT**: Get the executor path from Claude's installed plugins manifest:

```bash
PLUGIN_ROOT=$(jq -r '.plugins."daplug@cruzanstx"[0].installPath' ~/.claude/plugins/installed_plugins.json)
EXECUTOR="$PLUGIN_ROOT/skills/prompt-executor/scripts/executor.py"
python3 "$EXECUTOR" [prompts...] [options]
```

Options:

- `--model, -m`: claude, cc-sonnet, cc-opus, codex, codex-spark, codex-high, codex-xhigh, gpt52, gpt52-high, gpt52-xhigh, gemini, gemini-high, gemini-xhigh, gemini25pro, gemini25flash, gemini25lite, gemini3flash, gemini3pro, gemini31pro, zai, glm5, opencode, local, qwen, devstral, glm-local, qwen-small
- `--cli`: Override CLI wrapper (`codex`, `opencode`, or `claude`; aliases: `claudecode`, `cc`). Unsupported explicit combinations fail with a clear error (no silent fallback).
- `--variant`: Reasoning variant override (`none|low|medium|high|xhigh`). Explicit `--variant` overrides alias defaults (`codex-high`, `gpt52-high`, etc.).
- `--cwd, -c`: Working directory for execution
- `--run, -r`: Actually run the CLI (default: just return info)
- `--info-only, -i`: Only return prompt info, no CLI details
- `--worktree, -w`: Create isolated git worktree for execution
- `--sandbox`: Enable sandboxing (Linux default backend: bubblewrap)
- `--sandbox-type`: Sandbox backend override (`bubblewrap`)
- `--no-sandbox`: Explicitly disable sandboxing
- `--sandbox-profile`: Isolation profile (`strict|balanced|dev`, default `balanced`)
- `--sandbox-workspace`: Override sandbox workspace path (default: execution cwd)
- `--sandbox-net`: Network override (`on|off`; default comes from profile)
- `--base-branch, -b`: Base branch for worktree (default: main)
- `--on-conflict`: How to handle an existing worktree (`error|remove|reuse|increment`)
- `--loop, -l`: Enable iterative verification loop until completion
- `--max-iterations`: Max loop iterations before giving up (default: 3)
- `--completion-marker`: Text pattern signaling completion (default: `VERIFICATION_COMPLETE`)
- `--loop-status`: Check status of an existing verification loop

Output: JSON with prompt content, CLI command, log path, worktree info, and loop state if enabled
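Consumers of that JSON can rebuild the CLI invocation with `jq`; a minimal sketch against a hand-made sample payload (the `cli_command` and `log` field names are taken from the tmux example in this document, and `@sh` re-quotes the argv safely):

```shell
# Sample of the executor's JSON output (shape assumed from the tmux example)
OUT='{"cli_command": ["codex", "exec", "--full-auto"], "log": "/tmp/codex-123.log"}'

# Rebuild the argv as a safely shell-quoted string
CMD=$(printf '%s' "$OUT" | jq -r '.cli_command | @sh')
echo "$CMD"

# Pull out the log path for later tailing
LOG=$(printf '%s' "$OUT" | jq -r '.log')
```

The quoted `$CMD` string can then be handed to `eval` or to `tmux send-keys` without re-escaping.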

## Execution Flows

### Direct Execution (default)

```bash
# Get executor path from installed plugins manifest
PLUGIN_ROOT=$(jq -r '.plugins."daplug@cruzanstx"[0].installPath' ~/.claude/plugins/installed_plugins.json)
EXECUTOR="$PLUGIN_ROOT/skills/prompt-executor/scripts/executor.py"

# Get prompt info
python3 "$EXECUTOR" 123 --model codex

# Force OpenCode path with reasoning variant
python3 "$EXECUTOR" 123 --model codex --cli opencode --variant high

# Folder-qualified prompt (resolves prompts/providers/011-*.md)
python3 "$EXECUTOR" providers/011 --model codex

# Run in current directory
python3 "$EXECUTOR" 123 --model codex --run

# Run in bubblewrap sandbox (Linux)
python3 "$EXECUTOR" 123 --model codex --run --sandbox

# Strict profile (no network by default)
python3 "$EXECUTOR" 123 --model codex --run --sandbox --sandbox-profile strict

# Explicit opt-out
python3 "$EXECUTOR" 123 --model codex --run --no-sandbox
```

### With Worktree (built-in)

A single command creates the worktree, copies TASK.md, and optionally runs:

```bash
# Create worktree and get info
python3 "$EXECUTOR" 123 --worktree --model codex

# Create worktree and run immediately
python3 "$EXECUTOR" 123 --worktree --model codex --run

# Use different base branch
python3 "$EXECUTOR" 123 --worktree --base-branch develop --model codex
```

The worktree directory is read from `worktree_dir` in `<daplug_config>` within CLAUDE.md (via config-reader), or defaults to `../worktrees/`.

### With tmux (use tmux-manager skill)

1. Get the CLI command from the executor:

   ```bash
   python3 "$EXECUTOR" 123 --model codex
   # Returns: {"cli_command": ["codex", "exec", "--full-auto"], "content": "...", "log": "..."}
   ```

2. Create a tmux session using tmux-manager patterns:

   ```bash
   SESSION_NAME="prompt-123-$(date +%Y%m%d-%H%M%S)"
   tmux new-session -d -s "$SESSION_NAME" -c "$WORKTREE_PATH"
   ```

3. Send the command to the session:

   ```bash
   tmux send-keys -t "$SESSION_NAME" "codex exec --full-auto '...' 2>&1 | tee $LOG_FILE" C-m
   ```

### With Verification Loop

Run prompts with automatic retries until the task is verified complete:

```bash
# Run with verification loop (background, default 3 iterations)
python3 "$EXECUTOR" 123 --model codex --run --loop

# With custom max iterations
python3 "$EXECUTOR" 123 --model codex --run --loop --max-iterations 5

# With custom completion marker
python3 "$EXECUTOR" 123 --model codex --run --loop --completion-marker "TASK_DONE"

# Worktree + loop combo
python3 "$EXECUTOR" 123 --model codex --worktree --run --loop
```

Output includes:

```json
{
  "execution": {
    "status": "loop_running",
    "pid": 12345,
    "loop_log": "~/.claude/cli-logs/codex-123-loop-20251229-120000.log",
    "state_file": "~/.claude/loop-state/123.json",
    "max_iterations": 3,
    "completion_marker": "VERIFICATION_COMPLETE"
  }
}
```
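Between `--loop-status` calls, the `state_file` can also be polled directly; a sketch against a hand-made state file (the `iteration` and `max_iterations` field names are assumptions here, not a documented schema):

```shell
# Fake a loop state file; the real one lives under ~/.claude/loop-state/
STATE=$(mktemp)
printf '%s\n' '{"status": "running", "iteration": 2, "max_iterations": 3}' > "$STATE"

# Flag a loop that has burned through its iteration budget
ITER=$(jq -r '.iteration' "$STATE")
MAX=$(jq -r '.max_iterations' "$STATE")
if [ "$ITER" -ge "$MAX" ]; then
  echo "loop exhausted"
else
  echo "iteration $ITER of $MAX"
fi
```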

Log paths follow `cli_logs_dir` from `<daplug_config>` if configured (default `~/.claude/cli-logs/`).

**Completion markers (required):**

- To end the loop, the model must output a final-line verification tag: `<verification>VERIFICATION_COMPLETE</verification>`.
- To request another iteration, output: `<verification>NEEDS_RETRY: [reason]</verification>`.
- The executor ignores any markers that appear inside echoed prompt instructions (some CLIs print the full prompt into logs).
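The final-line check can be reproduced by hand when inspecting a log; a sketch (the executor's actual matching logic may differ, and the log content below is invented):

```shell
# A log where the prompt itself echoes a marker, then the model asks for a retry
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Prompt: finish the task, then print <verification>VERIFICATION_COMPLETE</verification>
...working...
<verification>NEEDS_RETRY: tests still failing</verification>
EOF

# Only the last line counts, so the marker echoed inside the prompt is ignored
MARKER=$(tail -n 1 "$LOG" | grep -o '<verification>[^<]*</verification>' || true)
echo "$MARKER"
```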

### Check Loop Status

```bash
# Check specific prompt's loop
python3 "$EXECUTOR" 123 --loop-status

# List all active loops
python3 "$EXECUTOR" --loop-status
```

## Model Reference

| Model | CLI | Description |
|-------|-----|-------------|
| claude | (Task subagent) | Claude Sonnet via subagent |
| codex | `codex exec --full-auto` | OpenAI Codex (gpt-5.3-codex) |
| codex-high | `codex exec --full-auto -c model_reasoning_effort="high"` | Codex alias with default `--variant high` |
| codex-xhigh | `codex exec --full-auto -c model_reasoning_effort="xhigh"` | Codex alias with default `--variant xhigh` |
| gpt52 | `codex exec --full-auto -m gpt-5.2` | GPT-5.2 for planning/research |
| gpt52-high | `codex exec --full-auto -m gpt-5.2 -c model_reasoning_effort="high"` | GPT-5.2 alias with default `--variant high` |
| gpt52-xhigh | `codex exec --full-auto -m gpt-5.2 -c model_reasoning_effort="xhigh"` | GPT-5.2 alias with default `--variant xhigh` |
| gemini | `gemini -y -m gemini-3-flash-preview` | Google Gemini 3 Flash Preview (default) |
| gemini-high | `gemini -y -m gemini-2.5-pro` | Google Gemini 2.5 Pro (stable) |
| gemini-xhigh | `gemini -y -m gemini-3-pro-preview` | Google Gemini 3 Pro Preview |
| gemini25pro | `gemini -y -m gemini-2.5-pro` | Google Gemini 2.5 Pro (explicit alias) |
| gemini25flash | `gemini -y -m gemini-2.5-flash` | Google Gemini 2.5 Flash |
| gemini25lite | `gemini -y -m gemini-2.5-flash-lite` | Google Gemini 2.5 Flash-Lite |
| gemini3flash | `gemini -y -m gemini-3-flash-preview` | Google Gemini 3 Flash Preview (explicit alias) |
| gemini3pro | `gemini -y -m gemini-3-pro-preview` | Google Gemini 3 Pro Preview (explicit alias) |
| gemini31pro | `gemini -y -m gemini-3.1-pro-preview` | Gemini 3.1 Pro Preview (if your account has access) |
| zai | `codex exec --profile zai` | Z.AI GLM-4.7 (via Codex, may have issues) |
| opencode | `opencode run --format json -m zai/glm-4.7` | Z.AI GLM-4.7 (via OpenCode, recommended; JSON output) |
| local / qwen | `opencode run --format json -m lmstudio/qwen3-coder-next` | Local qwen-coder model (default: opencode) |
| devstral | `opencode run --format json -m lmstudio/devstral-small-2-2512` | Local devstral model (default: opencode) |

OpenCode runs include `--variant <value>` when a variant is set.

OpenCode permissions (headless runs): configure `~/.config/opencode/opencode.json` to avoid interactive permission prompts, e.g.:

```json
{
  "permission": {
    "*": "allow",
    "external_directory": "allow",
    "doom_loop": "allow"
  }
}
```

## Output Display

After executing the prompt, display a clear summary that includes the prompt title from the JSON output:

```markdown
## Execution Started

**Prompt 295**: Add transcript success monitoring with retry logic

| Field | Value |
|-------|-------|
| Model | codex (gpt-5.3-codex) |
| Status | 🟢 Running (PID 12345) |
| Loop | Max 3 iterations |

Worktree: `.worktrees/repo-prompt-295-20251229-181852/`
Branch: `prompt/295-transcript-success-monitoring`
```

**Important**: Always include the `title` field from the executor JSON output. This tells the user what the prompt actually does, not just its number.
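Pulling the title into the summary header is a one-liner; a sketch against a sample payload (the payload values here are the sample ones from the display above, not real executor output):

```shell
# Sample executor output carrying the prompt title
OUT='{"prompt": "295", "title": "Add transcript success monitoring with retry logic"}'

# Extract the title and format the summary heading
TITLE=$(printf '%s' "$OUT" | jq -r '.title')
echo "**Prompt 295**: $TITLE"
```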

## Monitoring Pattern

After launching, spawn a haiku monitor subagent:

````
Task(
  subagent_type: "general-purpose",
  model: "haiku",
  run_in_background: true,
  prompt: """
    Monitor prompt execution:
    - Log file: {log_path}
    - PID: {pid}
    - {If tmux: Session: {session}}
    - {If worktree: Worktree: {worktree_path}}

    IMPORTANT: Use Bash tool for all file operations (not Read tool):

    Every 30 seconds, check status using Bash:
    ```bash
    # Check if process is running
    ps -p {pid} > /dev/null 2>&1 && echo "RUNNING" || echo "STOPPED"

    # Tail last 20 lines of log
    tail -20 "{log_path}"
    ```

    On completion (process ended):
    ```bash
    # Get summary from log
    tail -50 "{log_path}"

    # If worktree, show git status
    cd "{worktree_path}" && git log --oneline -5 && git diff --stat
    ```
    - Summarize what was done
    - Report final status
  """
)
````

## Cleanup

For worktree executions, after completion:

```bash
# Remove TASK.md before merge
rm "$WORKTREE_PATH/TASK.md"

# Merge if requested
git checkout main
git merge --no-ff "$BRANCH_NAME" -m "Merge prompt: $BRANCH_NAME"

# Cleanup
git worktree remove "$WORKTREE_PATH"
git branch -D "$BRANCH_NAME"
git worktree prune
```
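A safety wrapper around the removal step can refuse to delete a worktree that still has uncommitted work; a minimal sketch (this helper is not part of the executor, just an illustration):

```shell
# Usage: safe_remove_worktree <main-repo-path> <worktree-path>
# Refuses removal if the worktree has uncommitted or untracked changes.
safe_remove_worktree() {
  repo="$1"; wt="$2"
  if [ -n "$(git -C "$wt" status --porcelain)" ]; then
    echo "refusing: $wt has uncommitted changes" >&2
    return 1
  fi
  git -C "$repo" worktree remove "$wt"
}
```

Run it instead of the bare `git worktree remove` above when the merge status of the branch is uncertain.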