# claude-skill-registry · examples-auto-run

Run Python examples in auto mode with logging, rerun helpers, and background control.
## Install

Clone the upstream repo:

```sh
git clone https://github.com/majiayu000/claude-skill-registry
```

Claude Code · install into `~/.claude/skills/`:

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/examples-auto-run" ~/.claude/skills/majiayu000-claude-skill-registry-examples-auto-run-788732 && rm -rf "$T"
```

Manifest: `skills/data/examples-auto-run/SKILL.md`
## examples-auto-run

### What it does

- Runs `uv run examples/run_examples.py` with `EXAMPLES_INTERACTIVE_MODE=auto` (auto-input/auto-approve).
- Per-example logs under `.tmp/examples-start-logs/`.
- Main summary log path passed via `--main-log` (also under `.tmp/examples-start-logs/`).
- Generates a rerun list of failures at `.tmp/examples-rerun.txt` when `--write-rerun` is set.
- Provides start/stop/status/logs/tail/collect/rerun helpers via `run.sh`.
- Background option keeps the process running with a pidfile; `stop` cleans it up.
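The start/background flow described above can be sketched roughly as follows. This is an illustrative Python sketch only: the real helper is the `run.sh` shell script, and the function names and pidfile path here are assumptions, not the skill's actual implementation.

```python
import os
import subprocess

def build_invocation(extra_args=()):
    """Assemble the runner command and its auto-mode environment."""
    env = dict(os.environ, EXAMPLES_INTERACTIVE_MODE="auto")
    cmd = ["uv", "run", "examples/run_examples.py", *extra_args]
    return cmd, env

def start(extra_args=(), background=False, pidfile=".tmp/examples.pid"):
    cmd, env = build_invocation(extra_args)
    if background:
        # Detach and record the PID so a later `stop` can signal the process.
        proc = subprocess.Popen(cmd, env=env)
        os.makedirs(os.path.dirname(pidfile), exist_ok=True)
        with open(pidfile, "w") as f:
            f.write(str(proc.pid))
        return proc.pid
    return subprocess.run(cmd, env=env).returncode
```

The pidfile is what lets `status` and `stop` work after the starting shell exits; `stop` would read the PID back, signal it, and delete the file.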
### Usage

```sh
# Start (auto mode; interactive included by default)
.codex/skills/examples-auto-run/scripts/run.sh start [extra args to run_examples.py]
# Examples:
.codex/skills/examples-auto-run/scripts/run.sh start --filter basic
.codex/skills/examples-auto-run/scripts/run.sh start --include-server --include-audio

# Check status
.codex/skills/examples-auto-run/scripts/run.sh status

# Stop running job
.codex/skills/examples-auto-run/scripts/run.sh stop

# List logs
.codex/skills/examples-auto-run/scripts/run.sh logs

# Tail latest log (or specify one)
.codex/skills/examples-auto-run/scripts/run.sh tail
.codex/skills/examples-auto-run/scripts/run.sh tail main_20260113-123000.log

# Collect rerun list from a main log (defaults to latest main_*.log)
.codex/skills/examples-auto-run/scripts/run.sh collect

# Rerun only failed entries from rerun file (auto mode)
.codex/skills/examples-auto-run/scripts/run.sh rerun
```
### Defaults (overridable via env)

- `EXAMPLES_INTERACTIVE_MODE=auto`
- `EXAMPLES_INCLUDE_INTERACTIVE=1`
- `EXAMPLES_INCLUDE_SERVER=0`
- `EXAMPLES_INCLUDE_AUDIO=0`
- `EXAMPLES_INCLUDE_EXTERNAL=0`
- Auto-approvals in auto mode: `APPLY_PATCH_AUTO_APPROVE=1`, `SHELL_AUTO_APPROVE=1`, `AUTO_APPROVE_MCP=1`
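The precedence implied by "overridable via env" can be illustrated with a small sketch: built-in defaults are lowest priority, the caller's environment overrides them, and explicit per-invocation overrides win. The resolution really happens in shell inside `run.sh`; this Python version only shows the precedence order.

```python
import os

# Skill defaults from the list above.
DEFAULTS = {
    "EXAMPLES_INTERACTIVE_MODE": "auto",
    "EXAMPLES_INCLUDE_INTERACTIVE": "1",
    "EXAMPLES_INCLUDE_SERVER": "0",
    "EXAMPLES_INCLUDE_AUDIO": "0",
    "EXAMPLES_INCLUDE_EXTERNAL": "0",
}

def effective_env(overrides=None):
    """Defaults < caller's environment < explicit overrides."""
    env = dict(DEFAULTS)
    for key in DEFAULTS:
        if key in os.environ:
            env[key] = os.environ[key]
    if overrides:
        env.update(overrides)
    return env
```

So exporting `EXAMPLES_INCLUDE_SERVER=1` before calling `run.sh start` has the same effect as passing `--include-server`.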
### Log locations

- Main logs: `.tmp/examples-start-logs/main_*.log`
- Per-example logs (from `run_examples.py`): `.tmp/examples-start-logs/<module_path>.log`
- Rerun list: `.tmp/examples-rerun.txt`
- Stdout logs: `.tmp/examples-start-logs/stdout_*.log`
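As a rough illustration of how a per-example log path might be derived from `<module_path>`: the exact mapping used by `run_examples.py` is not documented here, so the flattening scheme below (strip `.py`, replace path separators with dots) is purely an assumption.

```python
from pathlib import Path

LOG_DIR = Path(".tmp/examples-start-logs")

def example_log_path(module_path: str) -> Path:
    # Assumed mapping: strip the .py suffix and flatten "/" to "." so every
    # example gets a unique flat log file directly under LOG_DIR.
    stem = module_path.removesuffix(".py").replace("/", ".")
    return LOG_DIR / f"{stem}.log"
```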
### Notes

- The runner delegates to `uv run examples/run_examples.py`, which already writes per-example logs and supports `--collect`, `--rerun-file`, and `--print-auto-skip`.
- `start` uses `--write-rerun` so failures are captured automatically.
- If `.tmp/examples-rerun.txt` exists and is non-empty, invoking the skill with no args runs `rerun` by default.
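The collect step can be pictured as a small log scan that extracts failed entries from a main log and writes them to the rerun file. Sketch only: the actual main-log line format, including the `FAIL` prefix assumed below, is an assumption rather than the documented output of `run_examples.py`.

```python
import re
from pathlib import Path

# Assumed main-log line format: "FAIL examples/foo/bar.py ...". The real
# format produced by run_examples.py may differ.
FAIL_RE = re.compile(r"^FAIL\s+(\S+)")

def collect_failures(main_log, rerun_file=".tmp/examples-rerun.txt"):
    failures = [m.group(1)
                for line in Path(main_log).read_text().splitlines()
                if (m := FAIL_RE.match(line))]
    out = Path(rerun_file)
    out.parent.mkdir(parents=True, exist_ok=True)
    # One module path per line, matching what `rerun` would consume.
    out.write_text("".join(f"{f}\n" for f in failures))
    return failures
```

A subsequent `rerun` then only needs to read this file and re-invoke the runner with those entries.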
### Behavioral validation (Codex/LLM responsibility)

The runner does not perform any automated behavioral validation. After every foreground start or rerun, Codex must manually validate all exit-0 entries:

- Read the example source (and comments) to infer intended flow, tools used, and expected key outputs.
- Open the matching per-example log under `.tmp/examples-start-logs/`.
- Confirm the intended actions/results occurred; flag omissions or divergences.
- Do this for all passed examples, not just a sample.
- Report immediately after the run with concise citations to the exact log lines that justify the validation.