Awesome-omni-skill codex-agent
Invoke OpenAI Codex CLI for coding and complex tasks. ALWAYS use the codex_agent_direct TOOL — NEVER exec codex directly. Direct codex exec opens an interactive TUI that hangs and gets killed. The tool handles non-interactive execution correctly.
git clone https://github.com/diegosouzapw/awesome-omni-skill
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/ai-agents/codex-agent" ~/.claude/skills/diegosouzapw-awesome-omni-skill-codex-agent && rm -rf "$T"
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/ai-agents/codex-agent" ~/.openclaw/skills/diegosouzapw-awesome-omni-skill-codex-agent && rm -rf "$T"
Skill file: `skills/ai-agents/codex-agent/SKILL.md` (the Codex CLI itself is a global npm install; see Prerequisites)
Codex Agent — Direct Tool
CRITICAL: NEVER run `exec codex ...` directly. Running codex without the tool opens an interactive TUI that hangs and gets killed with signal 9. Always use the `codex_agent_direct` tool below.
Use your own model first. Only delegate to Codex when the user explicitly asks, or the task is large/multi-file.
Primary Method: `codex_agent_direct` Tool
The `codex_agent_direct` tool invokes Codex CLI non-interactively (`codex exec`) with structured JSON output. This is the preferred method for all Codex interactions.
```
codex_agent_direct(
  prompt: "Your detailed task description",
  workspace: "/path/to/project",   // optional, passed as --cd
  model: "gpt-4.1-mini",           // optional, default "gpt-4.1-mini"
  mode: "agent" | "plan" | "ask"   // optional, default "agent"
)
```
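The plumbing behind the tool is not shown here, but as a rough sketch of what a non-interactive wrapper like this might do (the flags `--cd`, `--json`, `--model`, `--approval`, and `--plan` all appear elsewhere in this document; the exact mode-to-flag mapping below is an assumption):

```python
import subprocess

# Illustrative mapping from tool modes to CLI flags. --plan and --approval
# are named in this doc; pairing them this way is an assumption.
MODE_FLAGS = {
    "agent": ["--approval", "auto"],     # execute edits and commands
    "plan":  ["--plan"],                 # propose a plan before changes
    "ask":   ["--approval", "suggest"],  # read-only: explain, don't run
}

def build_codex_argv(prompt, workspace=None, model=None, mode="agent"):
    """Build a non-interactive `codex exec` argv from the tool parameters."""
    argv = ["codex", "exec", "--json"]   # never bare `codex`: that opens the TUI
    if workspace:
        argv += ["--cd", workspace]
    if model:
        argv += ["--model", model]
    argv += MODE_FLAGS[mode]
    return argv + [prompt]

def run_codex(prompt, **kwargs):
    # capture_output detaches stdout/stderr from the TTY, so no TUI can hang
    return subprocess.run(build_codex_argv(prompt, **kwargs),
                          capture_output=True, text=True, timeout=600)
```

The point of the sketch is the shape, not the exact flags: the prompt is a plain argument, everything interactive is disabled, and output is captured rather than inherited.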
Modes
- agent (default): Execute code changes and commands. Codex runs with `auto` approval — it reads, edits, and runs commands in the workspace. Use for "implement", "refactor", "fix" tasks.
- plan: Plan before executing. Passes `--plan` to `codex exec`. Codex proposes a plan before making changes. Use when the user says "plan first", "design approach", or for large/ambiguous tasks.
- ask: Read-only / suggest-only. Codex is set to `suggest` approval mode — it explains and proposes but does not auto-run. Use for "explain", "where is", "why does" — no edits.
When to Use
- User explicitly asks for "Codex", "openai codex", or "use Codex"
- Coding tasks: write, edit, refactor, review code
- Complex multi-file operations: Codex understands full codebase context
- Analysis: deep code analysis, dependency checking, architecture review
- Task is large, multi-file, or benefits from GPT-5 Codex reasoning
Model Selection
| Model | Use when |
|---|---|
| `gpt-4.1-mini` | Default — fast, capable, cost-effective |
| | Complex reasoning, large codebases |
| | Maximum capability (requires ChatGPT Pro/Plus auth) |
Output
The tool returns structured JSON:
- `output`: Codex's text response (may include JSON transcript from `--json` flag)
- `exit_code`: 0 for success
- `duration_ms`: execution time
- `files_created`: list of new file paths detected after the run
- `truncated`: true if output was capped at 100KB
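A hedged sketch of how a caller might turn that result into a one-line status (the field names are from the list above; the overall JSON envelope shape is assumed):

```python
def summarize_result(result: dict) -> str:
    """Condense the tool's structured JSON into a short status line.

    Field names (output, exit_code, duration_ms, files_created, truncated)
    come from the skill doc; everything else here is illustrative.
    """
    ok = result.get("exit_code") == 0
    status = "ok" if ok else f"failed (exit {result.get('exit_code')})"
    parts = [f"Codex {status} in {result.get('duration_ms', 0)} ms"]
    if result.get("files_created"):
        parts.append(f"created {len(result['files_created'])} file(s)")
    if result.get("truncated"):
        parts.append("output truncated at 100KB")
    return "; ".join(parts)
```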
There Is No Fallback — Handle Errors Directly
DO NOT exec codex under any circumstances. Running `codex` or `exec codex ...` directly opens an interactive TUI that hangs forever and gets killed with signal 9. There is no safe direct exec fallback.
When the tool returns `ok: false` or an error:
- Read the `error` or `output` field — it contains the failure reason (e.g. "not a trusted directory", "model not found")
- Report it to the user directly — e.g. "Codex failed: [reason]. You may need to run `codex` interactively first to trust this directory."
- Do NOT retry with exec, sessions_spawn, process polling, or any other approach
- Do NOT start a background process and poll it
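Following those rules, error handling reduces to formatting a message and stopping; there is no retry path. A minimal sketch, assuming the result dict carries the `ok`/`error`/`output` fields named above:

```python
def report_failure(result: dict) -> str:
    """Format a user-facing message for a failed run.

    Per the rules above, the only action on failure is to report; never
    fall back to exec, background processes, or polling. The field names
    come from this doc; the trusted-directory hint mirrors its example.
    """
    reason = result.get("error") or result.get("output") or "unknown error"
    msg = f"Codex failed: {reason}."
    if "not a trusted directory" in reason:
        msg += " You may need to run codex interactively first to trust this directory."
    return msg
```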
When the tool returns empty output:
Report to the user: "Codex returned no output. The workspace may not be a trusted git repo, or Codex may need re-authentication."
When output is raw JSONL:
The tool parses JSONL and returns clean text. If you still see raw
{"type":...} JSON in the output, just summarize what you can see and tell the user the response format was unexpected.
Common errors and what to tell the user:
| Error | Tell user |
|---|---|
| Not a trusted directory | Run interactively in the workspace first to trust it, or pass |
| Model not found | Your ChatGPT account uses — omit to use config default |
| | Codex authentication may have expired — run interactively to re-login |
| Signal 9 / process killed | The codex TUI was opened instead of non-interactive exec — this means exec was called directly, which is forbidden |
Prerequisites
- Codex CLI installed: `npm i -g @openai/codex`
- Authenticated: run `codex` once interactively to sign in with your ChatGPT account or API key
- Verify: `python3 plugin/scripts/codex_agent_direct.py --check`
Project Config
The project includes `.codex/config.toml` with sensible defaults:
- Model: `gpt-4.1-mini`
- Approval: `on-request` (Codex asks before running commands)
- Web search: `cached`
Override per-run with `--model` or `--approval` flags.
Do NOT
- Do NOT run `codex` interactively from exec — it opens a TUI and hangs
- Do NOT use `codex` without the `exec` subcommand for automation
- Do NOT expose API keys in prompts or command arguments