Awesome-omni-skill ralph
Agent-agnostic autonomous loop creator. Use when asked to 'use ralph', 'ralph this', 'reverse ralph', 'decompose a feature', or '/ralph decompose'. Forward mode implements features end-to-end; decompose mode breaks existing features into atomic user stories for reimplementation.
```sh
# Clone the whole repo
git clone https://github.com/diegosouzapw/awesome-omni-skill

# Or install just this skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/ai-agents/ralph" ~/.claude/skills/diegosouzapw-awesome-omni-skill-ralph && rm -rf "$T"
```
skills/ai-agents/ralph/SKILL.md

Ralph - Agent-Agnostic Autonomous Loop Creator
Ralph creates autonomous coding loops that implement features by breaking them into small user stories and completing them one at a time. Each iteration spawns a fresh headless agent with clean context. Memory persists via git,
progress.txt, and prd.json. Ralph is not tied to any single AI agent — the user chooses which agent and model powers each loop.
Workflow
Step 1: Understand the Feature
If the user tagged a markdown file via `@`, read it as the feature spec. Otherwise, ask the user to describe the feature.
Ask clarifying questions if needed:
- What problem does this solve?
- What are the key user actions?
- What's out of scope?
- How do we know it's done?
Step 2: Configure the Loop
Ask the user four questions to configure the loop:
2a. Which AI agent?
Present this list and ask the user to choose:
- `claude` — Claude Code
- `droid` — Factory Droid
- `codex` — OpenAI Codex
- `opencode` — OpenCode
- `gemini` — Gemini CLI
- `copilot` — GitHub Copilot
- `cc-compatible` — Claude Code-compatible binary (user provides binary name, e.g., `zai`, `minimax`, `kimi`)
- `custom` — Fully custom command (user provides entire command template with `$PROMPT_FILE` as placeholder)
Headless command templates per agent (used when generating the loop script):
```sh
# claude — Claude Code
claude -p "$(cat "$PROMPT_FILE")" --dangerously-skip-permissions --model $MODEL

# droid — Factory Droid
droid exec --skip-permissions-unsafe -f "$PROMPT_FILE" --output-format text -m $MODEL

# codex — OpenAI Codex
codex exec --yolo -m $MODEL "$(cat "$PROMPT_FILE")"

# opencode — OpenCode
opencode run --yolo -m $MODEL "$(cat "$PROMPT_FILE")"

# gemini — Gemini CLI
gemini -p "$(cat "$PROMPT_FILE")" --yolo -m $MODEL

# copilot — GitHub Copilot
copilot -p "$(cat "$PROMPT_FILE")" --yolo --model $MODEL

# cc-compatible — Same as Claude Code, with user-provided binary name
$BINARY -p "$(cat "$PROMPT_FILE")" --dangerously-skip-permissions --model $MODEL

# custom — User provides entire command template
# The template must include $PROMPT_FILE where the prompt file path should go
```
If the user selects `cc-compatible`, ask for the binary name (e.g., `zai`). If the user selects `custom`, ask for the full command template and instruct them to use `$PROMPT_FILE` where the prompt file path belongs.
2b. Which model?
Ask the user for the model identifier (free text input). Examples:
- Claude Code: `opus`, `claude-opus-4-6`, `sonnet`, `claude-sonnet-4-5-20250929`
- Factory Droid: `claude-opus-4-6`, `o3`
- OpenAI Codex: `o3`, `o4-mini`
- OpenCode: `anthropic/claude-opus-4-6`, `openai/o3` (format: `provider/model`)
- Gemini CLI: `gemini-2.5-pro`, `gemini-2.0-flash`
- GitHub Copilot: `claude-sonnet-4-5`, `gpt-4o`
If the user says "default" or leaves it blank, omit the model flag entirely (use the agent's built-in default).
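One way a generated script can drop the model flag when none was chosen is to build the argument list as a bash array. This is a minimal sketch, not the skill's actual template; the flag names shown are Claude Code's, and other agents use their own:

```shell
# Assemble shared agent flags; --model is added only when a model was chosen.
agent_flags() {
  local model="${1:-}"
  local flags=(--dangerously-skip-permissions)
  if [ -n "$model" ]; then
    flags+=(--model "$model")
  fi
  # Print one flag per line (a real script would expand "${flags[@]}" into the command)
  printf '%s\n' "${flags[@]}"
}
```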
2c. Loop name?
Suggest a name based on the feature in kebab-case (e.g., `add-task-priorities`). The user can accept or provide a different name. This becomes the filename: `.ralph/<loop-name>.sh`
2d. Auto-push and create PR?
Ask the user: "Should Ralph automatically push the branch and create a PR when the loop finishes?"
- Yes — When the loop ends (all stories complete or max iterations reached), the script will push the branch to `origin` and create a pull request via `gh pr create`. The base branch (e.g., `main` or `master`) is auto-detected at generation time by checking which branch exists on the remote.
- No — The script only runs locally. All commits stay local. The user pushes and creates PRs themselves.
If the user says yes, detect the base branch now (at generation time) by checking:
- Does `refs/remotes/origin/main` exist? → use `main`
- Does `refs/remotes/origin/master` exist? → use `master`
- Neither → default to `main`

Bake the resolved base branch directly into the generated script as `DEFAULT_BRANCH="main"` (or `DEFAULT_BRANCH="master"`).
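The detection steps can be sketched with `git show-ref` (a sketch, assuming a fetched remote named `origin`; outside a repo, or when neither branch exists, it falls back to `main`):

```shell
# Detect the base branch at generation time.
if git show-ref --verify --quiet refs/remotes/origin/main 2>/dev/null; then
  DEFAULT_BRANCH="main"
elif git show-ref --verify --quiet refs/remotes/origin/master 2>/dev/null; then
  DEFAULT_BRANCH="master"
else
  DEFAULT_BRANCH="main"   # neither ref found: default to main
fi
```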
Step 3: Create prd.json
Generate a `prd.json` file in the project root:
```json
{
  "project": "[Project Name]",
  "branchName": "ralph/[feature-name-kebab-case]",
  "description": "[Feature description]",
  "userStories": [
    {
      "id": "US-001",
      "title": "[Story title]",
      "description": "As a [user], I want [feature] so that [benefit]",
      "acceptanceCriteria": [
        "Criterion 1",
        "Criterion 2",
        "Typecheck passes"
      ],
      "priority": 1,
      "passes": false,
      "notes": ""
    }
  ]
}
```
Step 4: Generate and Run the Loop
- Create the `.ralph/` directory in the project root if it doesn't exist.
- Check `.gitignore` — add `.ralph/`, `.ralph-archive/`, and `.ralph-last-branch` if missing (create `.gitignore` if it doesn't exist).
- Read `scripts/ralph.sh` (the reference template in this skill) to understand the loop structure.
- Generate `.ralph/<loop-name>.sh` using the reference template's structure, with the selected agent command baked in:
  - Use the exact command template from Step 2a, substituting the user's model from Step 2b
  - If no model was specified, omit the model flag from the command
  - Set `PROMPT_FILE="$SCRIPT_DIR/<loop-name>-prompt.md"` (co-located with the script)
  - Set `AGENT_BIN` to the selected executable (for `custom`, use the command's executable token)
  - Keep all existing logic: archive, branch tracking, progress init, completion detection, 2s sleep between iterations
  - If the user opted for auto-push+PR in Step 2d: include the `finalize()` function with `DEFAULT_BRANCH` set to the resolved base branch, and `AUTO_PUSH_PR="true"`. Otherwise set `AUTO_PUSH_PR="false"` and omit the `finalize()` function.
- Copy `scripts/prompt.md` (from this skill) → `.ralph/<loop-name>-prompt.md`
- Make the script executable: `chmod +x .ralph/<loop-name>.sh`
- Tell the user: Run with `.ralph/<loop-name>.sh [max_iterations]`
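The `.gitignore` step above can be sketched as an idempotent append (a minimal sketch; `grep -qxF` matches the exact whole line, so re-running never duplicates entries):

```shell
# Add Ralph's working files to .gitignore, creating it if missing.
GITIGNORE=".gitignore"
touch "$GITIGNORE"
for entry in .ralph/ .ralph-archive/ .ralph-last-branch; do
  grep -qxF "$entry" "$GITIGNORE" || printf '%s\n' "$entry" >> "$GITIGNORE"
done
```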
Critical Rules for User Stories
Size: Small but Substantive
Each story MUST be completable in ONE iteration. If you can't describe it in 2-3 sentences, it's too big. But each story must also involve meaningful work — if it's a single find-and-replace or a one-line edit, it's too small and should be combined with related work.
Too small (combine with related work):
- "Replace nvidia-smi with rocm-smi in one file" → combine into a broader documentation accuracy story
- "Add one missing env var to README" → combine with other doc gaps
- "Fix a typo in a config file" → combine with other config improvements
- Any story that a developer could complete in under 5 minutes
Right-sized:
- Add a database column and migration
- Add a UI component to an existing page
- Update a server action with new logic
- Add a filter dropdown to a list
- Fix a validation bug, add tests for the fix, and update docs
- Consolidate duplicated helper functions across multiple files
Too big (split these):
- "Build the entire dashboard" → schema, queries, UI components, filters
- "Add authentication" → schema, middleware, login UI, session handling
Order: Dependencies First
- Schema/database changes (migrations)
- Server actions / backend logic
- UI components that use the backend
- Dashboard/summary views
Acceptance Criteria: Verifiable
Good:
- "Add `status` column with default 'pending'"
- "Filter dropdown has options: All, Active, Completed"
- "Typecheck passes"
Bad:
- "Works correctly"
- "Good UX"
Always include:
- "Typecheck passes" on every story
- "Verify in browser" on UI stories
Example
User says: "use ralph to add task priorities"
Step 1: Read the feature description, ask clarifying questions.
Step 2: Ask: Which agent? → `claude`. Which model? → `sonnet`. Loop name? → `add-task-priorities`. Auto-push+PR? → yes.
Step 3: Create prd.json:
```json
{
  "project": "TaskApp",
  "branchName": "ralph/task-priority",
  "description": "Add priority levels (high/medium/low) to tasks",
  "userStories": [
    {
      "id": "US-001",
      "title": "Add priority field to database",
      "description": "As a developer, I need to store task priority.",
      "acceptanceCriteria": [
        "Add priority column: 'high' | 'medium' | 'low' (default 'medium')",
        "Migration runs successfully",
        "Typecheck passes"
      ],
      "priority": 1,
      "passes": false,
      "notes": ""
    },
    {
      "id": "US-002",
      "title": "Display priority badge on task cards",
      "description": "As a user, I want to see priority at a glance.",
      "acceptanceCriteria": [
        "Colored badge: red=high, yellow=medium, gray=low",
        "Visible without hovering",
        "Typecheck passes",
        "Verify in browser"
      ],
      "priority": 2,
      "passes": false,
      "notes": ""
    }
  ]
}
```
Step 4: Generate `.ralph/add-task-priorities.sh` (with `claude -p "$(cat "$PROMPT_FILE")" --dangerously-skip-permissions --model sonnet` as the agent command, plus `AUTO_PUSH_PR="true"` and `DEFAULT_BRANCH="main"` since the user opted for auto-push+PR), copy the prompt to `.ralph/add-task-priorities-prompt.md`, and add `.ralph/`, `.ralph-archive/`, and `.ralph-last-branch` to `.gitignore`.

`prd.json` created with 2 user stories. Run `.ralph/add-task-priorities.sh` to start autonomous execution.
How Ralph Executes
Each iteration, a fresh headless agent:
- Reads `prd.json` and `progress.txt`
- Picks the highest-priority story where `passes: false`
- Implements it
- Runs quality checks (typecheck, lint, test)
- Commits if passing (or commits progress notes if stuck — see "If You Get Stuck" in the prompt)
- Updates `prd.json` to mark `passes: true` (or updates `notes` if blocked)
- Appends learnings to `progress.txt`
- Exits
Loop continues until all stories pass or max iterations hit.
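The story-selection step can be sketched as a jq query (a sketch, assuming `jq` is installed; in practice the prompt instructs the agent to do this itself, and the helper name here is hypothetical):

```shell
# Print the id of the highest-priority (lowest number) story with passes=false,
# or nothing if every story passes.
next_story_id() {
  jq -r '[.userStories[] | select(.passes == false)]
         | sort_by(.priority) | .[0].id // empty' "$1"
}
```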
If the user opted for auto-push+PR (Step 2d), then after the loop ends (all stories complete or max iterations reached):
- The loop script pushes the branch to origin
- Creates a PR via `gh pr create` from the ralph branch to the default branch
- If the `gh` CLI is not available or PR creation fails, prints manual instructions instead of aborting
- Push or PR failures are handled gracefully — they never mask a successful loop run
If the user chose local-only, the script simply exits after the loop. All commits remain local.
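A minimal sketch of what `finalize()` can look like (the real implementation lives in `scripts/ralph.sh`; the key property is that every failure path degrades to printed instructions and a zero exit status):

```shell
# Push the current branch and open a PR; never let failures mask a good run.
finalize() {
  local branch
  branch="$(git rev-parse --abbrev-ref HEAD)"
  git push -u origin "$branch" \
    || { echo "Push failed; push '$branch' manually."; return 0; }
  command -v gh >/dev/null 2>&1 \
    || { echo "gh CLI not found; open the PR manually."; return 0; }
  gh pr create --base "$DEFAULT_BRANCH" --head "$branch" --fill \
    || echo "PR creation failed; create it manually."
}
```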
Files Reference
| File | Purpose |
|---|---|
| `prd.json` | User stories with pass/fail status |
| `progress.txt` | Append-only learnings for future iterations |
| `.ralph/<loop-name>.sh` | Generated loop script with agent command baked in |
| `.ralph/<loop-name>-prompt.md` | Prompt file for the loop (copy of `scripts/prompt.md`) |
Decompose Mode
Triggered by: `/ralph decompose <input>` or natural language like "reverse ralph this feature", "decompose X into a replication plan", "break this feature down into user stories".
What it does
Reverse Ralph takes any description of an existing feature and decomposes it — recursively and completely — into an atomized `prd.json` that a forward Ralph loop can execute to greenfield-reimplement the feature.
Decomposition is behavioral and functional: it captures what the feature does and how it behaves from the outside, not how it is built internally. The forward loop handles all implementation decisions.
Inputs accepted
- URLs (fetched and spidered up to 2 hops of linked documentation)
- Local file paths (read directly)
- Natural language descriptions
- Any combination of the above
Agent workflow
Follow the detailed instructions in `scripts/decompose-init-prompt.md` for steps 1–7 below.

1. Gather inputs. Read all provided sources. For URLs, fetch the page and follow documentation links up to 2 hops deep (same documentation domain only). Synthesize all gathered content into a capability surface: a structured description of all observable behaviors, states, inputs, outputs, configuration options, and integrations of the feature.
2. Ask the loop name. Ask the user what to name this decomposition run. Used for the generated script filename: `.ralph/decompose-<n>.sh`.
3. Ask which execution agent to use for the decomposition loop. Same agent matrix as forward Ralph: `claude`, `droid`, `codex`, `opencode`, `gemini`, `copilot`, `cc-compatible`, or `custom`.
4. Ask which model (optional — leave blank to use the CLI default).
5. Seed `decomp.json`. Generate the initial state file with top-level capability clusters extracted from the capability surface. Each cluster gets status `needs_split`.
6. Generate `.ralph/decompose-<n>.sh` from `scripts/decompose.sh`, substituting `__AGENT__`, `__MODEL__`, and `__LOOP_NAME__`. Copy `scripts/decompose-prompt.md` to `.ralph/decompose-<n>-prompt.md`.
7. Instruct the user to run: `.ralph/decompose-<n>.sh [max_iterations]`

Default `max_iterations` is 50. The loop runs autonomously until all leaf nodes in `decomp.json` have status `atomic` (split parent nodes get status `split`), then emits `prd.json`.
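The loop's completion condition can be sketched as a jq predicate (a sketch, assuming `jq` is installed and that every node carries a `status` field; the exact tree shape is defined by the skill's `scripts/decompose.sh`):

```shell
# Succeed (exit 0) when no node anywhere in the tree still needs splitting.
decomp_done() {
  jq -e '[.. | objects | select(.status? == "needs_split")] | length == 0' "$1" >/dev/null
}
```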
Known Limitations
- Context window: The full `decomp.json` is appended to each iteration's prompt. For very large feature decompositions (hundreds of nodes), this may approach agent context limits. If this happens, increase `max_iterations` and let the loop resume across runs.
Decompose Files Reference
| File | Purpose |
|---|---|
| `scripts/decompose-init-prompt.md` | Initialization prompt for the orchestrating agent (steps 1–7) |
| `decomp.json` | Decomposition state tree with nodes and status |
| `prd.json` | Final output — forward-Ralph-compatible flat story list |
| `.ralph/decompose-<n>.sh` | Generated decomposition loop script |
| `.ralph/decompose-<n>-prompt.md` | Prompt file for the decomposition loop |