install
source · Clone the upstream repo
git clone https://github.com/openclaw/skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/openclaw/skills "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/skills/amdf01-debug/sw-output-driven-dev" ~/.claude/skills/clawdbot-skills-sw-output-driven-dev \
  && rm -rf "$T"
manifest: skills/amdf01-debug/sw-output-driven-dev/SKILL.md
Output-Driven Development Skill
Define success criteria and verification BEFORE coding. Agents prove their work.
Trigger
Trigger phrases: "define success criteria", "output-driven", "verify before done", "prove it works", "acceptance criteria"
Process
- Define output: What exactly should the result look like?
- Write verification: How will we prove it works?
- Build: Implement the solution
- Verify: Run verification, show evidence
- Ship: Only after verification passes
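The five steps above can be sketched as a script. Everything here is a placeholder sketch, not part of the skill itself: the criterion text and the `verify` function are hypothetical stand-ins for a real check.

```shell
# 1. Define output + 2. Write verification (before building anything)
criterion="build exits 0"
verify() { true; }   # stand-in for the real check, e.g. running the build

# 3. Build: implement the solution (omitted in this sketch)

# 4. Verify: run the check and print evidence
if verify; then
  echo "PASS: $criterion"
else
  echo "FAIL: $criterion"
  exit 1
fi

# 5. Ship: this line is only reached if verification passed
echo "ship"
```

The point of the ordering is that the `verify` function exists before any "build" work starts, so "done" is defined by the script, not by the implementer's impression.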
Template
    # Task: [Description]

    ## Success Criteria
    - [ ] [Specific, measurable criterion 1]
    - [ ] [Specific, measurable criterion 2]
    - [ ] [Specific, measurable criterion 3]

    ## Verification Plan
    For each criterion, how to verify:
    1. [Run command X, expect output Y]
    2. [Open URL, see element Z]
    3. [Check file, contains content W]

    ## Build Log
    [What was implemented and how]

    ## Verification Results
    - Criterion 1: ✅ PASS — [evidence]
    - Criterion 2: ✅ PASS — [evidence]
    - Criterion 3: ❌ FAIL — [what went wrong, fix plan]
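A verification plan from the template can be made reproducible by writing it as a script. This is a hypothetical example: the version string, the config path, and the expected key are all invented for illustration.

```shell
set -e

# 1. "Run command X, expect output Y"
echo "v1.2.3" | grep -q "v1.2.3"   # stand-in for: mytool --version
echo "PASS: version string present"

# 2. "Open URL, see element Z" would be e.g.:
#    curl -fsS "$URL" | grep -q "element"   (skipped in this sketch)

# 3. "Check file, contains content W"
printf 'retries: 3\n' > /tmp/odd-demo.yaml   # stand-in for the real config
grep -q "retries: 3" /tmp/odd-demo.yaml
echo "PASS: config key present"
```

Because of `set -e`, the script exits non-zero at the first failed check, so a clean exit is itself evidence that every criterion passed.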
Rules
- Never claim "done" without showing verification evidence
- "Should work" is not verification — run it and show the output
- If you can't define success criteria, you don't understand the task
- Verification should be reproducible by anyone
- Failed verification → fix → re-verify (don't skip)
- Screenshots, logs, test output > "I checked and it works"
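One way to satisfy the "logs over claims" rule is to capture verification output to a file that anyone can inspect later. The test command below is a placeholder; substitute your real runner.

```shell
mkdir -p evidence

# Capture the full output of the verification run as an artifact
echo "2 passed, 0 failed" | tee evidence/test-run.log   # stand-in for: pytest -q

# The non-empty log is the evidence attached to the "done" claim
test -s evidence/test-run.log
echo "evidence captured in evidence/test-run.log"
```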