Awesome-vibe-coding writing-plans

Use when you have a spec or requirements for a multi-step task, before touching code

install
source · Clone the upstream repo
git clone https://github.com/adriannoes/awesome-vibe-coding
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/adriannoes/awesome-vibe-coding "$T" && mkdir -p ~/.claude/skills && cp -r "$T/cursor/skills/writing-plans" ~/.claude/skills/adriannoes-awesome-vibe-coding-writing-plans && rm -rf "$T"
manifest: cursor/skills/writing-plans/SKILL.md
source content

Writing Plans

Source: obra/superpowers (MIT) · v5.0.5

Overview

Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, code, testing, docs they might need to check, how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.

Assume they are a skilled developer, but know almost nothing about our toolset or problem domain. Assume they don't know good test design very well.

Announce at start: "I'm using the writing-plans skill to create the implementation plan."

Context: This should be run in a dedicated worktree (created by brainstorming skill).

Save plans to:

`docs/superpowers/plans/YYYY-MM-DD-<name>.md` (or `docs/plans/<name>.md` if no superpowers layout)
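
For example, a plan written on 2026-02-03 for a hypothetical CSV-import feature would land at `docs/superpowers/plans/2026-02-03-csv-import.md` (date and name are illustrative).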

Scope Check

If the spec covers multiple independent subsystems, it should have been broken into sub-project specs during brainstorming. If it wasn't, suggest breaking this into separate plans — one per subsystem. Each plan should produce working, testable software on its own.
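
For example, a spec covering both a CSV-import API and an admin CLI would typically become two plans, say `docs/plans/csv-import-api.md` and `docs/plans/admin-cli.md` (names illustrative), each shippable and testable on its own.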

File Structure

Before defining tasks, map out which files will be created or modified and what each one is responsible for. This is where decomposition decisions get locked in.

  • Design units with clear boundaries and well-defined interfaces. Each file should have one clear responsibility.
  • You reason best about code you can hold in context at once, and your edits are more reliable when files are focused. Prefer smaller, focused files over large ones that do too much.
  • Files that change together should live together. Split by responsibility, not by technical layer.
  • In existing codebases, follow established patterns. If the codebase uses large files, don't unilaterally restructure. But if a file you're modifying has grown unwieldy, including a split in the plan is reasonable.

This structure informs the task decomposition. Each task should produce self-contained changes that make sense independently.
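
A sketch of what this section might look like for a hypothetical CSV-import feature (all paths and responsibilities are invented for illustration):

```
## File Structure

- Create: `src/importer/parser.py` (parse and validate CSV rows)
- Create: `src/importer/loader.py` (batch-write validated rows to the database)
- Modify: `src/models/record.py` (add a constructor for validated CSV rows)
- Create: `tests/importer/test_parser.py` (parser unit tests)
```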

Bite-Sized Task Granularity

Each step is one action (2-5 minutes):

  • "Write the failing test" - step
  • "Run it to make sure it fails" - step
  • "Implement the minimal code to make the test pass" - step
  • "Run the tests and make sure they pass" - step
  • "Commit" - step

Plan Document Header

Every plan MUST start with this header:

# [Feature Name] Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use subagent-driven-development (recommended) or executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** [One sentence describing what this builds]

**Architecture:** [2-3 sentences about approach]

**Tech Stack:** [Key technologies/libraries]

---
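
A filled-in header for the same hypothetical CSV-import feature might read (goal, architecture, and stack are invented for illustration):

```
# CSV Import Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use subagent-driven-development (recommended) or executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Let users bulk-import records from a CSV file.

**Architecture:** A parser module validates rows and a loader writes them to the database in batches; failures are collected and reported at the end.

**Tech Stack:** Python, pytest, SQLAlchemy
```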

Task Structure

Each task should include:

  • Files: Create/Modify/Test with exact paths
  • Steps: Checkbox format (`- [ ]`) with complete code snippets
  • Commands: Exact commands with expected output

Example structure:

### Task N: [Component Name]

**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`

- [ ] **Step 1: Write the failing test**

```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```

- [ ] **Step 2: Run test to verify it fails**

Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"

- [ ] **Step 3: Write minimal implementation**
- [ ] **Step 4: Run test to verify it passes**
- [ ] **Step 5: Commit**
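
Steps 3-5 expand the same way. A sketch using a hypothetical `slugify` helper (module path, commands, and commit message are invented for illustration):

- [ ] **Step 3: Write minimal implementation**

```python
# src/utils/slug.py (hypothetical): the least code that makes the test pass
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")
```

- [ ] **Step 4: Run test to verify it passes**

Run: `pytest tests/utils/test_slug.py -v`
Expected: PASS (1 passed)

- [ ] **Step 5: Commit**

Run: `git add -A && git commit -m "feat: add slugify helper"`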

Remember

  • Exact file paths always
  • Complete code in plan (not "add validation")
  • Exact commands with expected output
  • Reference relevant skills when plan says to
  • DRY, YAGNI, TDD, frequent commits

Plan Review Loop

After writing the complete plan:

  1. Review the plan critically — identify gaps, unclear steps, or missing verifications (see the sample checklist below)
  2. If issues found: fix them and re-review
  3. If approved: proceed to execution handoff

Review guidance:

  • Same agent that wrote the plan fixes it (preserves context)
  • If loop exceeds 3 iterations, surface to human for guidance
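
A minimal review checklist, assuming the conventions above (a sketch, not prescribed by the upstream skill):

```
- [ ] Every task lists exact Create/Modify/Test paths
- [ ] Every step contains complete code or an exact command
- [ ] Every test step states its expected output
- [ ] Each task leaves the codebase working and committable
```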

Execution Handoff

After saving the plan, offer execution choice:

"Plan complete and saved to

docs/plans/<name>.md
. Two execution options:

1. Subagent-Driven (recommended) - Dispatch a fresh subagent per task, review between tasks, fast iteration

2. Inline Execution - Execute tasks in this session using executing-plans, batch execution with checkpoints

Which approach?"

If Subagent-Driven chosen:

  • Use subagent-driven-development skill (or mcp_task with appropriate subagent)
  • Fresh subagent per task + two-stage review

If Inline Execution chosen:

  • Use executing-plans skill
  • Batch execution with checkpoints for review