Compound-engineering-plugin ce-doc-review
Review requirements or plan documents using parallel persona agents that surface role-specific issues. Use when a requirements document or plan document exists and the user wants to improve it.
git clone https://github.com/EveryInc/compound-engineering-plugin
T=$(mktemp -d) && git clone --depth=1 https://github.com/EveryInc/compound-engineering-plugin "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/compound-engineering/skills/ce-doc-review" ~/.claude/skills/everyinc-compound-engineering-plugin-ce-doc-review && rm -rf "$T"
plugins/compound-engineering/skills/ce-doc-review/SKILL.md
Document Review
Review requirements or plan documents through multi-persona analysis. Dispatches specialized reviewer agents in parallel, auto-applies safe_auto fixes, and routes remaining findings through a four-option interaction (per-finding walk-through, LFG, Append-to-Open-Questions, Report-only) for user decision.
Interactive mode rules
- Pre-load the platform question tool before any question fires. In Claude Code, AskUserQuestion is a deferred tool — its schema is not available at session start. At the start of Interactive-mode work (before the routing question, per-finding walk-through questions, bulk-preview Proceed/Cancel, and Phase 5 terminal question), call ToolSearch with query select:AskUserQuestion to load the schema. Load it once, eagerly, at the top of the Interactive flow — do not wait for the first question site. On Codex (request_user_input) and Gemini (ask_user) this step is not required; the tools are loaded by default.
- The numbered-list fallback only applies on confirmed load failure. Presenting options as a numbered list and waiting for the user's reply is valid only when ToolSearch returns no match or the tool call explicitly fails. Rendering a question as narrative text because the tool feels inconvenient, because the model is in report-formatting mode, or because the instruction was buried in a long skill is a bug. A question that calls for a user decision must either fire the tool or fail loudly.
Phase 0: Detect Mode
Check the skill arguments for mode:headless. Arguments may contain a document path, mode:headless, or both. Tokens starting with mode: are flags, not file paths — strip them from the arguments and use the remaining token (if any) as the document path for Phase 1.
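As a minimal sketch of that flag-stripping rule (the function name and return shape are illustrative, not part of the skill):

```python
def parse_skill_args(args: str) -> tuple[bool, str | None]:
    """Split skill arguments into mode flags and an optional document path."""
    tokens = args.split()
    # Tokens starting with "mode:" are flags, never file paths.
    flags = [t for t in tokens if t.startswith("mode:")]
    paths = [t for t in tokens if not t.startswith("mode:")]
    headless = "mode:headless" in flags
    return headless, (paths[0] if paths else None)

# parse_skill_args("mode:headless docs/plans/my-plan.md")
# -> (True, "docs/plans/my-plan.md")
```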
If mode:headless is present, set headless mode for the rest of the workflow.
Headless mode changes the interaction model, not the classification boundaries. ce-doc-review still applies the same judgment about which tier each finding belongs in. The only difference is how non-safe_auto findings are delivered:
- safe_auto fixes are applied silently (same as interactive)
- gated_auto, manual, and FYI findings are returned as structured text for the caller to handle — no AskUserQuestion prompts, no interactive routing
- Phase 5 returns immediately with "Review complete" (no routing question, no terminal question)
The caller receives findings with their original classifications intact and decides what to do with them.
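A sketch of that delivery split, assuming a findings list already classified by tier; the apply_fix helper and the envelope shape are hypothetical, and the real output format is defined in references/synthesis-and-presentation.md:

```python
def deliver_headless(findings: list[dict]) -> dict:
    applied, returned = [], []
    for f in findings:
        if f["tier"] == "safe_auto":
            apply_fix(f)           # hypothetical helper: edits the document
            applied.append(f)
        else:                      # gated_auto, manual, and FYI
            returned.append(f)     # classification left intact for the caller
    return {"applied": applied, "findings": returned}
```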
Callers invoke headless mode by including mode:headless in the skill arguments, e.g.:
Skill("ce-doc-review", "mode:headless docs/plans/my-plan.md")
If mode:headless is not present, the skill runs in its default interactive mode with the routing question, walk-through, and bulk-preview behaviors documented in references/walkthrough.md and references/bulk-preview.md.
Phase 1: Get and Analyze Document
If a document path is provided: Read it, then proceed.
If no document is specified (interactive mode): Ask which document to review, or find the most recent in docs/brainstorms/ or docs/plans/ using a file-search/glob tool (e.g., Glob in Claude Code).
If no document is specified (headless mode): Output "Review failed: headless mode requires a document path. Re-invoke with: Skill("ce-doc-review", "mode:headless <path>")" without dispatching agents.
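A sketch of the most-recent lookup; the .md filter is an assumption, since the skill only says "most recent":

```python
from pathlib import Path

def most_recent_document() -> Path | None:
    candidates = [p for d in ("docs/brainstorms", "docs/plans")
                  for p in Path(d).glob("*.md")]
    # Newest by modification time; None if neither directory has documents.
    return max(candidates, key=lambda p: p.stat().st_mtime, default=None)
```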
Classify Document Type
After reading, classify the document:
- requirements -- from docs/brainstorms/, focuses on what to build and why
- plan -- from docs/plans/, focuses on how to build it with implementation details
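The path convention above reduces to a two-branch check; anything outside both directories needs a judgment call on content:

```python
def classify_document(path: str) -> str | None:
    if "docs/brainstorms/" in path:
        return "requirements"  # what to build and why
    if "docs/plans/" in path:
        return "plan"          # how to build it, with implementation details
    return None                # outside the convention: classify from content
```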
Select Conditional Personas
Analyze the document content to determine which conditional personas to activate. Check for these signals (a sketch of the selection logic follows the full list):
product-lens -- activate when the document makes challengeable claims about what to build and why, or when the proposed work carries strategic weight beyond the immediate problem. The system's users may be end users, developers, operators, maintainers, or any other audience -- the criteria are domain-agnostic. Check for either leg:
Leg 1 — Premise claims: The document stakes a position on what to build or why that a knowledgeable stakeholder could reasonably challenge -- not merely describing a task or restating known requirements:
- Problem framing where the stated need is non-obvious or debatable, not self-evident from existing context
- Solution selection where alternatives plausibly exist (implicit or explicit)
- Prioritization decisions that explicitly rank what gets built vs deferred
- Goal statements that predict specific user outcomes, not just restate constraints or describe deliverables
Leg 2 — Strategic weight: The proposed work could affect system trajectory, user perception, or competitive positioning, even if the premise is sound:
- Changes that shape how the system is perceived or what it becomes known for
- Complexity or simplicity bets that affect adoption, onboarding, or cognitive load
- Work that opens or closes future directions (path dependencies, architectural commitments)
- Opportunity cost implications -- building this means not building something else
design-lens -- activate when the document contains:
- UI/UX references, frontend components, or visual design language
- User flows, wireframes, screen/page/view mentions
- Interaction descriptions (forms, buttons, navigation, modals)
- References to responsive behavior or accessibility
security-lens -- activate when the document contains:
- Auth/authorization mentions, login flows, session management
- API endpoints exposed to external clients
- Data handling, PII, payments, tokens, credentials, encryption
- Third-party integrations with trust boundary implications
scope-guardian -- activate when the document contains:
- Multiple priority tiers (P0/P1/P2, must-have/should-have/nice-to-have)
- Large requirement count (>8 distinct requirements or implementation units)
- Stretch goals, nice-to-haves, or "future work" sections
- Scope boundary language that seems misaligned with stated goals
- Goals that don't clearly connect to requirements
adversarial -- activate when the document contains:
- More than 5 distinct requirements or implementation units
- Explicit architectural or scope decisions with stated rationale
- High-stakes domains (auth, payments, data migrations, external integrations)
- Proposals of new abstractions, frameworks, or significant architectural patterns
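Pulling the thresholds and signals above together, a sketch of the selection logic; the boolean signal names are illustrative, and in practice the model judges each signal from the document text rather than computing it mechanically:

```python
def select_conditional_personas(signals: dict[str, bool], units: int) -> list[str]:
    """units = count of distinct requirements or implementation units."""
    personas = []
    if signals["premise_claims"] or signals["strategic_weight"]:
        personas.append("product-lens")  # either leg is sufficient
    if signals["ui_ux_or_user_flows"]:
        personas.append("design-lens")
    if signals["auth_data_or_trust_boundaries"]:
        personas.append("security-lens")
    if units > 8 or signals["priority_tiers"] or signals["scope_misalignment"]:
        personas.append("scope-guardian")
    if units > 5 or signals["architectural_decisions"] or signals["high_stakes_domain"]:
        personas.append("adversarial")
    return personas
```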
Phase 2: Announce and Dispatch Personas
Announce the Review Team
Tell the user which personas will review and why. For conditional personas, include the justification:
Reviewing with:
- ce-coherence-reviewer (always-on)
- ce-feasibility-reviewer (always-on)
- ce-scope-guardian-reviewer -- plan has 12 requirements across 3 priority levels
- ce-security-lens-reviewer -- plan adds API endpoints with auth flow
Build Agent List
Always include:
- document-review:ce-coherence-reviewer
- document-review:ce-feasibility-reviewer
Add activated conditional personas:
- document-review:ce-product-lens-reviewer
- document-review:ce-design-lens-reviewer
- document-review:ce-security-lens-reviewer
- document-review:ce-scope-guardian-reviewer
- document-review:ce-adversarial-document-reviewer
Dispatch
Dispatch all agents in parallel using the platform's task/agent tool (e.g., Agent tool in Claude Code, spawn in Codex). Omit the mode parameter so the user's configured permission settings apply. Each agent receives the prompt built from the subagent template included below with these variables filled:
| Variable | Value |
|---|---|
| persona definition | Full content of the agent's markdown file |
| findings schema | Content of the findings schema included below |
| document type | "requirements" or "plan" from Phase 1 classification |
| document path | Path to the document |
| document content | Full text of the document |
| decision primer | Cumulative prior-round decisions in the current session, or an empty block on round 1. See "Decision primer" below. |
Pass each agent the full document — do not split into sections.
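A sketch of the dispatch step, with hypothetical helpers standing in for the platform's agent tool and file reads; the keyword names below are descriptive, not the literal tokens in references/subagent-template.md:

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch_reviewers(agents, template, doc, decision_primer):
    def run(agent: str) -> dict:
        prompt = template.format(
            persona=read_persona_markdown(agent),  # hypothetical helper
            schema=read_file("references/findings-schema.json"),
            document_type=doc.type,                # "requirements" or "plan"
            document_path=doc.path,
            document_content=doc.text,             # always the full document
            decision_primer=decision_primer,
        )
        return dispatch_agent(agent, prompt)       # hypothetical platform call
    results, failed = [], []
    with ThreadPoolExecutor(max_workers=7) as pool:  # all personas in parallel
        futures = {pool.submit(run, a): a for a in agents}
        for future, agent in futures.items():
            try:
                results.append(future.result())
            except Exception:
                failed.append(agent)  # noted in Coverage; review continues
    return results, failed
```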
Decision primer
On round 1 (no prior decisions), set {decision_primer} to:
<prior-decisions>
Round 1 — no prior decisions.
</prior-decisions>
On round 2+ (after one or more prior rounds in the current interactive session), accumulate prior-round decisions and render them as:
<prior-decisions>
Round 1 — applied (N entries):
- {section}: "{title}" ({reviewer}, {confidence})
  Evidence: "{evidence_snippet}"
Round 1 — rejected (M entries):
- {section}: "{title}" — Skipped because {reason}
  Evidence: "{evidence_snippet}"
- {section}: "{title}" — Deferred to Open Questions because {reason or "no reason provided"}
  Evidence: "{evidence_snippet}"
- {section}: "{title}" — Acknowledged without applying because {reason or "no suggested_fix — user acknowledged"}
  Evidence: "{evidence_snippet}"
Round 2 — applied (N entries):
...
</prior-decisions>
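A sketch of rendering that block from accumulated decisions; the per-finding dict keys mirror the fields shown above and are otherwise assumptions:

```python
def render_decision_primer(rounds: list[tuple[list[dict], list[dict]]]) -> str:
    if not rounds:
        return "<prior-decisions>\nRound 1 — no prior decisions.\n</prior-decisions>"
    lines = ["<prior-decisions>"]
    for n, (applied, rejected) in enumerate(rounds, start=1):
        lines.append(f"Round {n} — applied ({len(applied)} entries):")
        for f in applied:
            lines.append(f'- {f["section"]}: "{f["title"]}" ({f["reviewer"]}, {f["confidence"]})')
            lines.append(f'  Evidence: "{f["evidence_snippet"]}"')
        lines.append(f"Round {n} — rejected ({len(rejected)} entries):")
        for f in rejected:
            # disposition: "Skipped", "Deferred to Open Questions", or
            # "Acknowledged without applying"
            lines.append(f'- {f["section"]}: "{f["title"]}" — {f["disposition"]} because {f["reason"]}')
            lines.append(f'  Evidence: "{f["evidence_snippet"]}"')
    lines.append("</prior-decisions>")
    return "\n".join(lines)
```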
Each entry carries an Evidence: line because synthesis R29 (rejected-finding suppression) and R30 (fix-landed verification) both use an evidence-substring overlap check as part of their matching predicate. Without the evidence snippet in the primer, the orchestrator cannot compute the >50% overlap test and has to fall back to fingerprint-only matching, which either re-surfaces rejected findings or suppresses too aggressively. The {evidence_snippet} is the first evidence quote from the finding, truncated to the first ~120 characters (preserving whole words at the boundary) and with internal quotes escaped. If a finding has multiple evidence entries, use the first one; the rest live in the run artifact and are not needed for the overlap check.
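A sketch of the snippet rule, plus one plausible reading of the overlap predicate (the exact test lives in references/synthesis-and-presentation.md under R29/R30):

```python
def evidence_snippet(evidence: list[str], limit: int = 120) -> str:
    """First evidence quote, cut at a word boundary near limit, quotes escaped."""
    text = evidence[0].replace('"', '\\"')
    if len(text) <= limit:
        return text
    cut = text.rfind(" ", 0, limit + 1)
    return text[:cut] if cut > 0 else text[:limit]

def snippet_overlaps(snippet: str, candidate_evidence: str) -> bool:
    # Assumed interpretation of the >50% test: more than half of the
    # snippet's words must reappear in the candidate finding's evidence.
    words = snippet.split()
    return sum(w in candidate_evidence for w in words) > len(words) / 2
```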
Accumulate across all rounds in the current session. Skip, Defer, and Acknowledge actions all count as "rejected" for suppression purposes — each signals the user decided the finding wasn't worth actioning this round (Acknowledge is the no-fix-guard variant: the user saw a finding with no suggested_fix, chose not to defer or skip explicitly, and recorded acknowledgement instead; for round-to-round suppression that is semantically equivalent to Skip). Applied findings stay on the applied list so round-N+1 personas can verify fixes landed (see R30 in references/synthesis-and-presentation.md).
Cross-session persistence is out of scope. A new invocation of ce-doc-review on the same document starts with a fresh round 1 and no carried primer, even if prior sessions deferred findings into the document's Open Questions section.
Error handling: If an agent fails or times out, proceed with findings from agents that completed. Note the failed agent in the Coverage section. Do not block the entire review on a single agent failure.
Dispatch limit: Even at maximum (7 agents), use parallel dispatch. These are document reviewers with bounded scope reading a single document -- parallel is safe and fast.
Phases 3-5: Synthesis, Presentation, and Next Action
After all dispatched agents return, read references/synthesis-and-presentation.md for the synthesis pipeline (validate, per-severity gate, dedup, cross-persona agreement boost, resolve contradictions, auto-promotion, route by three tiers with FYI subsection), safe_auto fix application, headless-envelope output, and the handoff to the routing question.
For the four-option routing question and per-finding walk-through (interactive mode), read references/walkthrough.md. For the bulk-action preview used by LFG, Append-to-Open-Questions, and walk-through LFG-the-rest, read references/bulk-preview.md. Do not load these files before agent dispatch completes.
Included References
Subagent Template
@./references/subagent-template.md
Findings Schema
@./references/findings-schema.json