NWave nw-distill
Acceptance test creation methodology for the DISTILL wave. Domain knowledge for the acceptance designer agent: port-to-port principle, prior wave reading, wave-decision reconciliation, graceful degradation, and document back-propagation.
git clone https://github.com/nWave-ai/nWave
T=$(mktemp -d) && git clone --depth=1 https://github.com/nWave-ai/nWave "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/nw-distill" ~/.claude/skills/nwave-ai-nwave-nw-distill && rm -rf "$T"
nWave/skills/nw-distill/SKILL.md

DISTILL Methodology: Acceptance Test Creation
This skill provides the acceptance designer's methodology for creating acceptance tests. The orchestrator controls the overall flow (agent dispatch, review gate, handoff) -- this skill focuses on HOW to create good acceptance tests.
Acceptance Criteria: Port-to-Port Principle
Every AC MUST name the driving port (entry point) through which the behavior is exercised. This enables port-to-port acceptance tests that make TBU (Tested But Unwired) defects structurally impossible.
Each AC includes:
- Observable outcome: what the user/system sees
- Driving port: the entry point that triggers the behavior (service, handler, endpoint, CLI command)
Without the driving port, a crafter can write correct code that is never wired into the system.
Features: "When user {action} via {driving_port}, {observable_outcome}"
Bug fixes: "When {trigger}, {modified_code_path} produces {correct_outcome} instead of {current_broken_behavior}"
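For illustration, a pair of ACs following these templates (the feature, port, and class names here are invented, not from nWave):

```
Feature AC: When user installs a plugin via the plugin-cli command,
            the plugin appears in the installed-plugins list.
Bug-fix AC: When a config file with unknown keys is loaded,
            ConfigLoader.parse produces a validation error
            instead of silently dropping the unknown keys.
```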
Prior Wave Reading
Before writing any scenario, read SSOT and feature delta artifacts.
READING ENFORCEMENT: You MUST read every file listed in steps 1-6 below using the Read tool before proceeding. After reading, output a confirmation checklist (`+ {file}` for each file read, `- {file} (not found)` for each missing file). Do NOT skip files that exist.
1. Read Journeys — Read `docs/product/journeys/{name}.yaml`. Extract embedded Gherkin as starting scenarios, identify integration checkpoints and `failure_modes` per step. Gate: file read or marked missing.
2. Read Architecture Brief — Read `docs/product/architecture/brief.md`. Identify driving ports (from the `## For Acceptance Designer` section) for `@driving_port` tagged scenarios. Gate: file read or marked missing.
3. Read KPI Contracts — Read `docs/product/kpi-contracts.yaml`. Identify behaviors needing `@kpi` tagged scenarios (soft gate — warn if missing, proceed). Gate: file read or marked missing.
4. Read DISCUSS Artifacts — Read `docs/feature/{feature-id}/discuss/user-stories.md` (scope boundary and embedded acceptance criteria), `story-map.md` (walking skeleton priority and release slicing), and `wave-decisions.md` (quick check for upstream changes). Gate: files read or marked missing.
5. Read SPIKE Findings (if spike was run) — Read `docs/feature/{feature-id}/spike/findings.md` and `docs/feature/{feature-id}/spike/wave-decisions.md`. Check what assumptions were validated, what failed, performance measurements, and the promotion decision (PROMOTE / DISCARD / PIVOT). Update acceptance criteria if spike findings contradict DISCUSS. Gate: files read if present, marked as not found if absent.
5b. Read Walking Skeleton (only if SPIKE promoted a walking skeleton) — Read the existing `tests/{test-type-path}/{feature-id}/acceptance/walking-skeleton.feature` and the `src/` modules it exercises. The walking skeleton is already committed and green — your job in DISTILL is to build additional scenarios and integration tests on top of it, not to rewrite it. Identify the driving adapter it uses, the e2e path it exercises, and the scenarios it does NOT yet cover (happy-path variants, error paths, adapter integration). Gate: walking-skeleton.feature read, scenario tagged `@walking_skeleton` confirmed green, or marked as not found.
6. Read DEVOPS Artifacts — Read `docs/feature/{feature-id}/devops/wave-decisions.md`. Check for infrastructure constraints affecting tests. Gate: file read or marked missing.
7. Check Migration Gate — If `docs/product/` does not exist but `docs/feature/` has existing features, STOP. Guide the user to `docs/guides/migrating-to-ssot-model/README.md`. If greenfield, prior waves should have bootstrapped `docs/product/` already. Gate: migration confirmed or greenfield confirmed.
8. Reconcile Assumptions — Check whether any acceptance test assumptions contradict prior wave decisions or SPIKE findings. Use `wave-decisions.md` and `spike/findings.md` files to detect upstream changes. Gate: zero contradictions or contradictions listed for user resolution.
DISTILL is the conjunction point — it reads all three SSOT dimensions plus the feature delta to translate prior wave knowledge into executable acceptance tests.
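The reading-confirmation checklist can be sketched as a small helper (the function name and output shape here are illustrative, not part of nWave):

```python
from pathlib import Path

def reading_checklist(paths):
    """Emit '+ {file}' for each file that exists, '- {file} (not found)' otherwise."""
    lines = []
    for p in paths:
        if Path(p).exists():
            lines.append(f"+ {p}")
        else:
            lines.append(f"- {p} (not found)")
    return lines
```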
Wave-Decision Reconciliation (Pre-Scenario Gate)
BEFORE writing any scenario, execute this reconciliation procedure:
1. Read All Wave Decisions — Read ALL wave-decisions.md files from prior waves: `docs/feature/{feature-id}/discuss/wave-decisions.md`, `docs/feature/{feature-id}/design/wave-decisions.md`, `docs/feature/{feature-id}/devops/wave-decisions.md`. Gate: all files read or marked missing.
2. Check Each DISCUSS Decision — For EACH decision in DISCUSS, check whether DESIGN or DEVOPS contradicts it. Examples: DISCUSS says "email notifications" but DESIGN says "in-app only" = CONTRADICTION; DISCUSS says "REST API" but DESIGN says "gRPC" = CONTRADICTION; DISCUSS says "single-tenant" but DEVOPS says "multi-tenant" = CONTRADICTION. Gate: all decisions checked.
3. Handle Contradictions — If ANY contradiction is found: (a) list ALL contradictions with exact file paths and decision text, (b) BLOCK scenario writing until the user resolves each contradiction, (c) return `{CLARIFICATION_NEEDED: true, questions: [{contradiction details}]}`. Gate: zero contradictions, or user resolution received.
4. Log Reconciliation Result — If zero contradictions: log "Reconciliation passed -- 0 contradictions" and proceed. Gate: log entry written.
Do NOT silently pick one side of a contradiction. Do NOT write scenarios against ambiguous specifications. The cost of blocking is minutes; the cost of implementing the wrong behavior is hours.
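As a sketch of this gate, modeling each wave's decisions as a topic-to-choice mapping (an assumption for illustration; real wave-decisions.md entries are prose):

```python
def reconcile(discuss, design, devops):
    """Return a CLARIFICATION_NEEDED payload, or None when zero contradictions."""
    contradictions = []
    for topic, chosen in discuss.items():
        for wave_name, decisions in (("design", design), ("devops", devops)):
            if topic in decisions and decisions[topic] != chosen:
                contradictions.append(
                    {"topic": topic, "discuss": chosen, wave_name: decisions[topic]}
                )
    if contradictions:
        # Block scenario writing until the user resolves each contradiction.
        return {"CLARIFICATION_NEEDED": True, "questions": contradictions}
    return None  # log "Reconciliation passed -- 0 contradictions" and proceed
```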
Graceful Degradation for Missing Upstream Artifacts
DEVOPS missing (no `docs/feature/{feature-id}/devops/` directory):
- Log Warning — Log: "DEVOPS artifacts missing -- using default environment matrix". Gate: warning logged.
- Apply Default Matrix — Use default environment matrix: clean | with-pre-commit | with-stale-config. Gate: matrix applied.
- Proceed — Continue with scenario writing. Do NOT block.
DISCUSS missing (no `docs/feature/{feature-id}/discuss/` directory):
- Log Warning — Log: "DISCUSS artifacts missing -- using DESIGN only". Gate: warning logged.
- Derive from DESIGN — Derive acceptance criteria from DESIGN architecture documents. Gate: criteria derived.
- Skip Traceability — Skip story-to-scenario traceability -- no stories to trace. Gate: traceability skipped.
- Proceed — Continue with scenario writing. Do NOT block.
DESIGN missing (no `docs/feature/{feature-id}/design/` directory):
- Log Warning — Log: "DESIGN artifacts missing -- driving ports unknown". Gate: warning logged.
- BLOCK for Driving Ports — Ask user to identify driving ports before writing any scenario. BLOCK until driving ports are identified -- without them, hexagonal boundary is unverifiable. Gate: user provides driving ports.
Missing artifacts trigger warnings, not failures -- EXCEPT when the missing artifact makes a design mandate unverifiable (DESIGN for hexagonal boundary). In that case, BLOCK.
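The three branches above amount to a small policy; this sketch is illustrative (the function and key names are invented):

```python
DEFAULT_ENV_MATRIX = ["clean", "with-pre-commit", "with-stale-config"]

def degradation_action(missing_wave):
    """Return the degradation action for a missing upstream wave."""
    if missing_wave == "devops":
        return {"action": "warn", "environment_matrix": DEFAULT_ENV_MATRIX}
    if missing_wave == "discuss":
        return {"action": "warn", "criteria_source": "design", "traceability": "skipped"}
    if missing_wave == "design":
        # Without DESIGN, driving ports are unknown and the hexagonal
        # boundary is unverifiable, so this is the one blocking case.
        return {"action": "block", "ask_user_for": "driving ports"}
    raise ValueError(f"unknown wave: {missing_wave}")
```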
Document Update (Back-Propagation)
When DISTILL work reveals gaps or contradictions in prior waves:
- Document Findings — Write findings in `docs/feature/{feature-id}/distill/upstream-issues.md`. Reference the original prior-wave document and describe the gap. Gate: file written.
- Flag Untestable Criteria — If acceptance criteria from DISCUSS are untestable as written, note the specific criteria and explain why. Gate: all untestable criteria flagged.
- Resolve Before Writing — Resolve contradictions with user before writing tests against ambiguous or contradictory requirements. Gate: user resolution received.
Walking Skeleton Strategy Decision (INTERACTIVE)
Before writing walking skeleton scenarios, determine the WS adapter strategy. Auto-detect from the feature's component types, then confirm with the user.
Decision Tree (auto-detect then user confirms):
- Feature is pure domain (no driven ports with I/O)? -> Strategy A (Full InMemory) -- WS uses InMemory doubles only
- Feature has only local resources (filesystem, git, in-process subprocess)? -> Strategy C (Real local) -- WS uses real adapters for all local resources
- Feature has costly external dependencies (paid APIs, LLM calls, rate-limited services)? -> Strategy B (Real local + fake costly) -- real for local, fake for expensive
- Team needs different behavior in CI vs local development? -> Strategy D (Configurable) -- env var switches InMemory <-> Real
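The tree reads top to bottom; as a sketch (the flag names are invented, and falling back to Strategy C when no branch matches is an assumption):

```python
def detect_ws_strategy(feature):
    """Auto-detect a WS strategy candidate; the user must still confirm it."""
    if feature["pure_domain"]:
        return "A"  # Full InMemory doubles
    if feature["local_resources_only"]:
        return "C"  # Real adapters for all local resources
    if feature["costly_externals"]:
        return "B"  # Real local + fake costly
    if feature["ci_differs_from_local"]:
        return "D"  # Env var switches InMemory <-> Real
    return "C"      # assumed default: real local adapters
```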
Resource Classification:
| Resource Type | WS Behavior | Adapter Integration Test |
|---|---|---|
| Filesystem | real (tmp_path) | real (tmp_path) -- ALWAYS |
| Git repo | real (tmp_path + git init) | real -- ALWAYS |
| Local subprocess (pytest, ruff) | real | real -- ALWAYS |
| Costly subprocess (claude -p, LLM) | fake (mock) | contract smoke (@requires_external) |
| Paid external API | fake server | contract test with recorded fixtures |
| Database | real (SQLite/testcontainers) | real -- ALWAYS |
| Container services | per user preference | real if available |
Container option: Ask the user if they want containerized environments for WS and integration tests:
- No container (real adapters on host)
- Docker Compose (local services)
- Testcontainers (programmatic, lifecycle managed by test)
- Auto-Detect Strategy — Classify feature components against the decision tree. Gate: strategy candidate identified.
- Confirm with User — Present the auto-detected strategy and ask user to confirm or override. Gate: strategy confirmed.
- Record Decision — Write the confirmed strategy in `distill/wave-decisions.md` as a numbered decision (e.g., DWD-XX: Walking Skeleton Strategy). Gate: decision recorded.
- Apply Strategy to Scenarios — Tag WS scenarios per the confirmed strategy: Strategy A uses `@in-memory`, Strategy B/D uses `@real-io` for local and `@in-memory` for costly externals, Strategy C uses `@real-io` for ALL resources. Gate: scenarios tagged correctly.
Tagging convention:
- `@real-io` -- scenario uses real adapters
- `@in-memory` -- scenario uses InMemory doubles
- `@requires_external` -- scenario needs external system (skip if absent)
- Walking skeleton under B/C/D: MUST have `@walking_skeleton @real-io`
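Under Strategy C, for example, a walking skeleton scenario carries both mandatory tags (the scenario text here is hypothetical):

```gherkin
@walking_skeleton @real-io
Scenario: Plugin is installed end to end
  Given a plugin archive in a temporary directory
  When the user installs the plugin
  Then the plugin appears in the installed plugins list
```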
Driving Adapter Verification (Mandatory — RCA fix P1, 2026-04-10)
If the DESIGN document specifies a CLI entry point, HTTP endpoint, or hook adapter:
- At least ONE walking skeleton scenario MUST invoke it via its protocol — subprocess for CLI, HTTP request for API, hook JSON payload for hooks. Tag: `@driving_adapter @walking_skeleton`. Gate: scenario exists and exercises the user's actual invocation path.
- The scenario MUST verify: exit code (or HTTP status), output format (stdout/response body), and basic argument handling. Gate: all three verified.
- Pipeline/service-level tests do NOT replace driving adapter tests. A test that calls `generate_matrix()` directly proves the pipeline works but NOT that the CLI parses arguments, resolves PYTHONPATH, wires adapters, and produces correct exit codes. Both are needed.
- Scan DESIGN for entry points: grep design docs for `python -m`, `cli`, `endpoint`, `hook adapter`. Each match needs at least one subprocess/HTTP/hook scenario. Gate: zero uncovered entry points.
This section exists because of a systematic pattern (RCA `docs/analysis/rca-user-port-gap.md`): acceptance tests entered from application services instead of user-facing CLIs, shipping features with working pipelines but broken entry points.
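The difference is easy to demonstrate: a driving-adapter check runs the entry point as a subprocess and asserts on exit code and stdout. In this sketch, `python -c` stands in for the feature's real CLI:

```python
import subprocess
import sys

def invoke_cli(args):
    """Invoke the entry point the way a user would: as a subprocess."""
    return subprocess.run(
        [sys.executable, "-c", "import sys; print('ok'); sys.exit(0)", *args],
        capture_output=True,
        text=True,
    )

result = invoke_cli([])
assert result.returncode == 0           # exit code verified
assert result.stdout.strip() == "ok"    # output format verified
```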
Adapter Scenario Coverage (Mandate 6 Enforcement)
When designing adapter acceptance scenarios, EVERY driven adapter MUST have at least one scenario with real I/O (or a contract smoke test for costly externals). This is not optional, regardless of WS strategy. Tag adapter real-I/O scenarios with `@real-io @adapter-integration`.
- Inventory Adapters — List all driven adapters in the feature. Gate: adapter list complete.
- Map Scenarios to Adapters — For each adapter, identify existing scenarios that exercise it with real I/O. Gate: mapping complete.
- Produce Coverage Table — Output the adapter coverage table before completing Phase 2:
| Adapter | @real-io scenario | Covered by |
|---------|-------------------|------------|
| YamlWorkflowLoader | YES | WS (real YAML from tmp_path) |
| FilesystemSkillReader | YES | WS (real skill files from tmp_path) |
| SubprocessGitVerifier | NO — MISSING | Add: "Git verifier reads real git log" |
| RuffLintRunner | NO — MISSING | Add: "Lint runner checks real ruff output" |
- Add Missing Scenarios — Every row with "NO — MISSING" MUST have a scenario added. If the adapter is for a costly external (claude -p), a `@requires_external` contract smoke test is acceptable instead. Gate: zero "NO — MISSING" rows remain.
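The coverage-table step can be sketched as follows (adapter and scenario names are the illustrative ones from the table above):

```python
def adapter_coverage(adapters, real_io_scenarios):
    """Return (adapter, status, covered_by) rows; uncovered adapters are flagged."""
    rows = []
    for adapter in adapters:
        covered_by = real_io_scenarios.get(adapter, "")
        status = "YES" if covered_by else "NO — MISSING"
        rows.append((adapter, status, covered_by))
    return rows

rows = adapter_coverage(
    ["YamlWorkflowLoader", "SubprocessGitVerifier"],
    {"YamlWorkflowLoader": "WS (real YAML from tmp_path)"},
)
```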
Cross-references: nw-tdd-methodology Mandate 5 (Walking Skeleton) and Mandate 6 (Real I/O), nw-quality-framework Dimension 9 (Walking Skeleton Integrity).
Self-Review Checklist (Dimension 9 + Mandate 7)
Before handing off to reviewers, self-check each item:
- 1. WS strategy declared in wave-decisions.md
- 2. WS scenarios tagged correctly (@real-io / @in-memory per strategy)
- 3. Every driven adapter has at least one @real-io scenario
- 4. For InMemory doubles: documented what they CANNOT model
- 5. Container preference documented if applicable
- 6. Mandate 7: All production modules imported by tests have scaffold files
- 7. Mandate 7: All scaffolds include `__SCAFFOLD__` marker (or language equivalent)
- 8. Mandate 7: All scaffold methods raise assertion error (not NotImplementedError)
- 9. Mandate 7: Tests are RED (not BROKEN) when run against scaffolds
- 10. Driving Adapter: Every CLI/endpoint/hook in DESIGN has at least one WS scenario exercising it via subprocess/HTTP/hook protocol (not just calling the service function)
- 11. F-001: At least one `@real-io @adapter-integration` scenario per driven adapter (synthetic data misses format mismatches)
- 12. F-002: `capsys` used in `@when` step, NOT in `@then` step (capsys is step-scoped in pytest-bdd)
- 13. F-005: `@when` steps import ONLY from `des.application.*` or `des.domain.*` — NEVER from `des.adapters.driven.*`. Run `python scripts/hooks/check_driving_port_boundary.py` to verify.
- 14. F-004: Timing assertions in `.feature` files use budget >= 200ms (flaky under parallel load)
- 15. F-003: BDD imports after `sys.path` manipulation have `# noqa` markers (ruff strips them otherwise)
Scenario Writing Guidelines
Walking Skeleton First (or inherited from SPIKE)
If SPIKE ran and PROMOTED a walking skeleton, DISTILL inherits it: do NOT rewrite it, do NOT add duplicate scenarios, do NOT change its `@walking_skeleton` tag. Your job is to add the next layer of scenarios (additional happy paths, error paths, adapter integration) that build on the skeleton's established driving adapter and e2e path.
If SPIKE was skipped or did not promote, DISTILL creates the walking skeleton scenarios itself, before milestone features. Walking skeleton scenarios exercise the end-to-end path through driving adapters (real user-facing entry → real driven adapters → real user-visible output) with minimal business logic. Features only; optional for bugs.
Either way, there is exactly ONE walking skeleton scenario per feature marked `@walking_skeleton`, and it must be green before DISTILL hand-off.
One-at-a-Time Strategy
Tag non-skeleton scenarios with @skip/@pending for one-at-a-time implementation. Each scenario maps to one TDD cycle in DELIVER. The crafter enables one scenario at a time.
Business Language Purity
Feature files use business language only. No technical terms (API, database, endpoint, schema) in scenario names or Given/When/Then steps. Technical details live in step definitions, not feature files.
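A minimal purity check could flag technical vocabulary in scenario text; the term list mirrors the examples above and would need extending in practice:

```python
TECHNICAL_TERMS = {"api", "database", "endpoint", "schema"}

def business_language_violations(scenario_text):
    """Return technical terms found in a scenario line (should be empty)."""
    words = {word.strip(".,:()\"'").lower() for word in scenario_text.split()}
    return sorted(words & TECHNICAL_TERMS)
```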
Error Path Coverage
Target at least 40% error/edge case scenarios. Pure happy-path test suites miss the most common production failures. For every happy path, ask: "What happens when this input is invalid? When the dependency is unavailable? When the user cancels midway?"
Environment-Aware Scenarios
When DEVOPS provides environment inventory, create at least one walking skeleton scenario per environment. Each environment has different preconditions (clean install vs. upgrade vs. stale config) that affect behavior.
Mandate 7: RED-Ready Scaffolding
Every acceptance test MUST be RED, not BROKEN, when first created.
When DISTILL writes acceptance tests that import production modules not yet implemented, it MUST also create minimal stub files so that:
- All imports succeed (no ImportError -- no BROKEN classification)
- Method calls raise AssertionError (RED classification)
- The Red Gate Snapshot classifies the test as RED, enabling the DELIVER TDD cycle
What to scaffold
For each production module imported in step definitions:
- Create Module File — Create the module file at the correct path (e.g., `src/app/plugin/installer.py`). Gate: file created.
- Add Scaffold Marker — Include the scaffold marker (`__SCAFFOLD__ = True` or language equivalent) for machine detection. Gate: marker present.
- Define Signatures — Define the class/function with the correct parameter signature. Gate: signatures match what step definitions expect.
- Raise Assertion Error — Method bodies MUST raise an assertion error with the scaffold marker message. Gate: all methods raise AssertionError (not NotImplementedError).
- Verify RED Classification — Confirm the test runner classifies tests as RED, not BROKEN. Gate: RED confirmed.
Language-specific scaffolding
The principle is universal: raise an exception classified as assertion failure (RED), not infrastructure error (BROKEN).
Python:
```python
# src/app/plugin/installer.py
"""Plugin installer -- RED scaffold (created by DISTILL)."""

__SCAFFOLD__ = True


class PluginInstaller:
    def __init__(self, **kwargs):
        pass

    def install(self, ctx):
        raise AssertionError("Not yet implemented -- RED scaffold")
```
Rust:
```rust
// src/plugin/installer.rs
// SCAFFOLD: true
pub struct PluginInstaller;

impl PluginInstaller {
    pub fn install(&self) -> Result<(), Box<dyn std::error::Error>> {
        panic!("Not yet implemented -- RED scaffold")
    }
}
```
Go:
```go
// plugin/installer.go
// SCAFFOLD: true
package plugin

func Install() error {
	panic("not yet implemented -- RED scaffold")
}
```
TypeScript/JavaScript:
```typescript
// src/plugin/installer.ts
export const __SCAFFOLD__ = true;

export class PluginInstaller {
  install(): never {
    throw new Error("Not yet implemented -- RED scaffold");
  }
}
```
Java:
```java
// src/plugin/PluginInstaller.java
// SCAFFOLD: true
public class PluginInstaller {
    public void install() {
        throw new AssertionError("Not yet implemented -- RED scaffold");
    }
}
```
Scaffold detection
DELIVER uses the scaffold marker to track progress:
- `grep -r "__SCAFFOLD__" src/` (Python, TypeScript)
- `grep -r "SCAFFOLD: true" src/` (Rust, Go, Java)
After all DELIVER steps complete, zero scaffold markers should remain.
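Equivalently, a small scanner (illustrative, not part of nWave) can report files that still carry either marker style:

```python
from pathlib import Path

SCAFFOLD_MARKERS = ("__SCAFFOLD__", "SCAFFOLD: true")

def remaining_scaffolds(src_root):
    """List files under src_root that still contain a scaffold marker."""
    hits = []
    for path in sorted(Path(src_root).rglob("*")):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        if any(marker in text for marker in SCAFFOLD_MARKERS):
            hits.append(str(path))
    return hits
```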
Why assertion errors (not NotImplementedError)
The Red Gate Snapshot (`src/des/application/red_gate_snapshot.py`) classifies failures by error type:
- `AssertionError` / `panic!` / `throw Error` -- RED (implementation missing, test correct)
- `NotImplementedError` -- BROKEN (infrastructure issue)
- `ImportError` / `ModuleNotFoundError` -- BROKEN (module missing)
Only RED tests proceed to the DELIVER TDD cycle. BROKEN tests block the upstream gate.
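In Python terms, the classification amounts to the following sketch (not the actual red_gate_snapshot code; note that ModuleNotFoundError is a subclass of ImportError, so one check covers both, and treating unknown errors as BROKEN is an assumption):

```python
def classify_failure(exc):
    """RED = implementation missing, test correct; BROKEN = infrastructure issue."""
    if isinstance(exc, ImportError):          # includes ModuleNotFoundError
        return "BROKEN"  # module missing
    if isinstance(exc, NotImplementedError):
        return "BROKEN"  # infrastructure issue
    if isinstance(exc, AssertionError):
        return "RED"     # proceeds to the DELIVER TDD cycle
    return "BROKEN"      # assumed: anything unexpected blocks the gate
```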
Scaffolding lifecycle
- DISTILL creates the scaffold (RED-ready stubs)
- Snapshot classifies the test as RED
- DELIVER replaces the scaffold with real implementation (GREEN)
The scaffold is never committed to production -- it exists only between DISTILL approval and DELIVER completion for each step.
Expected Outputs
```
tests/{test-type-path}/{feature-id}/acceptance/
  walking-skeleton.feature
  milestone-{N}-{description}.feature
  integration-checkpoints.feature
  steps/
    conftest.py
    {domain}_steps.py
src/{production-path}/
  {module}.py               # RED scaffold stubs (Mandate 7)
docs/feature/{feature-id}/distill/
  walking-skeleton.md       # notes only — the .feature file is the SSOT
  wave-decisions.md
```
Note: `test-scenarios.md` and `acceptance-review.md` are NOT produced — the .feature file under `tests/{test-type-path}/{feature-id}/acceptance/` is the scenario SSOT, and reviewer output is ephemeral (it lives in PR comments / retrospective, not as a committed artifact).
Bug fix regression tests:
```
tests/regression/{component-or-module}/
  bug-{ticket-or-description}.feature
  steps/
    conftest.py
    {domain}_steps.py
tests/unit/{component-or-module}/
  test_{module}_bug_{ticket-or-description}.py
```