# Nexus-agents implement-feature

Clone the repository:

```sh
git clone https://github.com/williamzujkowski/nexus-agents
```

Or install just this skill:

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/williamzujkowski/nexus-agents "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/implement-feature" ~/.claude/skills/williamzujkowski-nexus-agents-implement-feature && rm -rf "$T"
```
`skills/implement-feature/SKILL.md`

# Implement Feature Skill

<!-- CANONICAL SOURCES:
- CLAUDE.md Workflow Templates
- docs/development/CONTRIBUTION_GUIDE.md
- CODING_STANDARDS.md
-->

Full workflow: CONTRIBUTION_GUIDE.md
## Pre-Implementation Checklist

- Verify context: `TZ='America/New_York' date && git status`
- Check/create GitHub issue: `gh issue list --state open`
- Check research registry if implementing a technique
## Hard Design Decisions — Constraint-Divergent Design
When the problem has multiple plausible approaches (e.g., "event-driven or polling?", "synchronous vs queue-based?", "monolithic helper vs split modules?"), do not generate the same solution three times with different variable names. Instead, articulate 2–3 distinct constraints first, then sketch one solution per constraint and compare.
Anchor constraints in real tradeoffs:
- "minimize allocations" vs "minimize lines of code" vs "minimize external deps"
- "lowest latency" vs "lowest memory" vs "easiest to test"
- "fewest moving parts" vs "easiest to extend" vs "matches existing pattern X"
Three constraints that each eliminate ~70% of the solution space leave you searching ~2.7% of it — a focused region, not blind sampling. (Pattern adapted from itigges22/ATLAS PlanSearch; the math is theirs.)
Apply this discipline only for genuinely-multivalent decisions. Indicators:
- Multiple plausible architectures where reviewers would reasonably disagree
- A wrong choice would be expensive to reverse (data-shape migration, public API surface, cross-package boundary)
- You catch yourself thinking "I'll just pick one and see what review says" — pick deliberately, by constraint, instead
Out of scope: routine work where the approach is dictated by canonical paths or existing patterns. Don't over-apply.
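To make the discipline concrete, here is a hypothetical illustration (not from this repo): the same per-key rate-limiting problem sketched twice, once per constraint, so the tradeoff is visible in code rather than argued abstractly. All names here are invented for the example.

```typescript
// Constraint A: "fewest moving parts" -> fixed-window counter.
// One counter, one timestamp; admits bursts at window edges.
class FixedWindowLimiter {
  private count = 0;
  private windowStart = Date.now();
  constructor(private limit: number, private windowMs: number) {}
  allow(): boolean {
    const now = Date.now();
    if (now - this.windowStart >= this.windowMs) {
      this.windowStart = now; // start a fresh window
      this.count = 0;
    }
    return ++this.count <= this.limit;
  }
}

// Constraint B: "smoothest admission under bursts" -> token bucket.
// More state and arithmetic, but refills continuously instead of in steps.
class TokenBucketLimiter {
  private tokens: number;
  private last = Date.now();
  constructor(private capacity: number, private refillPerMs: number) {
    this.tokens = capacity;
  }
  allow(): boolean {
    const now = Date.now();
    this.tokens = Math.min(
      this.capacity,
      this.tokens + (now - this.last) * this.refillPerMs
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

The point is not which limiter wins; it is that each sketch is the best answer to a *different* constraint, and the comparison forces a deliberate choice instead of three renamed copies of one idea.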
## Implementation Process

### Phase 1: Interface First

```typescript
// Define the interface FIRST
interface IFeature {
  method(input: Input): Promise<Result<Output, Error>>;
}
```

See CONTRIBUTION_GUIDE.md for the boundary checklist.
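The Phase 1 interface leaves `Input`, `Output`, and `Result` abstract. A minimal self-contained sketch of the pattern (type and member names here are hypothetical, not from this repo) might look like:

```typescript
// A discriminated-union Result type, one common shape for this pattern.
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

interface GreetInput { name: string }
interface GreetOutput { message: string }

// Interface defined before any implementation exists.
interface IGreeter {
  greet(input: GreetInput): Promise<Result<GreetOutput, Error>>;
}

// A minimal implementation written against the interface,
// with an explicit return type per the coding standards.
const greeter: IGreeter = {
  async greet(input: GreetInput): Promise<Result<GreetOutput, Error>> {
    if (input.name.trim() === "") {
      return { ok: false, error: new Error("name must be non-empty") };
    }
    return { ok: true, value: { message: `Hello, ${input.name}!` } };
  },
};
```

Callers then branch on `result.ok`, so the error path is type-checked instead of thrown past.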
### Phase 2: TDD

- Write a failing test
- Run: `pnpm test -- --watch`
- Implement to pass
### Phase 3: Quality Gates

```sh
pnpm lint && pnpm typecheck && pnpm test
```
### Phase 4: Commit and PR

```sh
git commit -m "feat(scope): description

Closes #<issue>

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>"
gh pr create --title "feat(scope): description" --base main
```
## Quality Checklist
See CODING_STANDARDS.md for full checklist.
- Tests pass, coverage ≥ 80%
- Lint and types clean
- Files ≤ 400 lines, functions ≤ 50 lines
- Interface defined first
## Implementation Complete Checklist
Before marking ANY technique or feature as "implemented", verify ALL of the following:
### Code Requirements

- Code exists in specified `integration_files`
- All functions have explicit return types
- No `any` types (use `unknown` instead)
### Quality Gates

- `pnpm lint` passes with zero errors
- `pnpm typecheck` passes
- `pnpm test` passes (relevant tests)
### Documentation Updates (if research-related)

- `docs/research/registry/techniques.yaml`: `status: implemented`, `decision_history` entry
- `docs/research/registry/papers.yaml`: `implementation_status` updated
- `docs/research/RESEARCH_INDEX.md`: Quick Stats updated if counts changed
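A sketch of what such a registry update might look like. Only the `status: implemented` value and the `decision_history` key come from the checklist above; the technique name, field layout, and entry contents are purely illustrative, so check the actual schema in `docs/research/registry/techniques.yaml` before editing.

```yaml
# Hypothetical shape of a techniques.yaml entry after implementation
some_technique_name:           # illustrative key
  status: implemented
  decision_history:
    - date: 2025-01-15         # illustrative
      decision: "Implemented behind feature interface; see closing PR"
```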
### GitHub Tracking
- Implementation issue closed with summary comment
- PR merged (if applicable)
Do NOT mark as implemented if: tests fail, implementation is partial, or feature is behind a flag.