Nexus-agents implement-feature

Install

Source · Clone the upstream repo:
git clone https://github.com/williamzujkowski/nexus-agents

Claude Code · Install into ~/.claude/skills/:
T=$(mktemp -d) && git clone --depth=1 https://github.com/williamzujkowski/nexus-agents "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/implement-feature" ~/.claude/skills/williamzujkowski-nexus-agents-implement-feature && rm -rf "$T"

Manifest: skills/implement-feature/SKILL.md

Source content

Implement Feature Skill

Canonical sources: CLAUDE.md Workflow Templates · docs/development/CONTRIBUTION_GUIDE.md · CODING_STANDARDS.md

Full workflow: CONTRIBUTION_GUIDE.md

Pre-Implementation Checklist

  1. Verify context:
    TZ='America/New_York' date && git status
  2. Check/create GitHub issue:
    gh issue list --state open
  3. Check research registry if implementing a technique

Hard Design Decisions — Constraint-Divergent Design

When the problem has multiple plausible approaches (e.g., "event-driven or polling?", "synchronous vs queue-based?", "monolithic helper vs split modules?"), do not generate the same solution three times with different variable names. Instead, articulate 2–3 distinct constraints first, then sketch one solution per constraint and compare.

Anchor constraints in real tradeoffs:

  • "minimize allocations" vs "minimize lines of code" vs "minimize external deps"
  • "lowest latency" vs "lowest memory" vs "easiest to test"
  • "fewest moving parts" vs "easiest to extend" vs "matches existing pattern X"

Three constraints that each eliminate ~70% of the solution space leave you searching ~2.7% of it (0.3³ ≈ 0.027): a focused region, not blind sampling. (Pattern adapted from itigges22/ATLAS PlanSearch; the math is theirs.)
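
For a concrete feel, here is a compact TypeScript illustration of the discipline; the decision, the constraints chosen, and every name below are hypothetical, not taken from this repo.

// Hypothetical decision: how should the orchestrator learn about agent task updates?
//
// Constraint A (fewest moving parts): poll the task store on each loop tick.
// Constraint B (lowest latency): push updates through an event emitter.
// Constraint C (easiest to test): hide the choice behind a narrow, injectable interface.
//
// Comparing the sketches: Constraint C subsumes the other two, since both polling
// and push-based sources can implement the same interface, and tests can inject a fake.
type TaskUpdate = { taskId: string; status: "pending" | "running" | "done" };

interface TaskUpdateSource {
  next(): Promise<TaskUpdate | null>; // null when no update is available yet
}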

Apply this discipline only to genuinely multivalent decisions. Indicators:

  • Multiple plausible architectures where reviewers would reasonably disagree
  • A wrong choice would be expensive to reverse (data-shape migration, public API surface, cross-package boundary)
  • You catch yourself thinking "I'll just pick one and see what review says" — pick deliberately, by constraint, instead

Out of scope: routine work where the approach is dictated by canonical paths or existing patterns. Don't over-apply.

Implementation Process

Phase 1: Interface First

// Define interface FIRST
interface IFeature {
  method(input: Input): Promise<Result<Output, Error>>;
}

See CONTRIBUTION_GUIDE.md for boundary checklist.
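
If the codebase does not already export a Result type, a minimal local sketch keeps the interface compilable while the implementation is still a stub; Input, Output, and the Feature class name here are placeholders, not repo conventions.

// Minimal Result shape so the interface compiles before any real logic exists.
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

interface Input { query: string }
interface Output { answer: string }

interface IFeature {
  method(input: Input): Promise<Result<Output, Error>>;
}

// Stub that satisfies the interface; the real behavior arrives in Phase 2 via TDD.
class Feature implements IFeature {
  async method(_input: Input): Promise<Result<Output, Error>> {
    return { ok: false, error: new Error("not implemented") };
  }
}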

Phase 2: TDD

  1. Write a failing test (see the sketch after this list)
  2. Run:
    pnpm test -- --watch
  3. Implement to pass
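
A minimal sketch of step 1, assuming Vitest as the runner (adjust the import if the repo uses Jest or another framework); Feature and the module path are the hypothetical Phase 1 names above.

import { describe, expect, it } from "vitest";
import { Feature } from "./feature"; // hypothetical module path

describe("Feature.method", () => {
  it("returns an ok result for a valid input", async () => {
    const feature = new Feature();
    const result = await feature.method({ query: "example" });
    // Fails against the Phase 1 stub (ok: false) until the real logic is implemented.
    expect(result.ok).toBe(true);
  });
});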

Phase 3: Quality Gates

pnpm lint && pnpm typecheck && pnpm test

Phase 4: Commit and PR

git commit -m "feat(scope): description

Closes #<issue>

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>"

gh pr create --title "feat(scope): description" --base main

Quality Checklist

See CODING_STANDARDS.md for full checklist.

  • Tests pass, coverage ≥ 80%
  • Lint and types clean
  • Files ≤ 400 lines, functions ≤ 50 lines
  • Interface defined first

Implementation Complete Checklist

Before marking ANY technique or feature as "implemented", verify ALL of the following:

Code Requirements

  • Code exists in the specified integration_files
  • All functions have explicit return types
  • No any types (use unknown instead; see the sketch below)
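
A small illustration of the explicit-return-type and unknown-over-any rules; parseConfig and its expected shape are hypothetical.

// unknown forces a type check before use; any would silently allow misuse.
function parseConfig(raw: unknown): { retries: number } {
  if (
    typeof raw === "object" &&
    raw !== null &&
    "retries" in raw &&
    typeof (raw as { retries: unknown }).retries === "number"
  ) {
    return { retries: (raw as { retries: number }).retries };
  }
  throw new Error("invalid config: expected a numeric 'retries' field");
}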

Quality Gates

  • pnpm lint passes with zero errors
  • pnpm typecheck passes
  • pnpm test passes (relevant tests)

Documentation Updates (if research-related)

  • docs/research/registry/techniques.yaml: set status: implemented and add a decision_history entry
  • docs/research/registry/papers.yaml: implementation_status updated
  • docs/research/RESEARCH_INDEX.md: Quick Stats updated if counts changed

GitHub Tracking

  • Implementation issue closed with summary comment
  • PR merged (if applicable)

Do NOT mark as implemented if: tests fail, implementation is partial, or feature is behind a flag.