Crucible migrate
Autonomous migration planning and execution. Takes a migration target (framework upgrade, API version bump, dependency major version, deprecation removal) and produces a phased migration plan with compatibility verification, then optionally executes via build's refactor mode. Triggers on /migrate, 'migration plan', 'upgrade X from Y to Z', 'remove deprecated', 'major version bump'.
```
git clone https://github.com/raddue/crucible

T=$(mktemp -d) && git clone --depth=1 https://github.com/raddue/crucible "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/migrate" ~/.claude/skills/raddue-crucible-migrate && rm -rf "$T"
```
skills/migrate/SKILL.md

Migrate
Overview
<!-- CANONICAL: shared/dispatch-convention.md -->
All subagent dispatches use disk-mediated dispatch. See shared/dispatch-convention.md for the full protocol.
Autonomous migration planning and execution: analyzes a migration target, maps blast radius, decomposes into safe phases with compatibility layers, groups consumers into waves, then executes through build's refactor/feature mode with verification at each phase boundary.
Announce at start: "Running migrate on [migration target description]."
Skill type: Rigid -- follow exactly, no shortcuts.
Purpose: Bridge between prospector discovery ("you should modernize X") and build execution ("here is the plan, execute it"). Today that bridge is manual. /migrate makes it autonomous.
Two modes:
- Plan + Execute (default) -- produces a phased migration plan, then executes each phase through build
- Plan only -- produces the plan and saves it without executing
Invocation
```
/migrate "upgrade lodash from v3 to v4"                            # plan + execute
/migrate --plan-only "upgrade React Router v5 to v6"               # plan only, no execution
/migrate --execute docs/plans/2026-03-23-lodash-migration-plan.md  # execute existing plan
/migrate --orgs org1,org2 "upgrade shared-auth v2 to v3"           # cross-repo
```
Communication Requirement (Non-Negotiable)
Between every agent dispatch and every agent completion, output a status update to the user. This is NOT optional -- the user cannot see agent activity without your narration.
Every status update must include:
- Current phase -- Which pipeline phase you're in
- What just completed -- What the last agent reported
- What's being dispatched next -- What you're about to do and why
- Phase progress -- Which phases are done, in progress, or pending
After compaction: If you just experienced context compaction, follow the Compaction Recovery procedure, re-read state from the scratch directory, and output current status before continuing. Do NOT proceed silently.
Examples of GOOD narration:
"Phase 2 complete. Blast radius mapper found 14 direct consumers across 3 modules. Dispatching Phase Planner to decompose into migration phases."
"Phase 7, Wave 2 complete. 8/14 consumers migrated. Build reported all tests passing after Phase 3b. Proceeding to Wave 3 (4 consumers)."
This requirement exists because: Migrations are long-running and high-stakes. The user needs visibility into progress, blast radius, and phase outcomes to decide whether to continue or intervene.
Pipeline Status
Write a status file to ~/.claude/projects/<hash>/memory/pipeline-status.md at every narration point. This file is overwritten (not appended) and provides ambient awareness for the user in a second terminal.
Write Triggers
Write the status file at every point where the Communication Requirement mandates narration: before dispatch, after completion, phase transitions, health changes, escalations, and after compaction recovery.
Status File Format
The status file uses this structure (overwritten in full each time):
```
# Pipeline Status

**Updated:** <current timestamp>
**Started:** <timestamp from first write -- persisted across compaction>
**Skill:** migrate
**Phase:** <current phase, e.g. "3 -- Decompose into Phases">
**Health:** <GREEN|YELLOW|RED>
**Suggested Action:** <omit when GREEN; concrete one-sentence action when YELLOW/RED>
**Elapsed:** <computed from Started>

## Recent Events
- [HH:MM] <most recent event>
- [HH:MM] <previous event>
(last 5 events, newest first)
```
Skill-Specific Body
Append after the shared header:
```
## Migration Progress

| Phase | Description | Status |
|-------|-------------|--------|
| 0 | Pre-flight | DONE |
| 1 | Analyze target | DONE |
| 2 | Map blast radius | IN PROGRESS |
| 3 | Decompose phases | PENDING |
| 4 | Compatibility layer | PENDING |
| 5 | Plan waves | PENDING |
| G | User gate | PENDING |
| 6 | Rollback points | PENDING |
| 7 | Execute | PENDING |
| 8 | Cleanup | PENDING |

## Blast Radius
- Direct consumers: 14
- Cross-repo: 3 repos (if applicable)

## Execution (Phase 7)
- Wave 1: 4/4 consumers DONE
- Wave 2: 2/6 consumers IN PROGRESS
- Phase test suite: PASSING
```
Health State Machine
Health transitions are one-directional within a phase: GREEN -> YELLOW -> RED. Phase boundaries reset to GREEN.
- Phase boundaries (reset to GREEN): each new phase
- YELLOW: quality gate round 3+ on any phase, phase retry in progress, medium-confidence decision
- RED: phase execution failure, test suite failure unresolved, stagnation in quality gate, compatibility layer test failure
When health is YELLOW or RED, include **Suggested Action:** with a concrete, context-specific sentence.
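As a sketch, the one-directional rule can be encoded as a tiny state machine. Function and constant names here are illustrative, not part of the skill:

```python
# Illustrative encoding of the health rule: within a phase, health only worsens.
ORDER = {"GREEN": 0, "YELLOW": 1, "RED": 2}

def advance(current: str, proposed: str) -> str:
    """Within a phase, health may only move GREEN -> YELLOW -> RED, never back."""
    return proposed if ORDER[proposed] > ORDER[current] else current

def reset_on_phase_boundary() -> str:
    """Each phase boundary resets health to GREEN."""
    return "GREEN"
```

A YELLOW pipeline that later reports a GREEN-worthy event stays YELLOW until the next phase boundary.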
Inline CLI Format
Output concise inline status alongside the status file write:
- Minor transitions (dispatch, completion): one-liner, e.g. `Phase 2 [blast radius] 14 consumers found | GREEN | 12m`
- Phase changes and escalations: expanded block with `---` separators
- Health transitions: always expanded with old -> new health
Compaction Recovery
After compaction, before re-writing the status file:
- Read the existing pipeline-status.md to recover the Started timestamp and Recent Events buffer
- Reconstruct phase, health, and skill-specific body from scratch directory state files
- Write the updated status file
- Output inline status to CLI
Model Allocation
| Agent | Model | Dispatch Method |
|---|---|---|
| Orchestrator | Opus | -- |
| Migration Analyzer | Opus | Agent tool (Explore) |
| Blast Radius Mapper | Sonnet | Agent tool (general-purpose) |
| Phase Planner | Opus | Task tool |
| Compatibility Layer Designer | Opus | Task tool |
| Consumer Wave Grouper | Sonnet | Task tool |
Scratch Directory
Canonical path: ~/.claude/projects/<project-hash>/memory/migrate/scratch/<run-id>/
Files:
- invocation.md -- migration target, mode (plan-only/full), orgs scope
- migration-analysis.md -- Phase 1 output (API delta, breaking changes, complexity)
- blast-radius.md -- Phase 2 output (impact manifest + consumer registry)
- phase-plan.md -- Phase 3 output (ordered phases with safe stopping points)
- compatibility-spec.md -- Phase 4 output (shim/adapter design)
- wave-plan.md -- Phase 5 output (consumer wave assignments)
- migration-plan.md -- consolidated plan (presented at user gate)
- rollback-points.md -- Phase 6 output (per-phase rollback definitions)
- execution-status.json -- Phase 7 tracking (per-phase execution status)
- phase-N/ -- per-phase build execution scratch (delegated to build's scratch)
Stale cleanup: Delete scratch directories older than 48 hours (migrations run longer than most skills). Preserve directories where execution-status.json shows any phase in executing or failed status.
Context Management
Follow spec's context budget management pattern:
- Preemptive context checkpoint between phases: Before starting a new phase, write current state to scratch directory so compaction recovery can resume
- Per-phase context lifecycle: Load only current phase's inputs. Do not carry forward raw outputs from completed phases -- use the structured scratch files instead
- Complex migrations (10+ phases): Use aggressive summarization of completed phases. Carry forward only the phase-plan.md and execution-status.json, not individual phase outputs
Compaction Recovery
Recovery procedure:
- Read invocation.md -- recover migration target, mode, scope
- Read execution-status.json -- determine which phases are complete/in-progress/pending
- Read migration-plan.md -- recover the approved plan
- For any phase in executing status: restart that phase from the beginning (safe because build's refactor mode has its own rollback)
- Resume processing from the next pending phase
Pipeline-Active Marker
Before any dispatch work, check for a crashed prior migration session:
- Check <scratch>/.pipeline-active (where <scratch> is ~/.claude/projects/<hash>/memory/)
- Not found: Write the pipeline-active marker (JSON with pipeline_id set to current session ID, skill set to "migrate", phase set to "0", start_time set to current ISO-8601 timestamp, scratch_dir and dispatch_dir paths, branch from git branch --show-current, baseline_sha from git rev-parse HEAD). Proceed to Phase 0.
- Found, same pipeline_id: Compaction recovery (existing behavior). Do not re-write the marker.
- Found, different pipeline_id: Previous migration session crashed. Check marker's branch against current branch — if mismatched, warn the user which branch the crashed session was on. Present to user: "Previous migration session on branch [marker.branch] crashed. Start fresh? [yes]" Delete the stale marker. Write a fresh marker. Proceed to Phase 0. (Full replay orchestration for migrate is deferred -- detection and cleanup only for now.)
Marker cleanup: Delete .pipeline-active after Phase 8 (Cleanup) completes successfully.
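The marker check-and-write could look roughly like this sketch; function names and the dispatch_dir layout are illustrative assumptions, and the crashed-session branch warning and user prompt are left to the caller:

```python
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path

def _git(*args: str) -> str:
    try:
        return subprocess.run(["git", *args], capture_output=True,
                              text=True).stdout.strip()
    except OSError:
        return ""  # git unavailable; leave the field blank

def write_marker(marker: Path, session_id: str) -> None:
    marker.write_text(json.dumps({
        "pipeline_id": session_id,
        "skill": "migrate",
        "phase": "0",
        "start_time": datetime.now(timezone.utc).isoformat(),
        "scratch_dir": str(marker.parent),
        "dispatch_dir": str(marker.parent / "dispatch"),  # illustrative layout
        "branch": _git("branch", "--show-current"),
        "baseline_sha": _git("rev-parse", "HEAD"),
    }, indent=2))

def check_pipeline_marker(scratch: Path, session_id: str) -> str:
    """Return 'fresh', 'recover', or 'crashed' per the rules above."""
    marker = scratch / ".pipeline-active"
    if not marker.exists():
        write_marker(marker, session_id)  # not found: write marker, proceed
        return "fresh"
    prior = json.loads(marker.read_text())
    if prior["pipeline_id"] == session_id:
        return "recover"  # same session: compaction recovery, keep marker
    # Different session: prior run crashed. The orchestrator warns on branch
    # mismatch, asks the user, deletes the stale marker, writes a fresh one.
    return "crashed"
```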
Phase 0: Pre-flight
Before any agent dispatch:
- Consult cartographer (consult mode) -- load known module boundaries for blast radius mapping
- Consult forge (feed-forward) -- check past lessons, especially prior migration outcomes
- Handle --execute -- if specified, read the existing plan file, validate it has the expected structure (phases, consumer registry, rollback points), and skip to Phase 7.
- Dispatch recon with the consumer-registry module:

  ```
  /recon task: "Map structure and consumers for migration: <migration target>"
         context: { target: "<migration-target-symbol>" }
         modules: ["consumer-registry"]
  ```

  Write recon's Investigation Brief to scratch/<run-id>/recon-brief.md. Write recon's Consumer Registry section to scratch/<run-id>/consumer-registry-from-recon.md.

  On recon failure: "Recon failed: [reason]. Blast Radius Mapper will discover consumers from scratch." Proceed without recon context -- Phase 2 falls back to full consumer discovery (existing behavior).
- Write invocation.md to the scratch directory with migration target, mode, and scope.
Phase 1: Analyze Migration Target
Dispatch the Migration Analyzer (Opus, Agent tool, Explore subagent) using ./migration-analyzer-prompt.md.
Input:
- Migration description from user
- Cartographer data (if available)
- Framework context from dependency manifests (following prospector's Phase 0.5 pattern: read package.json, *.csproj, requirements.txt, go.mod, Cargo.toml, etc.)
The analyzer investigates:
- Changelog / migration guide (CHANGELOG.md, MIGRATION.md, UPGRADING.md in repo or dependency)
- API diff between versions (old vs new type definitions, function signatures, endpoint contracts)
- Breaking changes (backward-incompatible removals or signature changes)
- Deprecation notices (what the new version removes that the old warned about)
- New capabilities (additions consumers may want to adopt during migration)
Output: Structured migration analysis written to scratch/<run-id>/migration-analysis.md.
Complexity classification:
- Low: <5 breaking changes, <10 consumers, no behavioral changes (pure API rename/reorganization)
- Medium: 5-20 breaking changes, 10-50 consumers, some behavioral changes
- High: 20+ breaking changes, 50+ consumers, significant behavioral changes, cross-repo scope
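One hedged way to encode these thresholds as code. This sketch treats any behavioral change as at least Medium and cross-repo scope as High; the spec's "significant behavioral changes" qualifier is a judgment call the sketch does not capture:

```python
def classify_complexity(breaking_changes: int, consumers: int,
                        behavioral_changes: bool = False,
                        cross_repo: bool = False) -> str:
    """Map the Low/Medium/High thresholds above onto a label (illustrative)."""
    if cross_repo or breaking_changes >= 20 or consumers >= 50:
        return "High"
    if breaking_changes >= 5 or consumers >= 10 or behavioral_changes:
        return "Medium"
    return "Low"
```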
Phase 2: Map Blast Radius
Dispatch the Blast Radius Mapper (Sonnet, Agent tool, general-purpose) using ./blast-radius-mapper-prompt.md.
Input: Migration analysis from Phase 1, cartographer module data (if available), recon consumer registry (if available from Phase 0).
When recon's consumer registry is available: The mapper receives pre-discovered direct consumers and focuses on transitive dependencies, test coverage, and configuration/wiring. Direct consumer discovery is skipped -- the mapper verifies and augments the registry instead.
When recon's consumer registry is NOT available (recon failed): The mapper falls back to full discovery (existing behavior).
Intra-repo mapping (follows build refactor mode's blast radius analysis pattern):
- Direct consumers -- from recon's consumer registry (or discovered from scratch if recon failed)
- Indirect dependents -- code that depends on direct consumers (transitive)
- Test coverage -- which tests exercise the target behavior
- Configuration/wiring -- config files, DI registrations, build scripts referencing the target
Output: Impact manifest + consumer registry written to scratch/<run-id>/blast-radius.md.
Consumer registry entry format:
```
- consumer: <file path or org/repo>
  usage_pattern: "calls TargetClass.method(args)"
  migration_complexity: low|medium|high
  independent: true|false
  reason_if_dependent: "shares state with <other consumer>"
```
Phase 3: Decompose into Phases
Dispatch the Phase Planner (Opus, Task tool) using ./phase-planner-prompt.md.
Input: Migration analysis + blast radius + consumer registry.
The planner produces an ordered list of migration phases. Each phase must satisfy the safe stopping point invariant: after completing the phase, the codebase compiles, all tests pass, and both old and new code paths function correctly.
Standard phase template (adapted by planner based on migration type):
| Phase | Description | Build Mode | Typical Content |
|---|---|---|---|
| 1 | Introduce new version | Feature | Add new dependency alongside old |
| 2 | Add compatibility layer | Feature | Create shims/adapters |
| 3a-3N | Migrate consumer waves | Refactor | Update consumers wave-by-wave |
| 4 | Remove compatibility layer | Refactor | Delete shims once all consumers migrated |
| 5 | Remove old version | Refactor | Delete old dependency |
Each phase entry includes:
- Phase number and description
- Affected files/repos
- Build mode (feature or refactor)
- Estimated effort (Low/Medium/High)
- Safe stopping point verification criteria
- Dependencies on prior phases
Legacy Migration Patterns
The planner must verify the phase plan against these operational patterns. These address the human and organizational side of migration that the technical decomposition doesn't cover.
- Map the territory first — Understand what the legacy system actually does, not what it was designed to do. Hidden workflows, tribal knowledge in column headers, undocumented behaviors ARE the requirements spec. If /recon with consumer-registry was run, the technical mapping exists — but the planner should flag operational unknowns (manual processes, workarounds, tribal knowledge) as risks requiring user confirmation before cutover phases.
- Build alongside, not on top of — The new system must run in parallel with the old. Both are live during migration. Users try the new while the old is a safety net. The phase plan must include a coexistence period — never require a hard cutover without one. If Phase 2 (compatibility layer) was skipped, the planner must justify why parallel operation is unnecessary.
- Cut over by group, not by system — Migration happens one team/department at a time, not all at once. The phase plan should identify rollout groups (who moves when) in addition to technical phases. Waves 3a-3N should map to user groups, not just code modules.
- Don't migrate data unless you must — Historical data in the old system is fine where it is. The new system starts capturing from day one. If historical queries are needed, build a read-only bridge. The planner should flag any data migration phases as high-risk and verify that data migration is genuinely required (not just assumed).
- Kill the old system explicitly — The phase plan must include an explicit decommission step: archive the old system and remove access. "Still available just in case" systems never die. This should be the final phase, with clear criteria for when it triggers (last user migrated, parallel period complete, no fallback queries in N days).
Output: scratch/<run-id>/phase-plan.md
Phase 4: Design Compatibility Layer
Dispatch the Compatibility Layer Designer (Opus, Task tool) using ./compatibility-designer-prompt.md.
Input: Migration analysis (API delta) + phase plan (which phases need coexistence).
Skip condition: If Phase 3 determined no coexistence period is needed (e.g., simple in-place rename with no external consumers), skip this phase. Write "Compatibility layer: SKIPPED (no coexistence period required)" to scratch.
Output: Compatibility specification written to scratch/<run-id>/compatibility-spec.md:
- Shim inventory -- list of adapters/facades with their interfaces
- Mapping -- old API call -> shim -> new API call (for each shim)
- Direction -- strangler fig (old-to-new) or facade (new-to-old)
- Tests -- what tests the shim needs (bidirectional correctness)
- Removal criteria -- when each shim can be safely deleted
Phase 5: Plan Consumer Waves
Dispatch the Consumer Wave Grouper (Sonnet, Task tool) using ./wave-grouper-prompt.md.
Input: Consumer registry from Phase 2 + phase plan from Phase 3.
Algorithm:
- Build dependency graph among consumers (consumer A depends on consumer B if A imports/calls B)
- Topological sort: consumers with no dependencies on other consumers go in Wave 1
- Consumers depending only on Wave 1 consumers go in Wave 2
- Continue until all consumers assigned
- Within each wave, verify independence (no consumer in the wave depends on another in the same wave)
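The algorithm above is a layered topological sort. A sketch, where the input maps each consumer to the set of other consumers it depends on:

```python
def plan_waves(deps: dict[str, set[str]]) -> list[list[str]]:
    """Group consumers into waves so each wave depends only on earlier waves."""
    assigned: set[str] = set()
    waves: list[list[str]] = []
    remaining = dict(deps)
    while remaining:
        # Next wave: consumers whose dependencies are all already migrated
        wave = sorted(c for c, d in remaining.items() if d <= assigned)
        if not wave:
            raise ValueError(f"dependency cycle among {sorted(remaining)}")
        # Independence check: no member depends on another in the same wave
        assert all(not (deps[c] & set(wave)) for c in wave)
        waves.append(wave)
        assigned.update(wave)
        for c in wave:
            del remaining[c]
    return waves
```

A cycle among consumers means no valid wave ordering exists, so the sketch raises rather than guessing; the real grouper would presumably escalate that case.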
Output: Wave assignments written to scratch/<run-id>/wave-plan.md.
For cross-repo migrations: each wave entry includes repo name, estimated effort per repo, and CI pipeline considerations.
User Gate
After Phase 5, the orchestrator consolidates all outputs into scratch/<run-id>/migration-plan.md.
Before presenting the plan to the user, dispatch /assay for structured strategy evaluation:

```
/assay question: "Is this migration approach the best strategy for <migration target>?"
       context: { recon brief + migration analysis + blast radius summary }
       decision_type: "strategy"
```
On assay failure: "Assay evaluation failed: [reason]. Presenting plan without structured evaluation." Present the plan without assay's scoring (existing behavior).
Present the complete plan:
```
### Migration Plan: [target description]

**Complexity:** [Low/Medium/High]
**Phases:** N
**Consumer waves:** M
**Estimated total effort:** [estimate]
**Cross-repo scope:** [yes/no, N repos]
**Strategy Confidence:** [from assay — high/medium/low]
**Kill Criteria:** [from assay — when to abort this migration approach]
**Missing Information:** [from assay — what would increase confidence]

#### Phase Summary
[table of phases with descriptions, affected files, effort, build mode]

#### Compatibility Layer
[shim inventory or "not required"]

#### Consumer Waves
[wave assignments with independence verification]

#### Rollback Strategy
[per-phase rollback approach]
```
User may: approve, modify phases, reorder waves, exclude consumers, add consumers, or abort.
On approval: Save the plan to docs/plans/YYYY-MM-DD-<topic>-migration-plan.md. If --plan-only mode: stop here, report success.
REQUIRED SUB-SKILL: Use crucible:quality-gate on the migration plan with artifact type "plan". Iterate until clean or stagnation. (Non-negotiable — see Quality Gate Requirement.)
Phase 6: Define Rollback Points
Orchestrator-local work (no agent dispatch).
For each phase in the approved plan, define:
- Rollback trigger -- what failure condition causes rollback
- Rollback scope -- which commits to revert
- Post-rollback verification -- which tests to run
- Impact on other phases -- does rolling back phase N invalidate phase N+1?
Rollback principles:
- Each phase is independently revertible (reverting phase N does not require reverting phases 1 through N-1)
- The compatibility layer MUST remain in place until the cleanup phase -- this is what makes intermediate rollback safe
- Cross-repo rollback: if repo A's migration fails in wave M, only repo A's wave M changes are reverted; other repos in wave M are unaffected if they migrated independently
Write to scratch/<run-id>/rollback-points.md.
Phase 7: Execute via Build
Sequential phase execution. Phases are strictly sequential -- coexistence correctness depends on prior phase completion.
For each phase in the approved plan:

1. Update execution-status.json: phase -> "executing"
2. Determine build mode (feature or refactor) from the phase plan
3. Dispatch build with:
   - Mode: feature or refactor (from phase plan)
   - Design doc: phase description + affected files + compatibility spec (if applicable)
   - Scope: phase's file list
4. Build executes its full pipeline (design -> plan -> execute -> complete)
5. After build completes, run migration-specific verification:
   - Both old and new API paths respond correctly (if coexistence phase)
   - Compatibility layer tests pass (if shim exists)
   - Consumer tests pass for migrated consumers
   - If verification passes: update execution-status.json: phase -> "complete"
   - If verification fails: update execution-status.json: phase -> "failed", execute the rollback procedure from Phase 6, and escalate to the user with failure context
6. Quality-gate the phase output (artifact type: code)
7. Test-coverage audit on the phase's changes
8. Proceed to the next phase
Between-phase gate: Full test suite must pass before proceeding to the next phase. This is stricter than build's intra-task review because migration phases have higher blast radius than individual refactoring tasks.
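The per-phase loop might be sketched as follows, with the build dispatch, verification, rollback, escalation, and test-suite steps injected as callables. All names are placeholders for the real orchestrator actions, and the phase dict shape is an assumption:

```python
import json
from pathlib import Path

def execute_phases(phases, status_path: Path, dispatch_build, run_verification,
                   rollback, escalate, run_full_suite) -> dict:
    """Run phases strictly sequentially, persisting status at each transition."""
    status: dict = {}

    def persist() -> None:
        status_path.write_text(json.dumps(status))  # overwritten, not appended

    for phase in phases:
        status[phase["id"]] = "executing"
        persist()
        dispatch_build(mode=phase["build_mode"], scope=phase["files"])
        if not run_verification(phase):
            status[phase["id"]] = "failed"
            persist()
            rollback(phase)   # rollback procedure from Phase 6
            escalate(phase)   # escalate with failure context
            return status
        status[phase["id"]] = "complete"
        persist()
        if not run_full_suite():  # between-phase gate: full suite must pass
            escalate(phase)
            return status
    return status
```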
Phase 8: Cleanup
After all consumer waves complete:
- Remove compatibility layer (dispatch build refactor mode targeting shim files)
- Remove old version dependency (update dependency manifests)
- Run full test suite
- Quality-gate the cleanup changes
- Final migration-specific verification: only new API paths respond (old paths should fail gracefully or not exist)
Cleanup is a distinct phase because premature shim removal is the most common migration failure mode. Keeping it separate ensures the user explicitly approves shim removal.
Post-cleanup:
- Dispatch crucible:cartographer (record mode) -- record migration discoveries
- Dispatch crucible:forge (retrospective) -- capture migration outcome and lessons
- Delete the .pipeline-active marker from the scratch directory.
Escalation Triggers
Escalate to the user when:
- Phase execution failure after rollback
- Quality gate stagnation on any phase
- Test suite failures not obviously related to the migration
- Cross-repo migration coordination failure (one repo fails while others succeed in the same wave)
- Compatibility layer tests fail (the shim is incorrect -- this is a design problem, not an execution problem)
- User-requested abort at any point
What the Orchestrator Must NOT Do
- Implement migration code directly -- dispatch build for all code changes
- Skip compatibility layer for "simple" migrations without analysis confirming it's unnecessary
- Execute phases in parallel -- phases are strictly sequential; coexistence depends on prior phase
- Remove compatibility layer before user-approved cleanup phase
- Proceed past the user gate without explicit approval
Integration
| Skill | How Used | When |
|---|---|---|
| recon | Consumer-registry module: structural context + direct consumer discovery. Fallback: Blast Radius Mapper discovers consumers from scratch. | Phase 0 |
| assay | Strategy evaluation: structured evaluation of migration approach with kill criteria. Fallback: present plan without assay scoring. | User Gate |
| cartographer | Consult mode: module boundaries for blast radius | Phase 0 |
| cartographer | Record mode: record migration discoveries | Phase 8 |
| forge | Feed-forward: past migration lessons | Phase 0 |
| forge | Retrospective: capture migration outcome | Phase 8 |
| build | Refactor mode: per-phase execution for restructuring phases | Phase 7 |
| build | Feature mode: per-phase execution for additive phases | Phase 7 |
| quality-gate | Per-phase gate: artifact type code, per phase | Phase 7 |
| quality-gate | Plan gate: artifact type plan, on migration plan | After Phase 5 |
| test-coverage audit | Per-phase audit: test alignment after each phase | Phase 7 |
| prospector | Upstream: prospector discovers "modernize X"; migrate plans the transition | -- |
Prompt Templates
- ./migration-analyzer-prompt.md -- Phase 1 migration target analysis
- ./blast-radius-mapper-prompt.md -- Phase 2 consumer and dependency mapping
- ./phase-planner-prompt.md -- Phase 3 phase decomposition
- ./compatibility-designer-prompt.md -- Phase 4 shim/adapter design
- ./wave-grouper-prompt.md -- Phase 5 consumer wave assignment
Quality Gate Requirement (Non-Negotiable)
Every quality gate in this pipeline MUST run to completion. This is NOT optional — you may NOT self-assess whether a quality gate is "needed" based on migration step size, complexity, or perceived mechanical nature.
Migration work is especially vulnerable to "this is mechanical/boilerplate" rationalization. Mechanical changes still introduce bugs — mismatched imports, forgotten call sites, subtle behavioral differences in new APIs. Quality gates catch these regardless of how "simple" the migration step appears.
Fixing findings is NOT the same as passing the gate. The iteration loop must complete with a clean verification round (0 Fatal, 0 Significant on a fresh review).
The only valid skip is an unambiguous user instruction specifically referencing the gate. General feedback is not skip approval.
Gate tracking: Before compiling the migration summary, verify gate round counts by category: plan (Phase 5), code-per-phase (Phase 7, one entry per executed phase), cleanup (Phase 8). Each must show round count >= 1 with clean final rounds. If any gate was skipped with explicit user approval, record it as USER_SKIP. A zero without user approval indicates a gate was dropped — report this in the summary.
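A hedged sketch of the gate-count check, assuming a simple log of gate rounds per category where each entry records a final result; the real tracking format is not specified here:

```python
def verify_gate_rounds(gate_log: dict[str, list[dict]],
                       executed_phases: int) -> list[str]:
    """Flag gate categories whose round count falls short of the requirement."""
    # Minimum gates: one plan gate, one code gate per executed phase, one cleanup
    expected = {"plan": 1, "code-per-phase": executed_phases, "cleanup": 1}
    problems: list[str] = []
    for category, minimum in expected.items():
        entries = gate_log.get(category, [])
        clean = sum(1 for e in entries if e.get("result") == "clean")
        skips = sum(1 for e in entries if e.get("result") == "USER_SKIP")
        if clean + skips < minimum:
            problems.append(f"{category}: gate dropped without user approval")
    return problems
```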
Quality Gate Orchestration
| Pipeline Stage | Artifact Type | Purpose |
|---|---|---|
| After Phase 5 (plan consolidation) | plan | Verify migration plan completeness, phase boundary safety |
| Phase 7 (per-phase, after build completes) | code | Verify phase implementation correctness |
| Phase 8 (after cleanup) | code | Verify clean removal of compatibility layer |
Red Flags
Quality gate violations:
- Skipping a quality gate because the migration step is "mechanical" or "boilerplate"
- Self-assessing that a quality gate is unnecessary based on perceived migration step simplicity
- Declaring a quality gate "done" after fixing findings without a clean verification round (fixing is not passing)
- Short-circuiting the quality-gate iteration loop by assuming fixes are self-evidently correct
- Interpreting general user feedback as approval to skip a quality gate that has not yet run
Compression State violations:
- Skipping Compression State Block emission at checkpoint boundaries
- Emitting a Compression State Block with stale or missing Key Decisions (decisions must be cumulative across all prior blocks)
- Allowing the Goal field to drift across successive Compression State Blocks (must match original user request)
- Exceeding 10 entries in the Key Decisions list without overflow-compressing the oldest