Crucible assay
Recon-informed approach evaluator. Weighs competing options against codebase constraints and returns structured recommendations with confidence scoring, kill criteria, and evidence grounding. Consumes recon briefs or caller context. Used by design, spec, migrate. Triggers on /assay, 'evaluate approaches', 'which option', 'compare alternatives'.
Clone the repository:

```
git clone https://github.com/raddue/crucible
```

Or install just this skill into `~/.claude/skills`:

```
T=$(mktemp -d) && git clone --depth=1 https://github.com/raddue/crucible "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/assay" ~/.claude/skills/raddue-crucible-assay && rm -rf "$T"
```
`skills/assay/SKILL.md`

Assay
Overview
<!-- CANONICAL: shared/dispatch-convention.md -->
All subagent dispatches use disk-mediated dispatch. See `shared/dispatch-convention.md` for the full protocol.
Evaluate competing approaches against codebase constraints. Returns a structured Assay Report with a recommendation, alternatives with kill criteria, and confidence scoring. Evidence-grounded — recommendations cite specific file:line references, not generic best practices.
Skill type: Rigid — follow exactly, no shortcuts.
Models:
- Evaluator agent: Opus (synthesis/judgment work needs the best model)
- Orchestrator: runs on whatever model the session uses
Announce at start: "I'm using the assay skill to evaluate competing approaches."
Name origin: In metallurgy, an assay tests raw material to determine its quality and composition before committing it to the forge.
Invocation API
```
/assay
  question: "How should the auth middleware handle token refresh?"
  context: { ... }
  decision_type: "architecture"
  approaches: [...]
  cascading_decisions: [...]
```
Parameters
`question` (required) — The decision or question to evaluate. One clear sentence.

`context` (required) — Evidence for the evaluator to reason against. Accepts different shapes depending on the caller:
| Caller | Context Shape | Key Fields |
|---|---|---|
| `/design` | Recon brief + agent findings | … |
| `/spec` | Recon brief + agent findings (autonomous) | … |
| `/migrate` | Recon brief + migration analysis | … |
| Generic caller | Freeform evidence | `description` (string) — unstructured context, lower confidence |
When `context` contains unrecognized keys, the evaluator treats them as additional evidence. When `context` is a bare string, treat it as `{ "description": context }`.
`decision_type` (optional) — One of `architecture` | `strategy` | `diagnosis` | `optimization`. Auto-detected from the question if omitted. Defaults to `architecture` when ambiguous.
`approaches` (optional) — Array of `{ name, description }` candidates to evaluate. When omitted, the evaluator generates 2-4 candidates from the question and context.
`cascading_decisions` (optional) — Array of `{ decision, reasoning }` objects representing prior decisions. Treated as hard constraints — the evaluator cannot modify or challenge them. Conflicts are reported in `prior_decision_conflicts`.
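Taken together, the `context` handling above amounts to a small normalization step. A minimal sketch (the helper name is hypothetical, not part of the skill):

```python
def normalize_context(context):
    """Normalize caller-supplied context into a dict of evidence.

    Bare strings become {"description": ...}; unrecognized keys in a
    dict are kept, since the evaluator treats them as extra evidence.
    """
    if isinstance(context, str):
        return {"description": context}
    if isinstance(context, dict):
        return dict(context)  # copy; keep all keys, recognized or not
    raise TypeError("context must be an object or a string")
```

Any richer shape (recon brief, migration analysis) passes through unchanged; only the bare-string case is rewritten.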
The Process
Phase 1: Input Validation
- Verify `question` is present and non-empty
- Verify `context` is present (object or string)
- If `decision_type` is provided, validate it's one of the 4 recognized values
- If `approaches` is provided, verify it's an array with at least 2 entries, each having `name` and `description`
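The Phase 1 checks can be sketched as a single pass that collects every error rather than failing on the first (an illustrative helper, not the skill's actual implementation):

```python
VALID_DECISION_TYPES = {"architecture", "strategy", "diagnosis", "optimization"}

def validate_input(params: dict) -> list:
    """Return a list of Phase 1 validation errors (empty list = valid)."""
    errors = []
    question = params.get("question")
    if not isinstance(question, str) or not question.strip():
        errors.append("question is required and must be non-empty")
    context = params.get("context")
    if not isinstance(context, (dict, str)):
        errors.append("context is required (object or string)")
    dt = params.get("decision_type")
    if dt is not None and dt not in VALID_DECISION_TYPES:
        errors.append(f"unrecognized decision_type: {dt!r}")
    approaches = params.get("approaches")
    if approaches is not None:
        if not isinstance(approaches, list) or len(approaches) < 2:
            errors.append("approaches must be an array with at least 2 entries")
        else:
            for i, a in enumerate(approaches):
                if not isinstance(a, dict) or not a.get("name") or not a.get("description"):
                    errors.append(f"approaches[{i}] must have name and description")
    return errors
```

Collecting all errors at once lets the caller fix everything in a single round trip instead of discovering problems one dispatch at a time.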
Phase 2: Dispatch Evaluator
Dispatch a single Opus agent using `skills/assay/assay-evaluator-prompt.md`.
Fill template placeholders before writing the dispatch file:
- `{{QUESTION}}` — the decision question
- `{{CONTEXT}}` — the full context object/string
- `{{DECISION_TYPE}}` — the decision type (provided or "auto-detect")
- `{{APPROACHES}}` — the approaches array (or "Generate 2-4 candidates")
- `{{CASCADING_DECISIONS}}` — cascading decisions array (or "None")
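A minimal sketch of the placeholder fill, assuming the template is plain text and the five markers appear literally (the helper name is hypothetical):

```python
import json

def fill_template(template: str, params: dict) -> str:
    """Substitute the five {{...}} placeholders before writing the dispatch file."""
    slots = {
        "{{QUESTION}}": params["question"],
        "{{CONTEXT}}": json.dumps(params["context"], indent=2)
            if isinstance(params["context"], dict) else params["context"],
        "{{DECISION_TYPE}}": params.get("decision_type") or "auto-detect",
        "{{APPROACHES}}": json.dumps(params["approaches"])
            if params.get("approaches") else "Generate 2-4 candidates",
        "{{CASCADING_DECISIONS}}": json.dumps(params["cascading_decisions"])
            if params.get("cascading_decisions") else "None",
    }
    for marker, value in slots.items():
        template = template.replace(marker, value)
    return template
```

Note the fallbacks match the list above: omitted optional parameters become the literal instruction strings the evaluator prompt expects.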
Phase 3: Validate Output
Parse the evaluator's response as JSON. Validate:
- All required fields present: `decision_type`, `confidence`, `missing_information`, `recommended`, `alternatives`, `prior_decision_conflicts`
- `recommended` has: `name`, `rationale`, `evidence`, `risks`, `kill_criteria`, `constraint_fit`
- Each alternative has: `name`, `constraint_fit`, `pros`, `cons`, `would_recommend_if`
- `constraint_fit` objects have: `pattern_alignment`, `scope_fit`, `reversibility`, `integration_risk`
- `confidence` is one of: `high`, `medium`, `low`
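These structural checks can be expressed as one pass over the report (an illustrative sketch; the field lists mirror the checklist above):

```python
REQUIRED_TOP = ["decision_type", "confidence", "missing_information",
                "recommended", "alternatives", "prior_decision_conflicts"]
RECOMMENDED_FIELDS = ["name", "rationale", "evidence", "risks",
                      "kill_criteria", "constraint_fit"]
ALTERNATIVE_FIELDS = ["name", "constraint_fit", "pros", "cons", "would_recommend_if"]
CONSTRAINT_FIT_FIELDS = ["pattern_alignment", "scope_fit", "reversibility",
                         "integration_risk"]

def validate_report(report: dict) -> list:
    """Phase 3 structural checks; returns a list of error strings."""
    errors = [f"missing field: {f}" for f in REQUIRED_TOP if f not in report]
    rec = report.get("recommended", {})
    errors += [f"recommended missing: {f}" for f in RECOMMENDED_FIELDS if f not in rec]
    for i, alt in enumerate(report.get("alternatives", [])):
        errors += [f"alternatives[{i}] missing: {f}"
                   for f in ALTERNATIVE_FIELDS if f not in alt]
    # Every constraint_fit object (recommended + alternatives) needs all four keys
    for owner in [rec] + list(report.get("alternatives", [])):
        cf = owner.get("constraint_fit", {})
        errors += [f"constraint_fit missing: {f}"
                   for f in CONSTRAINT_FIT_FIELDS if f not in cf]
    if report.get("confidence") not in ("high", "medium", "low"):
        errors.append("confidence must be high, medium, or low")
    return errors
```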
On validation failure: Retry once with the validation errors as feedback. On second failure, return:
```json
{ "error": "Evaluator produced invalid output after retry", "raw_output": "..." }
```
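The parse/validate/retry control flow might look like this, with `dispatch` and `validate` as stand-ins for the real agent dispatch and the Phase 3 checks:

```python
import json

def run_evaluator(dispatch, validate, max_attempts=2):
    """Dispatch the evaluator, parse JSON, validate; retry once with feedback.

    dispatch(feedback) sends the prompt (plus any validation feedback) and
    returns the raw text response; validate(report) returns error strings.
    """
    feedback = None
    raw = ""
    for _ in range(max_attempts):
        raw = dispatch(feedback)
        try:
            report = json.loads(raw)
        except json.JSONDecodeError as e:
            feedback = [f"invalid JSON: {e}"]
            continue
        errors = validate(report)
        if not errors:
            return report
        feedback = errors  # retry with the validation errors as feedback
    return {"error": "Evaluator produced invalid output after retry", "raw_output": raw}
```

The key design point is that the retry carries the validation errors back to the evaluator, so the second attempt is corrective rather than a blind re-roll.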
Phase 4: Return Report
Return the validated Assay Report to the caller.
Decision Type Adaptation
The evaluator adapts scoring weights based on decision type:
| Type | Primary Weight | Secondary Weight |
|---|---|---|
| `architecture` | Reversibility, constraint fit | Long-term cost, extensibility |
| `strategy` | Risk, phasing | Blast radius, team capacity |
| `diagnosis` | Evidence strength, testability | Explanation coverage, simplicity |
| `optimization` | Measurable improvement | Disruption cost, reversibility |
Output: Assay Report
```json
{
  "decision_type": "architecture",
  "confidence": "high",
  "missing_information": [],
  "recommended": {
    "name": "Event-driven via message bus",
    "rationale": "Aligns with existing src/events/bus.ts pattern...",
    "evidence": ["src/events/bus.ts:14 — existing event dispatch"],
    "risks": ["Adds async complexity to currently synchronous flow"],
    "kill_criteria": "Switch away if latency requirements exceed 50ms p99",
    "constraint_fit": {
      "pattern_alignment": "high",
      "scope_fit": "high",
      "reversibility": "two-way door",
      "integration_risk": "low"
    }
  },
  "alternatives": [
    {
      "name": "Direct service calls",
      "constraint_fit": {
        "pattern_alignment": "medium",
        "scope_fit": "high",
        "reversibility": "one-way door",
        "integration_risk": "medium"
      },
      "pros": ["Simpler mental model", "Synchronous"],
      "cons": ["Tight coupling", "Requires shared deployment"],
      "would_recommend_if": "Latency is critical or team prefers simplicity"
    }
  ],
  "prior_decision_conflicts": []
}
```
Confidence Scoring
| Level | Criteria |
|---|---|
| `high` | One approach clearly dominates on all weighted dimensions |
| `medium` | Two viable options with trade-offs that depend on priority |
| `low` | Need more information — lists what would help |
Evidence Grounding
Every recommendation must cite specific evidence from the context:
- File:line references from recon briefs
- Specific pattern names from the codebase
- Concrete constraint violations or alignments
"This is the industry standard approach" is NOT evidence. "This aligns with how `src/api/routes/users.ts` already handles it" IS evidence.
Without a recon brief, evidence cites the caller's context. Confidence scores skew lower.
Kill Criteria
- `kill_criteria` on the recommended approach: the condition that would flip the recommendation
- `would_recommend_if` on each alternative: the condition that would make it the recommendation
These make decisions revisitable without re-running the full analysis.
Error Handling
| Failure | Behavior |
|---|---|
| Missing `question` or `context` | Return an error immediately — no dispatch |
| Evaluator returns invalid JSON | Retry once with validation errors as feedback; second failure returns the error object |
| Evaluator timeout | Return an error to the caller |
| Invalid `decision_type` | Warn and default to `architecture` |
| `approaches` has fewer than 2 entries | Ignore provided approaches, let the evaluator generate candidates |
Integration
Called by
| Skill | Decision Type | Context Source | Approaches |
|---|---|---|---|
| `/design` | `architecture` | Recon brief + cascading decisions | Evaluator generates |
| `/spec` | `architecture` | Recon brief + cascading decisions (autonomous — confidence routing) | Evaluator generates |
| `/migrate` | `strategy` | Recon brief + migration analysis | Evaluator generates |
Not called by (investigated, not a fit):
`/debugging` (hypothesis evaluation uses quality-gate, not assay), `/prospector` (its competing-design evaluation is more sophisticated than assay for this use case). See #147 for rationale.
Consumer Dispatch Examples
From `/design`:

```
/assay
  question: "How should components communicate in the new auth module?"
  context: { recon brief with project_structure, existing_patterns }
  decision_type: "architecture"
  cascading_decisions: [{ decision: "Using Redis for session store", reasoning: "..." }]
```
From `/spec`:

```
/assay
  question: "How should the auth middleware handle token refresh?"
  context: { recon brief + investigation findings }
  decision_type: "architecture"
  cascading_decisions: [{ decision: "Using Redis for session store", reasoning: "..." }]
```
Spec consumes assay output autonomously: high confidence = accept, medium = terminal alert, low = block alert.
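That routing is simple enough to pin down in code (a sketch; the action names echo the sentence above, and the function itself is hypothetical):

```python
def route_on_confidence(report: dict) -> str:
    """Map the assay report's confidence level to spec's autonomous handling."""
    actions = {
        "high": "accept",            # proceed with the recommendation
        "medium": "terminal_alert",  # surface a warning, keep going
        "low": "block_alert",        # stop and wait for a human decision
    }
    return actions[report["confidence"]]
```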
From `/migrate`:

```
/assay
  question: "What migration strategy minimizes risk for the React 18→19 upgrade?"
  context: { recon brief + migration_target: "React 19", breaking_changes: [...] }
  decision_type: "strategy"
```
Standalone Usage
```
/assay
  question: "Should we use PostgreSQL or SQLite for this project?"
  context: "Small team, <10K users, read-heavy workload, deployed on single server"
```
Dispatches
- Evaluator agent (Opus) via `skills/assay/assay-evaluator-prompt.md`
Does NOT
- Investigate the codebase (that's `/recon`)
- Challenge prior decisions (that's `/design`'s Challenger agent)
- Make the decision for the user (it recommends; the caller decides)
- Iterate or loop (one dispatch, one report)