Mycelium assumption-test

Design the smallest viable test to validate or invalidate a critical assumption. Based on Torres's assumption testing framework.

Install

Source: clone the upstream repo

    git clone https://github.com/haabe/mycelium

Claude Code: install into ~/.claude/skills/

    T=$(mktemp -d) && git clone --depth=1 https://github.com/haabe/mycelium "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/assumption-test" ~/.claude/skills/haabe-mycelium-assumption-test && rm -rf "$T"

Manifest: .claude/skills/assumption-test/SKILL.md

Source content (SKILL.md)

Assumption Testing

Every solution rests on assumptions. Test the riskiest ones first with the lightest method possible.

Assumption Types (Torres / Cagan)

| Type | Question | Example |
| --- | --- | --- |
| Desirability | Will users want this? | "Users will switch from their current tool" |
| Usability | Can users figure it out? | "Users can complete onboarding in < 5 min" |
| Feasibility | Can we build this? | "We can process 10K requests/sec" |
| Viability | Should we build this? (business) | "Unit economics work at scale" |
| Ethical | Should we build this? (morally) | "This doesn't exploit user vulnerabilities" |

Step 1: Map Assumptions

For the target solution, list ALL assumptions. Be honest: most "obvious" things are actually assumptions.
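A minimal sketch of a mapped assumption list, assuming a simple YAML layout. The field names here are illustrative, not the mycelium canvas schema, and the solution name is hypothetical:

```yaml
# Illustrative only: field names are hypothetical, not the canvas schema.
solution: "self-serve onboarding flow"   # hypothetical example solution
assumptions:
  - id: A1
    type: desirability
    statement: "Users will switch from their current tool"
  - id: A2
    type: usability
    statement: "Users can complete onboarding in < 5 min"
  - id: A3
    type: feasibility
    statement: "We can process 10K requests/sec"
```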

Step 2: Prioritize (2x2 Matrix)

Plot each assumption on:

  • X-axis: How much evidence do we have? (low to high)
  • Y-axis: How important is this to the solution's success? (low to high)

Test first: High importance + Low evidence (top-left quadrant)
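Continuing the sketch from Step 1 (ratings and field names remain hypothetical), each assumption gets an importance and an evidence rating, and high importance plus low evidence marks it for testing first:

```yaml
# Illustrative only: ratings and field names are hypothetical.
assumptions:
  - id: A2                 # "Users can complete onboarding in < 5 min"
    importance: high
    evidence: low          # top-left quadrant: test this first
  - id: A3                 # "We can process 10K requests/sec"
    importance: high
    evidence: high         # already well evidenced; no test needed yet
  - id: A1                 # "Users will switch from their current tool"
    importance: low
    evidence: low          # defer until more important assumptions are tested
```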

Step 3: Choose the Lightest Test

| Test Type | Effort | Signal Quality | When to Use |
| --- | --- | --- | --- |
| Data mining | Hours | Variable | You have existing behavioral data |
| One-question survey | Hours | Low-Medium | Quick pulse on a specific question |
| Smoke/fake door test | Days | Medium | Test demand before building |
| Concierge test | Days | High | Manually deliver the service |
| Wizard of Oz | Days | High | Fake the backend, real frontend |
| Prototype test | 1-2 weeks | High | Test usability with interactive mockup |
| A/B test | 2+ weeks | Very High | Test with real users at scale |

Rule: Always pick the LIGHTEST test that produces meaningful signal. Don't build a prototype when a survey would suffice.

Step 4: Define Success Criteria

Before running the test, write down the following; a worked example follows the list:

  • Hypothesis (Gothelf Lean UX format): "We believe that [doing this/building this feature] for [these people] will achieve [this outcome]. We will know we are right when we see [this measurable signal]." The fourth clause ("we will know when") is critical: it defines success criteria upfront. Source: Gothelf & Seiden, Lean UX (2013; 3rd ed. 2021); the four-part format evolved across editions.
  • Method: Which test type and how
  • Success looks like: Specific, measurable outcome (e.g., ">60% of survey respondents say X")
  • Failure looks like: What would make us abandon this assumption
  • Sample size: How many data points needed for confidence
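Putting the four elements together, a hypothetical test plan for the usability assumption above might read as follows; all numbers and the method details are illustrative:

```yaml
# Illustrative test plan; all values are hypothetical.
assumption: "Users can complete onboarding in < 5 min"
hypothesis: >
  We believe that a streamlined onboarding flow for new trial users will
  achieve faster first-session activation. We will know we are right when
  we see at least 4 of 6 moderated test users finish in under 5 minutes.
method: "prototype test with an interactive mockup, 6 moderated sessions"
success_looks_like: ">= 4 of 6 users complete onboarding in < 5 min"
failure_looks_like: "<= 2 of 6 complete, or most abandon before finishing"
sample_size: 6
```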

Step 5: State Your Prediction (before running)

Before running the test, write down what you expect will happen and why. This forces scientific thinking — if you can't state a prediction, you don't understand the assumption well enough to test it.

  • I expect: [specific outcome, e.g., "4 of 6 users will complete onboarding in under 5 minutes"]
  • Because: [reasoning, e.g., "the flow has only 3 steps and uses familiar patterns"]
  • I'd be surprised if: [what would challenge your mental model]

After running, compare prediction to reality. The gap between prediction and outcome IS the learning.

Source: Rother (Toyota Kata) — stating predictions before experiments is the core scientific thinking habit.
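Recorded alongside the test plan from Step 4, the prediction might look like this sketch; the values are illustrative and mirror the bracketed examples above:

```yaml
# Illustrative prediction, written down before the test runs.
prediction:
  expect: "4 of 6 users will complete onboarding in under 5 minutes"
  because: "the flow has only 3 steps and uses familiar patterns"
  surprised_if: "most users fail, implying the flow is harder than it looks"
```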

Step 6: Run and Interpret

  • Run the test
  • Compare results to your prediction from Step 5 — note where reality differed
  • Record raw results
  • Update the confidence level (on a 0.1 to 0.9 scale, adapted from Itamar Gilad's Confidence Meter)
  • Update ICE score for the solution
  • If assumption validated: move to the next riskiest assumption. Update confidence in the relevant canvas entry (opportunities.yml, diamonds/active.yml) to reflect the validated assumption, typically +0.1 to +0.15. If the validated assumption originated from a stakeholder interview (source_class: internal_stakeholder with validated: false), set validated: true in the provenance block. This resolves the organizational mythology flag (Brown): the stakeholder belief is now confirmed by external evidence.
  • If assumption invalidated: pivot the solution or explore alternatives. Decrease confidence by 0.1-0.2 to reflect the failed assumption. If the invalidated assumption was a stakeholder belief: update the canvas entry to reflect reality, not the stakeholder's original claim. Note the divergence in the decision log — the gap between belief and reality is a learning.
  • Log the result in canvas/opportunities.yml under the solution's experiments
  • Always update diamonds/active.yml confidence to match the test outcome (a sketch of these updates follows this list)
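A hypothetical sketch of the resulting canvas updates. Only source_class and validated are named in this skill's text; every other field name here is an assumption, so check the repo for the actual schema:

```yaml
# canvas/opportunities.yml (sketch). Only source_class and validated are
# named in this skill; the other field names are hypothetical.
solutions:
  - name: "self-serve onboarding flow"
    confidence: 0.55            # was 0.45; +0.10 after a validated test
    provenance:
      source_class: internal_stakeholder
      validated: true           # flipped from false by external evidence
    experiments:
      - assumption: "Users can complete onboarding in < 5 min"
        method: "prototype test"
        prediction: "4 of 6 users complete in < 5 min"
        result: "5 of 6 users completed in < 5 min"
        outcome: validated
```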

Bias Warning

Before interpreting results, run /bias-check:

  • Confirmation bias: Are you seeing what you want to see?
  • Small sample: Is n large enough to be meaningful?
  • Selection bias: Did you test with representative users?