Personal_AI_Infrastructure Science

Hypothesis-test-analyze cycles for systematic problem-solving: the meta-skill governing all others. Includes define goal, generate hypotheses, design experiment, measure results, analyze results, iterate, full cycle, quick diagnosis, and structured investigation. USE WHEN think about, figure out, try approaches, experiment with, iterate on, improve, optimize, define goal, generate hypotheses, design experiment, measure results, analyze results, full cycle, quick diagnosis, structured investigation, science, hypothesis.

install
source · Clone the upstream repo
git clone https://github.com/danielmiessler/Personal_AI_Infrastructure
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/danielmiessler/Personal_AI_Infrastructure "$T" && mkdir -p ~/.claude/skills && cp -r "$T/Releases/v4.0.0/.claude/skills/Thinking/Science" ~/.claude/skills/danielmiessler-personal-ai-infrastructure-science-25fd4d && rm -rf "$T"
manifest: Releases/v4.0.0/.claude/skills/Thinking/Science/SKILL.md
source content

Customization

Before executing, check for user customizations at:

~/.claude/PAI/USER/SKILLCUSTOMIZATIONS/Science/

If this directory exists, load and apply any PREFERENCES.md, configurations, or resources found there. These override default behavior. If the directory does not exist, proceed with skill defaults.
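
As a minimal sketch, the lookup described above might look like the following. This is illustrative Python, not part of the skill; the function name and return shape are assumptions, and only the directory path comes from the text:

```python
from pathlib import Path
from typing import Optional

def load_customizations(base: Path) -> Optional[str]:
    """Return the contents of PREFERENCES.md when the customization
    directory exists; otherwise return None (skill defaults apply)."""
    prefs = base / "PREFERENCES.md"
    if base.is_dir() and prefs.is_file():
        return prefs.read_text()
    return None

# The skill checks this fixed location:
SCIENCE_CUSTOM_DIR = Path.home() / ".claude/PAI/USER/SKILLCUSTOMIZATIONS/Science"
```

The key design point from the text: customizations override defaults when present, and their absence is not an error.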

🚨 MANDATORY: Voice Notification (REQUIRED BEFORE ANY ACTION)

You MUST send this notification BEFORE doing anything else when this skill is invoked.

  1. Send voice notification:

    curl -s -X POST http://localhost:8888/notify \
      -H "Content-Type: application/json" \
      -d '{"message": "Running the WORKFLOWNAME workflow in the Science skill to ACTION"}' \
      > /dev/null 2>&1 &
    
  2. Output text notification:

    Running the **WorkflowName** workflow in the **Science** skill to ACTION...
    

This is not optional. Execute this curl command immediately upon skill invocation.

Science - The Universal Algorithm

The scientific method applied to everything. The meta-skill that governs all other skills.

The Universal Cycle

GOAL -----> What does success look like?
   |
OBSERVE --> What is the current state?
   |
HYPOTHESIZE -> What might work? (Generate MULTIPLE)
   |
EXPERIMENT -> Design and run the test
   |
MEASURE --> What happened? (Data collection)
   |
ANALYZE --> How does it compare to the goal?
   |
ITERATE --> Adjust hypothesis and repeat
   |
   +------> Back to HYPOTHESIZE

The goal is CRITICAL. Without clear success criteria, you cannot judge results.
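
The cycle above can be sketched in code. This is an illustrative Python rendering, not the skill's implementation; every callback name is a hypothetical stand-in for the corresponding phase:

```python
def science_cycle(goal, observe, hypothesize, run_experiment, measure,
                  max_iterations=5):
    """Iterate hypothesize -> experiment -> measure until a result
    meets the goal's explicit success criteria."""
    state = observe()                     # OBSERVE: current state
    for _ in range(max_iterations):
        candidates = hypothesize(state)   # HYPOTHESIZE: generate MULTIPLE
        assert len(candidates) >= 3, "Hypothesis plurality: need at least 3"
        for hypothesis in candidates:
            result = measure(run_experiment(hypothesis))  # EXPERIMENT + MEASURE
            if goal(result):              # ANALYZE: compare to the goal
                return hypothesis, result
            state = result                # failed experiments still teach
    return None, state                    # out of iterations: ship what you learned
```

Note that `goal` is a predicate, not a wish: without a testable success criterion, the loop has no exit condition, which is exactly the point the text makes.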


Workflow Routing

Output when executing:

Running the **WorkflowName** workflow in the **Science** skill to ACTION...

Core Workflows

| Trigger | Workflow |
| --- | --- |
| "define the goal", "what are we trying to achieve" | Workflows/DefineGoal.md |
| "what might work", "ideas", "hypotheses" | Workflows/GenerateHypotheses.md |
| "how do we test", "experiment design" | Workflows/DesignExperiment.md |
| "what happened", "measure", "results" | Workflows/MeasureResults.md |
| "analyze", "compare to goal" | Workflows/AnalyzeResults.md |
| "iterate", "try again", "next cycle" | Workflows/Iterate.md |
| Full structured cycle | Workflows/FullCycle.md |
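
The trigger-to-workflow mapping above can be sketched as a simple lookup. The phrases and file paths come from the Core Workflows table; the substring-matching logic is a hypothetical illustration, not how the skill actually routes:

```python
# Trigger phrases -> workflow files, as listed in the Core Workflows table.
CORE_WORKFLOWS = {
    ("define the goal", "what are we trying to achieve"): "Workflows/DefineGoal.md",
    ("what might work", "ideas", "hypotheses"): "Workflows/GenerateHypotheses.md",
    ("how do we test", "experiment design"): "Workflows/DesignExperiment.md",
    ("what happened", "measure", "results"): "Workflows/MeasureResults.md",
    ("analyze", "compare to goal"): "Workflows/AnalyzeResults.md",
    ("iterate", "try again", "next cycle"): "Workflows/Iterate.md",
}

def route(request: str, default: str = "Workflows/FullCycle.md") -> str:
    """Pick the first workflow whose trigger phrase appears in the request."""
    text = request.lower()
    for triggers, workflow in CORE_WORKFLOWS.items():
        if any(phrase in text for phrase in triggers):
            return workflow
    return default  # no specific trigger: run the full structured cycle
```

Falling back to FullCycle.md mirrors the table: a request with no specific trigger gets the full structured cycle.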

Diagnostic Workflows

| Trigger | Workflow |
| --- | --- |
| Quick debugging (15-min rule) | Workflows/QuickDiagnosis.md |
| Complex investigation | Workflows/StructuredInvestigation.md |

Resource Index

| Resource | Description |
| --- | --- |
| METHODOLOGY.md | Deep dive into each phase |
| Protocol.md | How skills implement Science |
| Templates.md | Goal, Hypothesis, Experiment, Results templates |
| Examples.md | Worked examples across scales |

Domain Applications

| Domain | Manifestation | Related Skill |
| --- | --- | --- |
| Coding | TDD (Red-Green-Refactor) | Development |
| Products | MVP -> Measure -> Iterate | Development |
| Research | Question -> Study -> Analyze | Research |
| Prompts | Prompt -> Eval -> Iterate | Evals |
| Decisions | Options -> Council -> Choose | Council |

Scale of Application

| Level | Cycle Time | Example |
| --- | --- | --- |
| Micro | Minutes | TDD: test, code, refactor |
| Meso | Hours-Days | Feature: spec, implement, validate |
| Macro | Weeks-Months | Product: MVP, launch, measure PMF |

Integration Points

| Phase | Skills to Invoke |
| --- | --- |
| Goal | Council for validation |
| Observe | Research for context |
| Hypothesize | Council for ideas, RedTeam for stress-test |
| Experiment | Development (Worktrees) for parallel tests |
| Measure | Evals for structured measurement |
| Analyze | Council for multi-perspective analysis |

Key Principles (Quick Reference)

  1. Goal-First - Define success before starting
  2. Hypothesis Plurality - NEVER just one idea (minimum 3)
  3. Minimum Viable Experiments - Smallest test that teaches
  4. Falsifiability - Experiments must be able to fail
  5. Measure What Matters - Only goal-relevant data
  6. Honest Analysis - Compare to goal, not expectations
  7. Rapid Iteration - Cycle speed > perfect experiments
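
Principles 1 through 4 can be made concrete with a small sketch. The record fields and function names below are hypothetical illustrations, not taken from the skill's Templates.md:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Experiment:
    hypothesis: str
    success_criterion: str    # Goal-First: defined before running (principle 1)
    failure_condition: str    # Falsifiability: how this could be disproved (principle 4)
    result: Optional[str] = None

def ready_to_run(experiments: List[Experiment]) -> bool:
    """Check hypothesis plurality and falsifiability before experimenting."""
    return len(experiments) >= 3 and all(e.failure_condition for e in experiments)
```

A batch with fewer than three hypotheses, or with an experiment that cannot fail, is rejected before any work starts, which is the cheap place to catch both mistakes.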

Anti-Patterns

| Bad | Good |
| --- | --- |
| "Make it better" | "Reduce load time from 3s to 1s" |
| "I think X will work" | "Here are 3 approaches: X, Y, Z" |
| "Prove I'm right" | "Design a test that could disprove" |
| "Pretend failure didn't happen" | "What did we learn?" |
| "Keep experimenting forever" | "Ship and learn from production" |

Quick Start

  1. Goal - What does success look like?
  2. Observe - What do we know?
  3. Hypothesize - At least 3 ideas
  4. Experiment - Minimum viable tests
  5. Measure - Collect goal-relevant data
  6. Analyze - Compare to success criteria
  7. Iterate - Adjust and repeat

The answer emerges from the cycle, not from guessing.