Claude-skill-registry divergence-control
Keep multiple instances aligned while allowing productive variance
install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/divergence-control" ~/.claude/skills/majiayu000-claude-skill-registry-divergence-control && rm -rf "$T"
manifest:
skills/data/divergence-control/SKILL.md · source content
Divergence Control
Purpose
Instances naturally diverge as they think different thoughts. This skill manages that divergence:
- Prevent wild deviation (instances completely disagreeing)
- Allow productive variance (different approaches to same problem)
- Maintain coherence (all instances solving related problems)
The Problem
Too much control: All instances think identically (no benefit)
Too little control: Instances diverge so much they're solving different problems.
Just right: Instances explore different solution paths while staying on the same problem.
Core Pattern
Instance 1: Path A ─┐
Instance 2: Path B ─┼─ Stay coherent
Instance 3: Path C ─┤   (same problem,
Instance 4: Path D ─┘    different approaches)
Key Features
- Problem Anchoring - All instances address the same core question
- Variance Measurement - How different ARE the outputs? (sketched in code after this list)
- Coherence Thresholds - How different is TOO different?
- Periodic Synchronization - "Check in, are we still on the same track?"
- Guided Divergence - "Here's a direction we haven't explored yet"
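
A minimal sketch, in Python, of how the variance-measurement and coherence-threshold features above might be wired together. The word-set Jaccard metric, the function names, and the 0.5 cutoff are illustrative assumptions, not the skill's actual implementation:

```python
from itertools import combinations

def output_divergence(outputs: list[str]) -> float:
    """Estimate divergence as 1 minus the mean pairwise Jaccard similarity
    of the instances' output word sets (an illustrative metric only)."""
    if len(outputs) < 2:
        return 0.0
    word_sets = [set(text.lower().split()) for text in outputs]
    similarities = []
    for a, b in combinations(word_sets, 2):
        union = a | b
        similarities.append(len(a & b) / len(union) if union else 1.0)
    return 1.0 - sum(similarities) / len(similarities)

def within_coherence(outputs: list[str], max_divergence: float = 0.5) -> bool:
    """Coherence threshold: outputs are still 'on the same track' while
    divergence stays at or below the cutoff."""
    return output_divergence(outputs) <= max_divergence

# Two instances taking different approaches to the same caching problem.
outputs = [
    "cache results in redis and invalidate on write",
    "cache results in memory and invalidate on a timer",
]
print(f"divergence: {output_divergence(outputs):.2f}, coherent: {within_coherence(outputs)}")
```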
Implementation
See:
.claude/skills/divergence-control/divergence_manager.py
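
The referenced file is not reproduced here, so the following is only a rough sketch of the control loop the skill describes: problem anchoring, periodic synchronization, and guided divergence. All class, method, and message names below are assumptions for illustration and do not reflect the API of divergence_manager.py:

```python
from dataclasses import dataclass, field

@dataclass
class DivergenceManager:
    """Illustrative skeleton; names are hypothetical."""
    problem_statement: str                      # problem anchoring: the shared core question
    explored_directions: set[str] = field(default_factory=set)

    def anchor_prompt(self, instance_id: int) -> str:
        """Every instance is addressed with the same core problem."""
        return f"[instance {instance_id}] Core problem: {self.problem_statement}"

    def synchronize(self, summaries: dict[int, str]) -> str:
        """Periodic synchronization: build a check-in message from instance summaries."""
        joined = "\n".join(f"- instance {i}: {s}" for i, s in sorted(summaries.items()))
        return f"Check in, are we still on the same track?\n{joined}"

    def guide(self, direction: str) -> str:
        """Guided divergence: steer one instance toward an unexplored direction."""
        if direction in self.explored_directions:
            return f"Direction '{direction}' is already being explored; pick another path."
        self.explored_directions.add(direction)
        return f"Here's a direction we haven't explored yet: {direction}"
```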
The Balance
- 0% divergence = Waste of resources
- 100% divergence = Incoherent output
- 30-50% divergence = Optimal exploration
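
Expressed as code, the balance is a simple band check. Only the 30-50% optimum comes from the list above; the labels and the handling of values outside that band are assumptions:

```python
def divergence_band(divergence: float) -> str:
    """Map a divergence fraction (0.0-1.0) to the bands listed above."""
    if divergence < 0.30:
        return "too aligned: instances are duplicating work"
    if divergence <= 0.50:
        return "optimal exploration"
    return "too divergent: output risks becoming incoherent"

print(divergence_band(0.42))  # -> optimal exploration
```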
Payment Anchor
DOGE: DC8HBTfn7Ym3UxB2YSsXjuLxTi8HvogwkV