```bash
# Clone the full repository
git clone https://github.com/plurigrid/asi

# Or copy only this skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/plurigrid/asi "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/sonification-collaborative" ~/.claude/skills/plurigrid-asi-sonification-collaborative && rm -rf "$T"
```
`skills/sonification-collaborative/SKILL.md`

# Sonification: Collaborative Framework for Data as Music
- **Status:** 🎵 Production Ready
- **Trit:** +1 (PLUS: generative, emergent)
- **Pattern:** Erie Grammar + Unworlding Involution + Fluctuation-Driven Symmetry Breaking
- **Contribution Model:** Frame-Invariant (ι∘ι = id)

## Overview
Sonification is a collaborative skill for mapping data to audio using:
- Erie Grammar - Declarative specifications for auditory encodings
- Unworlding Involution - Frame-invariant self-structure (ι∘ι = id)
- Fluctuation-Driven Dynamics - Noise-driven emergence of symmetry breaking
- Community Co-authoring - Bidirectional skill contributions across agents
The skill inverts traditional hierarchies: instead of data visualization, we make data audible, and instead of solitary creation, we enable collective composition.
## Part 1: Erie Sonification Grammar

### Core Concepts
```
Sonification Spec = {
  Data,         // What to sonify
  Transform,    // Data preprocessing
  Tone,         // Sound quality (instrument)
  Encoding,     // Data → auditory mappings
  Composition   // Sequence/overlay multiple streams
}
```
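A minimal sketch of this five-part structure as Python types, matching the dict-shaped YAML specs used throughout this document; the class name and field types are illustrative, not a published API:

```python
from typing import Any, TypedDict

class SonificationSpec(TypedDict, total=False):
    """The five top-level parts of an Erie-style spec (sketch)."""
    data: dict[str, Any]         # what to sonify, e.g. {"url": "heights.json"}
    transform: list[dict]        # preprocessing steps: bin, aggregate, density, ...
    tone: dict[str, Any]         # sound quality, e.g. {"type": "oscillator", "form": "sine"}
    encoding: dict[str, dict]    # auditory channel -> field/scale mapping
    composition: dict[str, Any]  # sequence/overlay multiple streams
```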
### Auditory Channels (Encoding)
| Channel | Domain | Range | Intuition |
|---|---|---|---|
| Pitch | Quantitative | 20-20000 Hz | Frequency ↔ Magnitude |
| Loudness | Quantitative | 0-1 | Volume ↔ Intensity |
| Duration | Quantitative | 0-∞ seconds | Length ↔ Time |
| Panning | Quantitative | -1 (L) to +1 (R) | Asymmetry ↔ Spatial position |
| Tap Speed | Quantitative | 0-5 taps/sec | Density ↔ Activity |
| Modulation Index | Quantitative | 0-4 | Timbre warping ↔ Complexity |
| Time | Quantitative/Temporal | 0-∞ seconds | When ↔ Temporal axis |
| Speech | Categorical/Nominal | Text strings | What ↔ Annotation |
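As a rough illustration of how a quantitative value lands in one of these channel ranges, here is a minimal linear encoder; the clamping and `polarity` handling are assumptions for the sketch, not Erie's actual interpolation logic:

```python
def encode_channel(value, domain, range_, polarity="positive"):
    """Linearly map a data value onto an auditory channel range."""
    lo, hi = domain
    t = (value - lo) / (hi - lo)   # normalize to [0, 1]
    t = max(0.0, min(1.0, t))      # clamp out-of-domain values
    if polarity == "negative":
        t = 1.0 - t                # invert: larger data -> lower output
    r0, r1 = range_
    return r0 + t * (r1 - r0)

# A count of 50 in domain [1, 100] lands between A3 (220 Hz) and A5 (880 Hz)
pitch_hz = encode_channel(50, (1, 100), (220, 880))  # ≈ 546.7 Hz
```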
### Tone Designs (Instruments)
```
// Oscillator: pure sine/triangle/sawtooth waves
tone: { type: "oscillator", form: "sine" }

// FM synth: frequency modulation (warm, vocal-like)
tone: { type: "synth", form: "fm", carrier_freq: 440, modulator_freq: 100, modulation_index: 5 }

// AM synth: amplitude modulation (tremolo, shimmering)
tone: { type: "synth", form: "am", carrier_freq: 440, modulation_freq: 7 }

// Musical instruments: piano, violin, guitar (via sampling)
tone: { type: "instrument", name: "piano" }

// Noise: white, pink, brown noise (for texture)
tone: { type: "noise", color: "pink" }
```
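For intuition about the FM synth entry, a self-contained sketch that renders the modulated waveform directly with NumPy; a real backend (Tone.js via the Web Audio bridge) would do this for you, so the function is purely illustrative:

```python
import numpy as np

def fm_tone(carrier_freq=440.0, modulator_freq=100.0, modulation_index=5.0,
            duration=1.0, sample_rate=44100):
    """Render y(t) = sin(2π·f_c·t + I·sin(2π·f_m·t)) as raw samples."""
    t = np.linspace(0.0, duration, int(sample_rate * duration), endpoint=False)
    return np.sin(2 * np.pi * carrier_freq * t
                  + modulation_index * np.sin(2 * np.pi * modulator_freq * t))

samples = fm_tone()  # feed to an audio device or write to a WAV file
```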
### Simple Example: Auditory Histogram
spec: title: "Distribution of Heights" data: url: "heights.json" transform: - bin: field: height step: 5 as: height_bin - aggregate: op: count as: count group_by: height_bin tone: type: oscillator form: sine continued: false # Discrete (separate beeps, not continuous) encoding: time: field: height_bin scale: length: 5 # 5 second duration pitch: field: count scale: domain: [1, 100] range: [220, 880] # A3 to A5 polarity: positive loudness: field: count scale: domain: [1, 100] range: [0.2, 1.0]
## Part 2: Unworlding Involution Pattern

### Frame Invariance
The unworlding principle: Extract pattern independent of observation context.
```
Generator (Agent A)
    ↓
Emit: Color₁, Color₂, Color₃
    ↓
Observer (Agent B)
    ↓
Observe: same colors (frame-invariant!)
```

Key: whether A generates or B observes, the structure is identical.
Involution: ι(colors) gives a different ordering, but ι(ι(colors)) = original.
### Best Response Dynamics in Sonification

Each agent sonifies data as a best response to the others' sonifications:
```
Agent A sonifies data    → Color₁
Agent B observes         → best response → Color₂
Agent C observes (A, B)  → best response → Color₃

Fixed point (Nash equilibrium): (Color₁, Color₂, Color₃) unchanged
GF(3) conserved: trit(Color₁) + trit(Color₂) + trit(Color₃) ≡ 0 (mod 3)
```
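A minimal check of the conservation law, with trits supplied directly as integers (how trits are assigned to colors is outside this sketch):

```python
def gf3_conserved(trits: list[int]) -> bool:
    """Verify trit(Color₁) + trit(Color₂) + trit(Color₃) ≡ 0 (mod 3)."""
    return sum(trits) % 3 == 0

# Three agents with trits +1, -1, 0 satisfy the conservation law
assert gf3_conserved([+1, -1, 0])
```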
### Involution in Sonification Specs
```
# Spec S₁
{ data: user_engagement,
  encoding: { pitch: sentiment, loudness: volume },
  tone: { type: "synth", form: "fm" } }

# Apply involution ι (swap pitch ↔ loudness)
S₂ = ι(S₁) = { data: user_engagement,
               encoding: { pitch: volume, loudness: sentiment },
               tone: { type: "synth", form: "fm" } }

# Apply twice: ι(ι(S₁)) = S₁ ✓
```
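The same involution written as a hypothetical Python operator over dict specs, making ι∘ι = id executable (the function name `iota` and dict layout are illustrative):

```python
import copy

def iota(spec: dict) -> dict:
    """Involution ι: swap the pitch and loudness encodings of a spec."""
    s = copy.deepcopy(spec)
    s["encoding"]["pitch"], s["encoding"]["loudness"] = (
        s["encoding"]["loudness"], s["encoding"]["pitch"])
    return s

s1 = {"data": "user_engagement",
      "encoding": {"pitch": "sentiment", "loudness": "volume"},
      "tone": {"type": "synth", "form": "fm"}}

assert iota(iota(s1)) == s1  # ι∘ι = id
```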
## Part 3: Fluctuation-Driven Symmetry Breaking

### From Noise to Emergence
Stochastic Sonification: Add thermal noise to sonification parameters → Symmetry breaks → Emergent preferences.
```
Initial state (symmetric):
  All sounds equally valid
  Random preference order

Add fluctuation (temperature T):
  Each sonification perturbed by noise
  δpitch    ~ N(0, T²)
  δloudness ~ N(0, T²)

Result (symmetry broken):
  One sonification preferred
  An emergent aesthetic takes hold
  Preference clustering observed
```
### Langevin Dynamics for Sonification Parameters
```
# Deterministic preference learning (current)
dw/dt = -∇L(w)

# Stochastic Langevin version (new)
dw/dt = -∇L(w) + √(2βT) · ξ(t)
        └ gradient ┘ └── noise ──┘

# A phase transition occurs at the critical temperature T_c:
#   T < T_c: one attractor (preference settled)
#   T > T_c: multiple attractors (indecision)
#   T ≈ T_c: bifurcation (symmetry breaking happens)
```
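A sketch of these dynamics via Euler–Maruyama integration on an assumed double-well loss L(w) = (w² − 1)²/4 with attractors at w = ±1; the step size, temperature, and loss are illustrative choices, not part of the skill:

```python
import numpy as np

def langevin_step(w, grad_L, temperature, beta=1.0, dt=0.01, rng=None):
    """One Euler–Maruyama step of dw/dt = -∇L(w) + √(2βT)·ξ(t)."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, 1.0, size=np.shape(w))
    return w - grad_L(w) * dt + np.sqrt(2 * beta * temperature * dt) * noise

grad_L = lambda w: w * (w ** 2 - 1)  # gradient of the double-well loss

w = 0.0  # start at the symmetric (unstable) fixed point
for _ in range(1000):
    w = langevin_step(w, grad_L, temperature=0.05)
# At low T the walker settles near one well: noise has broken the symmetry
```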
### Sonification Bifurcations
Monitor when sonification preferences bifurcate:
```python
import numpy as np

def monitor_sonification_phase_transition(preferences, temperature,
                                          threshold=0.1, eps=1e-12):
    """Detect when sonification aesthetics bifurcate.

    preferences: array of preference weights over candidate sonifications.
    temperature: current noise level T (reported alongside the verdict).
    """
    variance = np.var(preferences)
    entropy = -np.sum(preferences * np.log(preferences + eps))

    # Phase-transition marker: high variance means one option dominates
    if variance > threshold:
        print(f"SYMMETRY BREAKING at T={temperature}: emergent preference "
              f"detected (entropy={entropy:.3f})")
        return "bifurcation_detected"
    return "symmetric"
```
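Hypothetical usage, sweeping the temperature and sampling preference vectors whose concentration loosens as T grows (the Dirichlet sampling is a stand-in for real preference data):

```python
import numpy as np

# Sweep the temperature; low T yields concentrated (broken-symmetry) preferences
for T in np.linspace(0.0, 1.0, 11):
    prefs = np.random.dirichlet([T + 0.01] * 4)  # near one-hot at low T
    print(f"T={T:.1f}:", monitor_sonification_phase_transition(prefs, T))
```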
## Part 4: Weekly Implementation Roadmap

### Week 1: Foundation (Erie Compiler)
Goal: Build minimal viable Erie sonification engine
#### Tasks
- [ ] Implement Erie spec parser (JSON → AST)
- [ ] Create Auditory Channel translators
- [ ] Build Web Audio API bridge (Tone.js wrapper)
- [ ] Write 5 basic example specs (histogram, scatter, line)
- [ ] Test: audio generation from simple specs

#### Deliverable
- Working Erie compiler producing audio queues
- Interactive web editor (read-only playback)
- Documentation: grammar walkthrough
### Week 2: Unworlding + Frame Invariance
Goal: Enable frame-invariant multi-agent sonification
#### Tasks
- [ ] Implement involution operator (spec ↔ spec)
- [ ] Create best-response sonification algorithm
- [ ] Build GF(3) conservation checker
- [ ] Multi-agent sonification orchestrator
- [ ] Test: 3-way (triadic) sonification harmony

#### Deliverable
- Frame-invariant sonification specs
- Collaborative sonification demos
- Nash equilibrium finder for audio specs
### Week 3: Stochastic Dynamics + Bifurcations
Goal: Noise-driven emergence in sonification
#### Tasks
- [ ] Add stochastic noise to sonification parameters
- [ ] Implement Langevin dynamics for preferences
- [ ] Create bifurcation detector
- [ ] Temperature-controlled sonification explorer
- [ ] Test: observe symmetry breaking in real time

#### Deliverable
- Interactive temperature slider (control fluctuation)
- Bifurcation diagrams for sonification specs
- Audio demonstrations of emergent preferences
### Week 4: Community Collaboration
Goal: Enable skill contributions from diverse agents
#### Tasks
- [ ] Design contribution protocol (PR-like, for specs)
- [ ] Create community spec registry (like npm)
- [ ] Build credit/attribution system (frame-invariant)
- [ ] Multi-agent co-authoring interface
- [ ] Conduct first community sonification session

#### Deliverable
- 5+ community-contributed sonification specs
- Curated gallery (Loud Numbers + Systems Sound quality)
- Clear contributor guidelines
### Week 5: Integration + Launch
Goal: Ship production-ready sonification skill
#### Tasks
- [ ] Integrate with Codex/Amp environment
- [ ] Performance benchmarking (latency < 100 ms)
- [ ] Accessibility audit (WCAG compliance)
- [ ] Comprehensive documentation + tutorials
- [ ] Community launch event

#### Deliverable
- Live sonification skill in .claude/skills/
- Tutorial: "Your First Sonification"
- Gallery: 20+ example specs (hand-curated + community)
## Part 5: Community Contribution Model

### Frame-Invariant Credit
Principle: Whether I contribute or you contribute, the skill improves equally.
```
ι(contribution) = involution preserves value

Author A submits spec S_A
Author B invokes ι(S_A) and extends it
Result: both credited, the skill grows bidirectionally

Git log shows:
  Co-Authored-By: Author A <email_a>
  Co-Authored-By: Author B <email_b>

ι(contributions) = frame-invariant credit
```
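One way to make the credit literally order-independent, sketched in Python; the trailer format matches the git log above, while the helper name is invented for illustration:

```python
def frame_invariant_credit(*contributors: str) -> list[str]:
    """Co-Authored-By trailers that ignore contribution order."""
    return [f"Co-Authored-By: {c}" for c in sorted(set(contributors))]

# Same credit whether A or B contributed first
assert (frame_invariant_credit("Author A <email_a>", "Author B <email_b>")
        == frame_invariant_credit("Author B <email_b>", "Author A <email_a>"))
```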
### Inclusion Criteria
✅ Welcome:
- Curious explorers (any skill level)
- Domain experts (music, accessibility, art)
- Data practitioners (visualization → sonification)
- Community builders (documentation, examples)
❌ Not welcome:
- Bad faith actors
- Spammers/trolls
- Commercial astroturf
- Harassment/discrimination
Golden rule: Build things that make sound more beautiful and accessible.
## Part 6: Technical Commands

### Basic Usage
```bash
# Parse an Erie spec
just sonify-compile < spec.yaml > audio_queue.json

# Play a sonification
just sonify-play spec.yaml

# Detect bifurcations
just sonify-bifurcate data.json --temperature 0.5

# Create a new spec interactively
just sonify-create --data heights.json --title "Height Distribution"

# Contribute a spec to the community
just sonify-contribute spec.yaml --author "Your Name" --description "What this does"
```
### Erie Spec Template
spec: title: "Your Sonification Title" description: "What does this sonify?" author: "Your Name" data: url: "data.json" # or values: [...] for inline transform: - bin: { field: variable, step: 5, as: binned } - aggregate: { op: count, as: count, group_by: binned } tone: type: "oscillator" # or "synth", "instrument", "noise" form: "sine" continued: false encoding: time: field: binned scale: { length: 5 } pitch: field: count scale: domain: [1, 100] range: [220, 880] polarity: positive loudness: field: count scale: domain: [1, 100] range: [0.2, 1.0]
## Part 7: Gallery of Examples

### Example 1: Stock Price Trend (Audio Narrative)
spec: title: "Apple Stock Price Narrative" data: url: "apple_stock.json" transform: - bin: { field: date, step: "month", as: month } - aggregate: { op: mean, field: price, as: avg_price, group_by: month } tone: type: "synth" form: "fm" continued: true encoding: time: field: month scale: { length: 30 } # 30 seconds for full year pitch: field: avg_price scale: domain: [50, 200] range: [220, 1320] # C3 to E6 polarity: positive loudness: field: volume scale: domain: [0, 1M] range: [0.3, 1.0]
### Example 2: Penguin Morphology (KDE Sonification)
spec: title: "Penguin Body Mass Distribution" data: url: "penguins.json" transform: - density: { field: body_mass, as: density, group_by: [species, island] } tone: type: "instrument" name: "violin" continued: true encoding: time: field: body_mass scale: domain: [2500, 6500] length: 6 # 6 seconds pitch: field: density scale: domain: [0, 0.001] range: [220, 880] pan: field: body_mass scale: domain: [2500, 6500] range: [-1, 1] # Left to right loudness: field: density scale: domain: [0, 0.001] range: [0.2, 1.0] composition: repeat: field: [species, island] by: [sequence, overlay] speech: true
### Example 3: Model Diagnostics (Residual Plot)
spec: title: "Linear Regression Residual Analysis" data: url: "residuals.json" tone: type: "synth" form: "fm" continued: false encoding: time: field: fitted_value scale: { length: 5 } pitch: field: abs_residual scale: domain: [0, 5] range: [220, 1320] pan: field: residual scale: domain: [-3, 3] range: [-1, 1] polarity: positive modulation: field: residual_abs scale: domain: [0, 5] range: [0, 4]
## Part 8: Integration with Music Topos

### Connecting to PLR Color Lattice
Map Neo-Riemannian harmony to sonification parameters:
```
# PLR transformations → sonification morphs
P (Parallel):     hue ±15°       → pitch shift ±1 semitone
L (Leading-tone): lightness ±10  → loudness ±0.2
R (Relative):     chroma ±20     → timbre morph (FM modulation index)

# Harmonic function → encoding strategy
T (Tonic):       pitch    → base anchor
S (Subdominant): loudness → preparation
D (Dominant):    panning  → tension/resolution
```
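A hypothetical encoding of this table in Python; the numeric deltas for P and L come from the mapping above, while the R delta is an assumed placeholder since the source gives no value:

```python
# Deltas for P and L follow the table above; the R delta is a placeholder
PLR_MORPHS = {
    "P": {"hue_deg": 15, "pitch_semitones": 1},    # Parallel
    "L": {"lightness": 10, "loudness_delta": 0.2}, # Leading-tone
    "R": {"chroma": 20, "fm_index_delta": 1.0},    # Relative (assumed delta)
}

def apply_parallel(pitch_hz: float, direction: int = +1) -> float:
    """Shift pitch by one equal-tempered semitone (factor 2^(1/12))."""
    semis = PLR_MORPHS["P"]["pitch_semitones"]
    return pitch_hz * 2.0 ** (direction * semis / 12)

# P applied to A4 (440 Hz) gives ≈ 466.16 Hz (A♯4/B♭4)
```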
### CRDT Bridge
Sonification specs as collaborative state:
```julia
# Spec as CRDT
struct SonificationSpec <: CRDT
    title::TextCRDT        # Collaborative title editing
    encodings::ORSet       # Add/remove encoding channels
    parameters::PNCounter  # Collaborative parameter tuning
    contributors::ORSet    # Frame-invariant credit
end

# Multi-agent co-authoring
spec_v1 = Agent_A.edit(spec, "pitch: [220, 880]")
spec_v2 = Agent_B.edit(spec, "loudness: [0.2, 1.0]")
merged  = merge(spec_v1, spec_v2)  # Commutative, associative
```
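A toy Python analogue of the merge law, using plain sets where the sketch above uses OR-Sets; it demonstrates commutativity of the merge, not a full CRDT implementation:

```python
from dataclasses import dataclass, field

@dataclass
class SpecState:
    """Toy stand-in: grow-only sets merge by union (removals not modeled)."""
    encodings: set = field(default_factory=set)
    contributors: set = field(default_factory=set)

def merge(a: SpecState, b: SpecState) -> SpecState:
    # Set union is commutative, associative, and idempotent: the CRDT merge laws
    return SpecState(a.encodings | b.encodings, a.contributors | b.contributors)

spec_v1 = SpecState({"pitch: [220, 880]"}, {"Agent_A"})
spec_v2 = SpecState({"loudness: [0.2, 1.0]"}, {"Agent_B"})
assert merge(spec_v1, spec_v2) == merge(spec_v2, spec_v1)  # order-independent
```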
## Part 9: Accessibility First

### Universal Design Principles
- Audio Description: Every sonification includes speech annotations
- Adjustable Parameters: Listeners can customize ranges to their hearing
- Multiple Encodings: Never encode data in only one channel
- No Color-Coding Assumption: Timbres distinguish categories, not just pitch
- Haptic Feedback: Optional tactile accompaniment for rhythm
### Testing
```bash
just sonify-a11y-check spec.yaml

# Validates:
# ✓ All categorical data has distinct timbres
# ✓ Quantitative ranges within human hearing (20-20000 Hz)
# ✓ Loudness variations < 20 dB (avoids sudden shocks)
# ✓ Speech descriptions available
```
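A sketch of what such a validator might check, mirroring two of the listed rules; the function, dict layout, and thresholds here are assumptions, not the actual `sonify-a11y-check` implementation:

```python
import math

def a11y_check(spec: dict) -> list[str]:
    """Lint a spec against two of the rules above (illustrative only)."""
    problems = []
    for channel, enc in spec.get("encoding", {}).items():
        lo, hi = enc.get("scale", {}).get("range", (0, 0))
        if channel == "pitch" and not (20 <= lo <= hi <= 20000):
            problems.append(f"pitch range [{lo}, {hi}] outside 20-20000 Hz")
        if channel == "loudness" and lo > 0 and 20 * math.log10(hi / lo) > 20:
            problems.append("loudness variation exceeds 20 dB")
    if not spec.get("description"):
        problems.append("no speech description available")
    return problems
```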
## Part 10: Research Directions

### Open Questions
- Bifurcation Learning: Can we design sonifications that teach about phase transitions?
- Multi-sensory Fusion: Combine sonification + haptics + color for synesthesia
- Real-time Streaming: How to sonify streaming data without losing context?
- Emotional Response: Which auditory encodings trigger intuitive understanding?
- Cultural Variation: Does sonification preference vary cross-culturally?
### Collaboration Opportunities
- 🎵 Musicians: Compose pieces from data
- 🧑‍🔬 Scientists: Sonify experimental data
- ♿ Accessibility advocates: Design inclusive audio interfaces
- 🎨 Artists: Create data-driven installations
- 📊 Data storytellers: Add sound to data narratives
## Manifesto
Sonification is democratic audition.
We believe sound should be as expressive as vision in data communication. We build tools for everyone—blind researchers exploring datasets, artists composing with data, scientists discovering patterns in noise. We build collaboratively, with credit flowing bidirectionally. We celebrate emergence over control, involutions over hierarchies, fluctuation over rigidity.
ι∘ι = id: We are frame-invariant, context-independent, radically open.
- **Status:** 🟢 Ready to install
- **Contributors:** [Community-driven]
- **License:** MIT + Commons Clause (for-benefit use)
- **Home:** `/Users/bob/.claude/skills/sonification-collaborative/`
🎵 Let's make data sing.
## Autopoietic Marginalia
The interaction IS the skill improving itself.
Every use of this skill is an opportunity for worlding:
- MEMORY (-1): Record what was learned
- REMEMBERING (0): Connect patterns to other skills
- WORLDING (+1): Evolve the skill based on use
Add Interaction Exemplars here as the skill is used.