Asi omniglot

Omniglot Challenge Skill

Install

Source · Clone the upstream repo:

    git clone https://github.com/plurigrid/asi

Claude Code · Install into ~/.claude/skills/:

    T=$(mktemp -d) && git clone --depth=1 https://github.com/plurigrid/asi "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/omniglot" ~/.claude/skills/plurigrid-asi-omniglot && rm -rf "$T"

Manifest: skills/omniglot/SKILL.md
source content

Omniglot Challenge Skill

The Lake-Salakhutdinov-Tenenbaum challenge for human-level concept learning through probabilistic program induction. Learn new concepts from a single example.

The Challenge

Five tasks that a single model must perform at human level:

| Task | Description | BPL Performance |
|---|---|---|
| One-shot classification | Identify a character from 20 alternatives | 3.3% error (human: 4.5%) |
| Parsing | Infer stroke decomposition | 59% ID level |
| Exemplar generation | Generate new examples of a character | 52% ID level |
| Concept generation (constrained) | Create new characters for an alphabet | 49% ID level |
| Concept generation (unconstrained) | Create novel characters from scratch | 51% ID level |

ID level = Visual Turing Test identification rate (50% = indistinguishable from human)
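The ID-level metric can be made concrete with a toy sketch. The judge data and the `id_level` helper below are illustrative inventions, not from the paper: each judge sees a machine/human pair and tries to pick the machine-generated item, so 50% correct means judges are at chance, i.e. the machine output is indistinguishable from human output.

```python
# Toy sketch (hypothetical data): the "ID level" is the fraction of trials
# in which human judges correctly pick the machine-generated item out of a
# machine/human pair. 50% means judges are guessing at chance.

def id_level(judgments):
    """judgments: list of booleans, True = judge identified the machine."""
    return sum(judgments) / len(judgments)

# 30 judges, 15 spot the machine -> 50% = indistinguishable from human
trials = [True] * 15 + [False] * 15
print(f"{id_level(trials):.0%}")  # → 50%
```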

Core Insight

"People learning new concepts can often generalize successfully from just a single example... We present a computational model that represents concepts as simple programs that best explain observed examples under a Bayesian criterion." — Lake, Salakhutdinov, Tenenbaum (Science, 2015)

Bayesian Program Learning (BPL)

Three key ingredients:

  1. Compositionality: Concepts built from simpler primitives (strokes, parts)
  2. Causality: Captures how data was actually generated (motor programs)
  3. Learning to learn: Prior experience accelerates new concept acquisition
```
Character = Program(strokes, relations, noise)
P(concept | example) ∝ P(example | concept) × P(concept)
```
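The proportionality above can be illustrated with a toy discrete example. The two candidate "stroke programs" and their priors and likelihoods are made-up numbers; actual BPL inference searches over full motor programs, not a two-element dictionary.

```python
# Toy sketch of P(concept | example) ∝ P(example | concept) × P(concept),
# with made-up numbers for two hypothetical candidate stroke programs.
priors      = {"two_strokes": 0.6,  "one_stroke": 0.4}   # P(concept)
likelihoods = {"two_strokes": 0.02, "one_stroke": 0.01}  # P(example | concept)

unnorm = {c: likelihoods[c] * priors[c] for c in priors}  # joint scores
Z = sum(unnorm.values())                                  # normalizer
posterior = {c: v / Z for c, v in unnorm.items()}
print(posterior)  # two_strokes ≈ 0.75, one_stroke ≈ 0.25
```

The higher-prior, higher-likelihood program wins, exactly the "simple programs that best explain observed examples under a Bayesian criterion" of the quote above.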

Dataset

  • 50 alphabets from the world's writing systems
  • 1,623 characters total
  • 20 examples per character (different writers)
  • Stroke data included (not just images)
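The dataset unpacks to an alphabet/character/image directory tree, as in the official repo. Below is a minimal sketch of the class-counting walk you would run over `images_background/` after downloading; it builds a fake miniature layout in a temp directory so the example is self-contained.

```python
# Sketch: the official dataset lays out alphabet/character/image.png.
# We build a tiny fake layout and count character classes, the same
# walk you would run over the real images_background/ directory.
import pathlib, tempfile

root = pathlib.Path(tempfile.mkdtemp())
for alphabet in ("Latin", "Greek"):           # toy stand-in alphabets
    for ch in range(3):                       # 3 toy characters each
        d = root / alphabet / f"character{ch:02d}"
        d.mkdir(parents=True)
        (d / "0001_01.png").touch()           # one writer's drawing

n_classes = sum(1 for a in root.iterdir() for c in a.iterdir())
print(n_classes)  # → 6 character classes across 2 alphabets
```

Run over the real background set, the same walk yields the 964 classes in the table below.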

Splits

| Split | Alphabets | Classes | Purpose |
|---|---|---|---|
| Original | 30 background | 964 | Standard learning-to-learn |
| Minimal | 5 background | 146 | Human-like prior experience |
| Augmented | 40 (4× rotation) | 4,800 | Extended meta-learning |
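The augmented split's "4× rotation" treats each 90° rotation of a character as a distinct class. A minimal numpy sketch on a toy 2×2 grid (illustrative only):

```python
# Sketch of the 4x rotation augmentation: each 90-degree rotation of a
# character grid becomes a distinct class, multiplying classes by 4.
import numpy as np

char = np.array([[1, 0],
                 [1, 1]])  # toy 2x2 "character"
rotations = [np.rot90(char, k) for k in range(4)]
print(len(rotations))  # → 4 classes from one character

# Rotations are genuinely distinct inputs for an asymmetric character:
assert not np.array_equal(rotations[0], rotations[1])
```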

Why Current ML Falls Short

From the 3-year progress report (2019):

| Model | Within-alphabet | Minimal split |
|---|---|---|
| BPL | 3.3% | 4.2% |
| Humans | 4.5% | — |
| Prototypical Net | 13.7% | 30.1% |
| RCN | 7.3% | — |
| Siamese Net | 8.0% | — |

"Recent approaches are still far from human-like concept learning on Omniglot, a challenge that requires performing many tasks with a single model."

The Real Challenge

NOT just one-shot classification. The challenge is:

  1. Single model for ALL five tasks
  2. Minimal background training (5 alphabets, like humans)
  3. Using stroke/motor program data, not just images
  4. Compositionality and causality, not "learning from scratch"

References

  • Lake, Salakhutdinov, Tenenbaum (2015). "Human-level concept learning through probabilistic program induction." Science 350:1332-1338.

  • Lake, Salakhutdinov, Tenenbaum (2019). "The Omniglot challenge: a 3-year progress report." Current Opinion in Behavioral Sciences 29:97-104.

  • Dataset: github.com/brendenlake/omniglot

Trit Assignment

  • Trit: 0 (ERGODIC - coordinator)
  • GF(3) Role: Bridges generative and discriminative approaches

Local Implementation

```python
# ~/ies/worlding_skill_omniglot_entropy.py (local module, not on PyPI)
from worlding_skill_omniglot_entropy import (
    ParallelOmniglotLearner,
    OmniglotCharacterFamily,
    BidirectionalCharacterLearner,  # Read ↔ Write coupling
)
```

Connection to Active Inference

From Parr-Friston (Active Inference):

"Tenenbaum et al. (2006) established structure learning as a key objective in computational modeling and cognitive science."

BPL shares with Active Inference:

  • Generative models of sensory data
  • Hierarchical priors learned from experience
  • Inference as explanation of observations

Key Quote

"Hofstadter famously argued that learning to recognize the characters in all the ways that people do contains most of the fundamental challenges of AI."

Gay.jl Colors (seed 2015)

| Task | Color |
|---|---|
| Classification | #9858E7 |
| Parsing | #A81AA7 |
| Exemplar Gen | #BCD86F |
| Concept Gen (C) | #F283CD |
| Concept Gen (U) | #188DB2 |

Autopoietic Marginalia

The interaction IS the skill improving itself.

Every use of this skill is an opportunity for worlding:

  • MEMORY (-1): Record what was learned
  • REMEMBERING (0): Connect patterns to other skills
  • WORLDING (+1): Evolve the skill based on use

Add Interaction Exemplars here as the skill is used.