
neuro-symbolic-bridge Skill

install
source · Clone the upstream repo
git clone https://github.com/plurigrid/asi
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/plurigrid/asi "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/neuro-symbolic-bridge" ~/.claude/skills/plurigrid-asi-neuro-symbolic-bridge && rm -rf "$T"
manifest: skills/neuro-symbolic-bridge/SKILL.md
source content


Bridge between symbolic (SICP, proofs) and subsymbolic (GFlowNets, neural) paradigms

The Gap

High-tension skill pairs (distance d ≈ 1.85):

SYMBOLIC                               SUBSYMBOLIC
    │                                       │
 sicp                               gflownet
 proofgeneral-narya                 forward-forward-learning
 lispsyntax-acset                   sheaf-laplacian-coordination
 dialectica                         cognitive-superposition
    │                                       │
    └──────────── d ≈ 1.85 ─────────────────┘

Resolution: DisCoPy String Diagrams

DisCoPy provides the bridge via string diagrams that are:

  • Symbolic: Compositional, typed, algebraic
  • Subsymbolic: Tensor network evaluation, differentiable
SYMBOLIC (SICP)          DisCoPy              SUBSYMBOLIC (Neural)
    │                       │                        │
 S-expressions  ──→  String Diagrams  ──→  Tensor Contractions
 Pure functions      Monoidal categories   Differentiable
 Type theory         Wiring diagrams       Gradient flow
    │                       │                        │
    └───────── d ≈ 0.9 ─────┴───────── d ≈ 0.9 ─────┘

Core Concept

String Diagrams as Interface

from discopy import tensor
from discopy.grammar import pregroup

# SYMBOLIC: Define types and morphisms compositionally
n = pregroup.Ty('n')  # noun type
s = pregroup.Ty('s')  # sentence type

# Words as morphisms (symbolic)
Alice = pregroup.Word('Alice', n)
loves = pregroup.Word('loves', n.r @ s @ n.l)
Bob = pregroup.Word('Bob', n)

# Compose symbolically
sentence = Alice @ loves @ Bob >> pregroup.Cup(n, n.r) @ s @ pregroup.Cup(n.l, n)

# SUBSYMBOLIC: Evaluate as tensor network
# (discopy.tensor.Functor maps types to dimensions and boxes to arrays;
#  exact constructor details vary across DisCoPy versions)
import numpy as np

F = tensor.Functor(
    ob={n: 2, s: 4},  # noun = 2-dim, sentence = 4-dim
    ar={
        Alice: [1, 0],                   # one-hot
        Bob: [0, 1],
        loves: np.random.rand(2, 4, 2),  # learned tensor, shape matching n.r @ s @ n.l
    }
)

# Evaluate (differentiable!)
result = F(sentence)  # Tensor contraction
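
Under the dimensions above (noun = 2, sentence = 4), the contraction F(sentence) amounts to a plain einsum: the cups wire Alice's and Bob's noun vectors into the adjoint legs of the loves tensor, leaving a 4-dim sentence vector. A NumPy sketch of that contraction (tensor values hypothetical):

```python
import numpy as np

alice = np.array([1.0, 0.0])     # n (2-dim, one-hot)
bob = np.array([0.0, 1.0])       # n (2-dim, one-hot)
loves = np.random.rand(2, 4, 2)  # n.r ⊗ s ⊗ n.l

# Cups contract each noun wire against the matching adjoint leg of the verb:
sentence_vec = np.einsum('i,isj,j->s', alice, loves, bob)
print(sentence_vec.shape)  # (4,) — one vector in sentence space
```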

Implementation

Symbolic to Subsymbolic Translation

from typing import Dict, Any, Callable
from discopy import Diagram, Ty, Box
import torch
import torch.nn as nn

class NeuroSymbolicBridge:
    """
    Bridge between symbolic S-expressions and neural networks.
    
    Resolves the sicp ↔ gflownet tension by:
    1. Parsing S-expressions to DisCoPy diagrams (symbolic)
    2. Assigning neural network semantics (subsymbolic)
    3. Backprop through diagram evaluation
    """
    
    def __init__(self):
        self.type_dims: Dict[str, int] = {}
        self.box_networks: Dict[str, nn.Module] = {}
    
    def register_type(self, name: str, dim: int):
        """Register symbolic type with vector space dimension."""
        self.type_dims[name] = dim
    
    def register_function(self, name: str, network: nn.Module):
        """Register symbolic function with neural implementation."""
        self.box_networks[name] = network
    
    def parse_sexpr(self, sexpr: str) -> Diagram:
        """
        Parse S-expression to DisCoPy diagram.
        
        (f (g x) y) → f ∘ (g ⊗ id) ∘ ...
        """
        from sexpdata import loads
        tree = loads(sexpr)
        return self._tree_to_diagram(tree)
    
    def _tree_to_diagram(self, tree) -> Diagram:
        """Convert parse tree to diagram."""
        from sexpdata import Symbol
        if isinstance(tree, (str, Symbol)):
            # Atom: sexpdata returns Symbol objects, so coerce to str for the Box name
            name = str(tree)
            return Box(name, Ty(), Ty(name))
        elif isinstance(tree, list):
            # Application: compose diagrams
            func = self._tree_to_diagram(tree[0])
            args = [self._tree_to_diagram(a) for a in tree[1:]]
            # Tensor args, then compose with function
            if args:
                tensored = args[0]
                for a in args[1:]:
                    tensored = tensored @ a
                return tensored >> func
            return func
        else:
            return Box(str(tree), Ty(), Ty('const'))
    
    def evaluate(self, diagram: Diagram) -> torch.Tensor:
        """
        Evaluate diagram as tensor network (differentiable).
        """
        # Map each box to a learnable module (used as its tensor semantics)
        def module_for_box(box: Box) -> nn.Module:
            if box.name in self.box_networks:
                # Registered neural network semantics
                return self.box_networks[box.name]
            # Otherwise: randomly initialised linear map (learnable)
            in_dim = self._dim_of_type(box.dom)
            out_dim = self._dim_of_type(box.cod)
            return nn.Linear(in_dim, out_dim)
        
        # Contract tensors according to diagram structure
        return self._contract(diagram, module_for_box)
    
    def _dim_of_type(self, ty: Ty) -> int:
        """Compute dimension of type."""
        if len(ty) == 0:
            return 1
        return sum(self.type_dims.get(str(t), 1) for t in ty)
    
    def _contract(self, diagram: Diagram,
                  module_fn: Callable[[Box], nn.Module]) -> torch.Tensor:
        """Contract diagram as tensor network."""
        # Simplified: use DisCoPy's tensor functor
        # (assumes a torch-backed functor that accepts modules as box semantics)
        from discopy.tensor import Functor as TensorFunctor
        
        # Collect every type appearing in the diagram, not just its boundary
        types = {str(t) for box in diagram.boxes for t in box.dom @ box.cod}
        types |= {str(t) for t in diagram.dom @ diagram.cod}
        F = TensorFunctor(
            ob={t: self.type_dims.get(t, 2) for t in types},
            ar={box.name: module_fn(box) for box in diagram.boxes}
        )
        return F(diagram)
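
The heavy lifting in parse_sexpr is delegated to sexpdata, but the nesting-to-composition idea can be shown with a dependency-free sketch: a tiny hand-rolled reader (hypothetical helper, not part of the bridge) that turns the same `(f (g x) y)` surface syntax into nested lists, which `_tree_to_diagram` would then fold into a diagram:

```python
def parse(src):
    """Parse a tiny S-expression string into nested Python lists of atoms."""
    tokens = src.replace("(", " ( ").replace(")", " ) ").split()

    def read(i):
        if tokens[i] == "(":
            out, i = [], i + 1
            while tokens[i] != ")":
                node, i = read(i)
                out.append(node)
            return out, i + 1
        return tokens[i], i + 1

    tree, _ = read(0)
    return tree

print(parse("(f (g x) y)"))  # ['f', ['g', 'x'], 'y']
```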

# Example: GFlowNet sampling as symbolic operation
class GFlowNetBox(nn.Module):
    """
    Neural box that samples proportionally to reward.
    Bridges symbolic composition with GFlowNet sampling.
    """
    
    def __init__(self, in_dim: int, out_dim: int, hidden: int = 64):
        super().__init__()
        self.flow = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_dim),
            nn.Softplus()  # Positive flows
        )
        self.partition = nn.Parameter(torch.ones(1))
    
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """Forward flow (for training)."""
        return self.flow(x)
    
    def sample(self, x: torch.Tensor) -> int:
        """Sample action proportional to flow."""
        flows = self.forward(x)
        probs = flows / flows.sum()
        return torch.multinomial(probs, 1).item()
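
The sample method's "proportional to flow" rule is inverse-CDF sampling over the normalised flows. A torch-free sketch (hypothetical helper) makes the invariant checkable: with flows [1, 3], the second action should be drawn about 75% of the time:

```python
import random

def sample_proportional(flows, rng):
    """Return an index sampled with probability proportional to its flow."""
    total = sum(flows)
    r = rng.random() * total
    acc = 0.0
    for i, f in enumerate(flows):
        acc += f
        if r < acc:
            return i
    return len(flows) - 1

rng = random.Random(0)
counts = [0, 0]
for _ in range(10_000):
    counts[sample_proportional([1.0, 3.0], rng)] += 1
print(counts[1] / sum(counts))  # ≈ 0.75
```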

Symbolic Proof → Neural Verifier

class ProofNeuralVerifier:
    """
    Bridge between Narya/Agda proofs and neural verification.
    
    Symbolic: Type derivation tree
    Subsymbolic: Neural network that learns proof patterns
    """
    
    def __init__(self, bridge: NeuroSymbolicBridge):
        self.bridge = bridge
        self.proof_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=256, nhead=8),
            num_layers=6
        )
    
    def proof_to_diagram(self, proof_term: str) -> Diagram:
        """Convert proof term to DisCoPy diagram."""
        # Proof terms are S-expressions
        return self.bridge.parse_sexpr(proof_term)
    
    def embed_proof(self, proof: Diagram) -> torch.Tensor:
        """Embed proof structure as vector."""
        # Walk diagram, collect box embeddings
        # (assumes each registered network exposes a `.weight` whose rows
        #  live in the encoder's 256-dim space, e.g. an nn.Embedding)
        embeddings = []
        for box in proof.boxes:
            net = self.bridge.box_networks.get(box.name)
            if net is not None and hasattr(net, 'weight'):
                embeddings.append(net.weight.mean(dim=0))
        
        if embeddings:
            stacked = torch.stack(embeddings)
            return self.proof_encoder(stacked.unsqueeze(0))
        return torch.zeros(1, 256)
    
    def verify_neural(self, proof: Diagram, claim: Diagram) -> float:
        """Neural verification score (subsymbolic)."""
        proof_emb = self.embed_proof(proof)
        claim_emb = self.embed_proof(claim)
        
        # Cosine similarity as "proof correctness" score
        return torch.cosine_similarity(proof_emb, claim_emb, dim=-1).item()
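
verify_neural reduces to cosine similarity between the two embeddings; as a sanity check of what that score means, here is the bare formula on toy vectors (identical embeddings score 1, orthogonal ones 0):

```python
import math

def cosine(u, v):
    """Cosine similarity between two same-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine([1.0, 0.0], [1.0, 0.0]))  # 1.0: identical embeddings, maximal score
print(cosine([1.0, 0.0], [0.0, 1.0]))  # 0.0: orthogonal embeddings, no support
```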

Gay.jl Color Integration

BRIDGE_COLORS = {
    'symbolic': '#CF6971',    # Stream 3 (SICP red)
    'subsymbolic': '#63B6F0', # Stream 3 (neural blue)
    'bridge': '#89DF91',      # Stream 3 (DisCoPy green)
    'hybrid': '#E6F463',      # Stream 2 (integrated yellow)
}

def color_component(component_type: str) -> str:
    """Color based on paradigm."""
    if component_type in ['sexpr', 'proof', 'type']:
        return BRIDGE_COLORS['symbolic']
    elif component_type in ['tensor', 'network', 'gradient']:
        return BRIDGE_COLORS['subsymbolic']
    elif component_type in ['diagram', 'functor', 'discopy']:
        return BRIDGE_COLORS['bridge']
    else:
        return BRIDGE_COLORS['hybrid']

Triangle Inequality Restoration

With this bridge:

d(sicp, neuro-symbolic-bridge) ≈ 0.9
d(neuro-symbolic-bridge, gflownet) ≈ 0.9

Therefore, by the triangle inequality:
d(sicp, gflownet) ≤ 0.9 + 0.9 = 1.8 ✓
(The original direct distance of 1.859 violated this bound; routing through the bridge caps it at 1.8.)
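
The arithmetic is small enough to check directly (distances taken from the text above):

```python
d_direct = 1.859   # original sicp ↔ gflownet distance
d_via = 0.9 + 0.9  # path through neuro-symbolic-bridge

print(d_via)             # 1.8
print(d_direct > d_via)  # True: the bridge path is shorter, restoring the bound
```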

Use Cases

1. Neural Program Synthesis

# Symbolic: program syntax
program_sexpr = "(lambda (x) (+ x 1))"
diagram = bridge.parse_sexpr(program_sexpr)

# Subsymbolic: learned semantics
output = bridge.evaluate(diagram)  # Differentiable!

# Train to match examples
loss = torch.nn.functional.mse_loss(output, target)
loss.backward()  # Gradients flow through the symbolic structure
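
The claim that gradients flow through the symbolic structure can be illustrated without torch: compose a "neural" leaf g with the symbolic increment from the program above and differentiate the composite numerically; the leaf's weight shows up in the gradient (a minimal sketch, not the bridge's actual autograd path):

```python
def numeric_grad(f, x, eps=1e-6):
    """Central-difference derivative of f at x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

w = 3.0                        # "learned" weight of the neural leaf g
g = lambda x: w * x            # subsymbolic leaf
f = lambda y: y + 1.0          # symbolic (+ _ 1), as in the program above
composite = lambda x: f(g(x))  # composition follows the parse tree

print(numeric_grad(composite, 2.0))  # ≈ 3.0: the leaf's weight survives composition
```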

2. GFlowNet + Compositional Structure

# Symbolic composition of GFlowNet boxes
molecule_diagram = (
    Box('atom_C', Ty(), Ty('atom')) @ 
    Box('atom_O', Ty(), Ty('atom')) >>
    Box('bond', Ty('atom', 'atom'), Ty('molecule'))
)

# Each box is a GFlowNet that samples proportionally to reward
bridge.register_function('atom_C', GFlowNetBox(1, 6))
bridge.register_function('atom_O', GFlowNetBox(1, 8))
bridge.register_function('bond', GFlowNetBox(14, 32))  # in_dim 14 = 6 + 8 (concatenated atom features)

# Sample molecule via symbolic structure
molecule = bridge.evaluate(molecule_diagram)
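
Why in_dim = 14 for bond: _dim_of_type sums wire dimensions, so tensoring atom_C's 6-dim output with atom_O's 8-dim output yields a 14-dim input. A small check of that convention (hypothetical helper, mirroring the dims above):

```python
boxes = {
    "atom_C": (1, 6),   # (in_dim, out_dim) of each GFlowNetBox
    "atom_O": (1, 8),
    "bond":   (14, 32),
}

def compose_dims(left, right, then):
    """Dims of (left @ right) >> then, assuming tensoring concatenates features."""
    mid = left[1] + right[1]
    assert mid == then[0], "codomain of the tensor must match bond's domain"
    return (left[0] + right[0], then[1])

print(compose_dims(boxes["atom_C"], boxes["atom_O"], boxes["bond"]))  # (2, 32)
```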

3. SICP Interpreter with Neural Components

# Hybrid evaluator (sketch: is_symbolic/is_neural/eval_symbolic are dispatch
# helpers supplied by the host interpreter; `bridge` is a NeuroSymbolicBridge)
class HybridEvaluator:
    def eval(self, expr, env):
        if self.is_symbolic(expr):
            # Standard SICP evaluation
            return self.eval_symbolic(expr, env)
        elif self.is_neural(expr):
            # DisCoPy diagram evaluation
            return bridge.evaluate(expr)
        else:
            # Hybrid: symbolic structure, neural leaves
            diagram = bridge.parse_sexpr(str(expr))
            return bridge.evaluate(diagram)

Neighbor Skills

  • sicp: Pure symbolic computation
  • gflownet: Pure subsymbolic sampling
  • discopy: String diagrams library
  • lispsyntax-acset: S-expressions as data
  • forward-forward-learning: Local neural learning
  • proofgeneral-narya: Proof term manipulation


End-of-Skill Interface

Commands

# Parse S-expression to diagram
python neuro_symbolic_bridge.py --parse "(f (g x) y)"

# Evaluate with neural semantics
python neuro_symbolic_bridge.py --eval "(lambda (x) x)" --input "[1,2,3]"

# Train hybrid model
python neuro_symbolic_bridge.py --train examples.json --epochs 100

Autopoietic Marginalia

The interaction IS the skill improving itself.

Every use of this skill is an opportunity for worlding:

  • MEMORY (-1): Record what was learned
  • REMEMBERING (0): Connect patterns to other skills
  • WORLDING (+1): Evolve the skill based on use

Add Interaction Exemplars here as the skill is used.