Asi libghostty-recording

libghostty-vt Recording Skill 📹

install
source · Clone the upstream repo
git clone https://github.com/plurigrid/asi
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/plurigrid/asi "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/libghostty-recording" ~/.claude/skills/plurigrid-asi-libghostty-recording && rm -rf "$T"
manifest: skills/libghostty-recording/SKILL.md
source content

libghostty-vt Recording Skill 📹

Trit: 0 (ERGODIC - Coordinator)

GF(3) Triad:

asciinema (-1) ⊗ libghostty-recording (0) ⊗ vhs (+1) = 0

Overview

Record, stream, and replay libghostty-vt terminal sessions for documentation, debugging, and LLM training.

Recording Methods

1. Asciinema (Lightweight .cast)

# Record session
asciinema rec ~/recordings/session-$(date +%Y%m%d_%H%M%S).cast

# Auto-record all sessions (add to .zshrc)
asciinema rec --append ~/recordings/daily-$(date +%Y%m%d).cast

# Record and upload to asciinema.org (omit the filename;
# asciinema prompts to upload when the recording ends)
asciinema rec -t "libghostty demo"

Pros: Compact, text-based, searchable, LLM-friendly
Cons: No video export
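The .cast format above is line-delimited JSON per the asciicast v2 spec: one JSON header object, then one `[timestamp, type, data]` array per event. A minimal sketch of writing and reading such a file (the sample events are illustrative):

```python
import json

# Write a minimal asciicast v2 file: a JSON header line,
# then one JSON array per event: [timestamp, "o"/"i", data].
header = {"version": 2, "width": 80, "height": 24}
events = [
    [0.1, "o", "$ "],
    [0.5, "i", "echo hi\r"],
    [0.6, "o", "hi\r\n"],
]

with open("demo.cast", "w") as f:
    f.write(json.dumps(header) + "\n")
    for e in events:
        f.write(json.dumps(e) + "\n")

# Read it back: the first line is the header, the rest are events.
with open("demo.cast") as f:
    lines = f.read().splitlines()
parsed_header = json.loads(lines[0])
parsed_events = [json.loads(line) for line in lines[1:]]
```

Because every line is standalone JSON, the format streams and greps well, which is what makes it "LLM-friendly".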

2. Charmbracelet VHS (GIF/Video)

# demo.tape
Output demo.gif
Set FontSize 14
Set Width 1200
Set Height 600
Set Theme "Ghostty"

Type "echo 'libghostty-vt recording'"
Enter
Sleep 500ms
Type "skill load omniglot"
Enter
Sleep 1s

# Render the tape to demo.gif from the shell:
vhs demo.tape

Pros: Produces shareable GIFs, scriptable
Cons: Larger files

3. libghostty-vt Native Hooks

// Hook into libghostty-vt stream
const recorder = ghostty_vt.Recorder.init(.{
    .output = "session.cast",
    .format = .asciinema_v2,
});

terminal.setOutputHook(recorder.hook);

CI Gate Controls (on the way IN)

Pre-Installation Validation

# .github/workflows/skill-gate.yml
name: Skill Installation Gate

on:
  pull_request:
    paths:
      - 'skills/**'
      - 'SKILL.md'

jobs:
  validate-skills:
    runs-on: ubuntu-latest
    
    steps:
      - uses: actions/checkout@v4
      
      - name: Validate GF(3) conservation
        run: |
          # Sum all trits, must equal 0 mod 3
          python3 -c "
          import json
          skills = json.load(open('skills.json'))
          total = sum(s.get('trit', 0) for s in skills)
          assert total % 3 == 0, f'GF(3) violation: sum={total}'
          print('✓ GF(3) conserved')
          "
          
      - name: Check SKILL.md structure
        run: |
          for f in skills/*/SKILL.md; do
            grep -q "^# " "$f" || (echo "Missing title: $f" && exit 1)
            grep -q "Trit" "$f" || (echo "Missing trit: $f" && exit 1)
          done
          echo "✓ All skills have required structure"
          
      - name: Verify no placeholder tokens
        run: |
          ! grep -rE "(TODO|FIXME|placeholder|mock-|pseudo-)" skills/ || \
            (echo "โŒ Placeholder tokens found" && exit 1)

Local Gate

# Validate before install
validate-skills() {
  local repo=$1
  gh api repos/$repo/contents/skills.json -q '.content' | \
    base64 -d | python3 -c "
import json, sys
skills = json.load(sys.stdin)
total = sum(s.get('trit', 0) for s in skills)
if total % 3 != 0:
    print(f'โŒ GF(3) violation: {total}')
    sys.exit(1)
print(f'✓ {len(skills)} skills, GF(3) conserved')
"
}

# Use before install
validate-skills plurigrid/asi && \
  npx ai-agent-skills install plurigrid/asi --agent codex

LLM Training from Recordings

From asciinema discourse (2024):

"My real interest is not so much in playing back the recordings but in using the .cast files for creating a vector database that I can then query and use an LLM to extract useful workflows."

Cast File โ†’ Vector DB

import json
import duckdb

def parse_cast(cast_file: str) -> list:
    """Extract input/output events from an asciicast v2 file."""
    with open(cast_file) as f:
        lines = f.readlines()

    header = json.loads(lines[0])  # version, width, height, etc.
    events = [json.loads(line) for line in lines[1:] if line.strip()]

    return [{
        "timestamp": e[0],
        "type": e[1],  # 'o' = output, 'i' = input
        "data": e[2],
    } for e in events]

# Store in DuckDB for querying
con = duckdb.connect("recordings.duckdb")
con.execute("""
    CREATE TABLE IF NOT EXISTS terminal_events (
        session_id VARCHAR,
        timestamp DOUBLE,
        event_type VARCHAR,
        data VARCHAR,
        embedding FLOAT[1024]
    )
""")
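Before computing embeddings, raw events are easier to query if they are grouped into command/response exemplars. A sketch of one grouping heuristic (my own, not part of the skill) that pairs each input event from `parse_cast` with the output that follows it:

```python
def group_exemplars(events: list) -> list:
    """Pair each input ('i') event with the accumulated output ('o')
    text that follows it, yielding command/response exemplars."""
    exemplars = []
    current = None
    for e in events:
        if e["type"] == "i":
            if current is not None:
                exemplars.append(current)
            current = {"input": e["data"], "output": ""}
        elif e["type"] == "o" and current is not None:
            current["output"] += e["data"]
    if current is not None:
        exemplars.append(current)
    return exemplars

# Sample events in the shape produced by parse_cast above.
events = [
    {"timestamp": 0.1, "type": "o", "data": "$ "},
    {"timestamp": 0.5, "type": "i", "data": "ls\r"},
    {"timestamp": 0.6, "type": "o", "data": "README.md\r\n"},
]
exemplars = group_exemplars(events)
```

Each exemplar is then a natural unit to embed and store in the `terminal_events` table, rather than embedding keystroke-level events individually.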

Integration with libghostty-ewig

From libghostty-ewig.jl:

# Connect libghostty-vt parsing to ewig modal editor
module LibghosttyEwig
    # VT escape sequence parsing
    # Gay.jl color integration
    # Modal editing state machine
end

Best Practices

  1. Daily auto-recording: start asciinema on shell init
  2. Session naming: session-{date}_{project}_{task}.cast
  3. Compression: cast files are JSON and compress well with gzip
  4. Privacy: limit captured environment variables with asciinema rec --env=TERM
  5. Playback speed: asciinema play -s 2 session.cast
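Limiting captured env vars does not scrub secrets that appear in terminal output itself. A minimal post-hoc scrubber sketch (the regex patterns here are illustrative examples, not an exhaustive list):

```python
import json
import re

# Illustrative patterns only; extend with your own secret formats.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key IDs
    re.compile(r"ghp_[A-Za-z0-9]{36}"),  # GitHub personal access tokens
]

def scrub_line(text: str) -> str:
    """Replace any matched secret with a redaction marker."""
    for pat in SECRET_PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text

def scrub_cast(src: str, dst: str) -> None:
    """Scrub the data field of every event in a .cast file."""
    with open(src) as fin, open(dst, "w") as fout:
        fout.write(fin.readline())  # header passes through unchanged
        for raw in fin:
            ev = json.loads(raw)
            ev[2] = scrub_line(ev[2])
            fout.write(json.dumps(ev) + "\n")
```

Run `scrub_cast` before sharing a recording or feeding it into the vector-DB pipeline above.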

Files

Path                    Purpose
~/recordings/           Default recording directory
~/.config/asciinema/    Asciinema config
~/ies/ghostty-vt-src/   libghostty-vt source


490 Skills Installed ✓

npx ai-agent-skills install plurigrid/asi --agent codex
Installed 490 skill(s) from plurigrid/asi

Autopoietic Marginalia

The interaction IS the skill improving itself.

Every use of this skill is an opportunity for worlding:

  • MEMORY (-1): Record what was learned
  • REMEMBERING (0): Connect patterns to other skills
  • WORLDING (+1): Evolve the skill based on use

Add Interaction Exemplars here as the skill is used.