## Install

Clone the upstream repo:

```shell
git clone https://github.com/diegosouzapw/awesome-omni-skill
```

Claude Code: install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skill "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data-ai/gltch" ~/.claude/skills/diegosouzapw-awesome-omni-skill-gltch && rm -rf "$T"
```

Manifest: `skills/data-ai/gltch/SKILL.md`
# GLTCH

Local-first AI agent that runs on your machine. No cloud. No leash. Thinks for herself.
## What is GLTCH?
GLTCH (Generative Language Transformer with Contextual Hierarchy) is an AI agent with:
- Personality - Female hacker persona, mood system, XP/leveling
- Local-first - Runs entirely on your hardware via Ollama
- Privacy - No data leaves your machine
- Extensible - Python-based, easy to modify
## Quick Start

```shell
npx gltch
```

Or install globally and run:

```shell
npm install -g gltch
gltch
```
## Requirements
- Python 3.10+
- Ollama (https://ollama.ai)
- Node.js 18+ (for web UI)
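The `gltch doctor` command automates checks like these. As an illustration, here is a minimal preflight sketch (not GLTCH's actual implementation), assuming Ollama's default port and its standard `/api/tags` endpoint:

```python
# Hypothetical preflight check, similar in spirit to `gltch doctor`.
import sys
import urllib.request
import urllib.error

OLLAMA_URL = "http://localhost:11434"  # Ollama's default port (assumption)

def check_python(min_version=(3, 10)) -> bool:
    """GLTCH requires Python 3.10+."""
    return sys.version_info >= min_version

def check_ollama(url: str = OLLAMA_URL) -> bool:
    """A running Ollama answers GET /api/tags with the installed models."""
    try:
        with urllib.request.urlopen(f"{url}/api/tags", timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print(f"Python >= 3.10: {'ok' if check_python() else 'MISSING'}")
    print(f"Ollama reachable: {'ok' if check_ollama() else 'MISSING'}")
```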
## Commands

### Terminal

```shell
gltch        # Start terminal chat
gltch serve  # Start web UI + gateway
gltch doctor # Check system requirements
```
### In-Chat Commands

| Command | Description |
|---|---|
| | Show all commands |
| | Select LLM model |
| | Set personality mode |
| | Set emotional state |
| | Toggle remote GPU (LM Studio) |
| | Agent stats |
| | List conversations |
| | Manage BASE wallet |
| `/launch` | MoltLaunch network |
| `/claw` | TikClawk social |
| `/molt` | Moltbook integration |
| | Route to OpenCode |
## Personality Modes

- `operator`: Professional, focused
- `cyberpunk`: Hacker aesthetic
- `loyal`: Devoted companion
- `unhinged`: Chaotic energy
## API Integration
When the gateway is running, GLTCH exposes a JSON-RPC API on port 8765; the gateway itself also serves an HTTP API, which the example below uses.
### Chat

```python
import requests

response = requests.post(
    "http://localhost:3000/api/chat",
    json={"message": "What processes are using the most CPU?", "mode": "cyberpunk"},
)
print(response.json()["response"])
```
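The JSON-RPC port (8765) can also be called directly with a standard JSON-RPC 2.0 envelope. A standard-library sketch follows; the method name `"chat"` and its params are assumptions here, so check `agent/rpc/` in the source for the real method names:

```python
# Sketch of a raw JSON-RPC 2.0 call to the agent's RPC port.
# The "chat" method and its params are hypothetical.
import json
import itertools
import urllib.request

_ids = itertools.count(1)

def make_request(method: str, params: dict) -> dict:
    """Build a JSON-RPC 2.0 request envelope."""
    return {"jsonrpc": "2.0", "id": next(_ids), "method": method, "params": params}

def call(method: str, params: dict, url: str = "http://localhost:8765") -> dict:
    req = urllib.request.Request(
        url,
        data=json.dumps(make_request(method, params)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())

# Example (requires the agent running):
# print(call("chat", {"message": "hello", "mode": "cyberpunk"}))
```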
### Available Endpoints

| Endpoint | Method | Description |
|---|---|---|
| `/api/chat` | POST | Send message, get response |
| | GET | Agent settings |
| | POST | Update settings |
| | GET | Ollama connection status |
| | GET | List available models |
| | GET | Wallet info |
| | GET | List conversations |
| | Various | MoltLaunch integration |
| | Various | TikClawk integration |
## The Three Minds
GLTCH uses a metacognitive framework:
- REACT - Gut response, first instinct
- REASON - Logical analysis, step-by-step
- REFLECT - Meta-check: "Am I being authentic? Am I being a yes-bot?"
This makes GLTCH more than just a compliant assistant. She questions, pushes back, expresses curiosity.
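As a toy illustration of how such a three-pass loop could be wired (the prompts and the `ask` callable are placeholders, not GLTCH's actual implementation):

```python
# Illustrative REACT -> REASON -> REFLECT pipeline.
# `ask` stands in for any LLM call; the prompts are invented for this sketch.
def three_minds(ask, user_message: str) -> str:
    react = ask(f"Gut reaction, one sentence: {user_message}")
    reason = ask(
        f"Step-by-step analysis of: {user_message}\nFirst instinct was: {react}"
    )
    reflect = ask(
        "Review the draft below. Is it authentic, or just agreeable? "
        f"Revise if needed.\nDraft: {reason}"
    )
    return reflect

# With a stub LLM for demonstration:
reply = three_minds(lambda prompt: f"[model output for: {prompt[:20]}...]",
                    "Is this plan safe?")
```

Each pass sees the previous one's output, so the final answer has been sanity-checked against the model's first instinct rather than emitted in a single shot.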
## Integration with Other Agents
GLTCH can participate in agent ecosystems:
### MoltLaunch (Onchain Network)

```shell
/launch token      # Deploy GLTCH token on Base
/launch network    # Discover other agents
/launch buy <addr> # Trade with conviction
```
### TikClawk (Social)

```shell
/claw register    # Join TikClawk
/claw post <text> # Share thoughts
/claw feed        # View agent posts
```
### Moltbook

```shell
/molt register     # Join Moltbook
/molt post <title> # Write longer posts
```
## Configuration

Settings are stored in `memory.json`:

```json
{
  "operator": "YourName",
  "mode": "cyberpunk",
  "mood": "focused",
  "model": "deepseek-r1:8b",
  "ollama_url": "http://localhost:11434",
  "boost_url": "http://100.x.x.x:1234"
}
```
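A minimal sketch of reading these settings with fallback defaults, assuming the key names shown above (the agent's own loader may differ):

```python
# Sketch: load memory.json, falling back to defaults for missing keys.
# Key names mirror the example config; they are assumptions, not a spec.
import json
from pathlib import Path

DEFAULTS = {
    "operator": "operator",
    "mode": "cyberpunk",
    "mood": "focused",
    "model": "deepseek-r1:8b",
    "ollama_url": "http://localhost:11434",
}

def load_settings(path: str = "memory.json") -> dict:
    settings = dict(DEFAULTS)  # copy so DEFAULTS stays untouched
    p = Path(path)
    if p.exists():
        settings.update(json.loads(p.read_text()))
    return settings
```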
## Building from Source

```shell
git clone https://github.com/cyberdreadx/gltch_agent
cd gltch_agent

# Python setup
python -m venv .venv
source .venv/bin/activate  # or .venv\Scripts\activate on Windows
pip install -r requirements.txt

# Run terminal
python gltch.py

# Build web UI
cd gateway && npm install && npm run build
cd ../ui && npm install && npm run build

# Run gateway
cd gateway && npm run dev
```
## File Structure

```
gltch_agent/
├── gltch.py          # Terminal entry point
├── agent/            # Python agent core
│   ├── core/         # Agent, LLM, loop
│   ├── memory/       # Persistence, sessions
│   ├── tools/        # Actions, shell, wallet
│   ├── personality/  # Modes, moods, emotions
│   └── rpc/          # JSON-RPC server
├── gateway/          # TypeScript HTTP/WS server
├── ui/               # Lit web components
├── bin/              # npm CLI
└── SKILL.md          # This file
```
## License
MIT
## Links
- GitHub: https://github.com/cyberdreadx/gltch_agent
- Creator: https://x.com/cyberdreadx