## Install

**Source** · Clone the upstream repo:

```bash
git clone https://github.com/openclaw/skills
```

**Claude Code** · Install into `~/.claude/skills/`:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/adroidian/chitin-core" ~/.claude/skills/openclaw-skills-chitin-core && rm -rf "$T"
```

**OpenClaw** · Install into `~/.openclaw/skills/`:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/adroidian/chitin-core" ~/.openclaw/skills/openclaw-skills-chitin-core && rm -rf "$T"
```
**Manifest:** `skills/adroidian/chitin-core/SKILL.md`
# chitin-core

Route tasks to the cheapest capable model. Never crash on rate limits.
## Activation

When spawning sub-agents or delegating tasks, use ModelRouter to select the optimal model.

Trigger phrases: "route this", "spawn a sub-agent", "delegate", or any time you need to choose a model for a task.
## Usage

### Route a Task

```bash
node ~/.openclaw/workspace/skills/chitin-core/scripts/router.js route "task description here"
```

Returns JSON:

```json
{"tier":"MEDIUM","model":"google-antigravity/gemini-3.1-pro","confidence":0.85,"estimatedCost":0.005,"signals":["codeSignals:2×1.2=2.4"]}
```

Use the returned `model` value in `sessions_spawn`.
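A spawning agent can pick the model out of that JSON like so. This is a minimal sketch: the literal string below is the sample output shown above, and in practice it would come from the `router.js` process's stdout; the 0.5 confidence cutoff is an invented threshold, not part of the router.

```javascript
// Parse the router's JSON output and extract the model to spawn with.
// `raw` is the sample route result from above; real code would capture
// it from the router.js process's stdout instead.
const raw = '{"tier":"MEDIUM","model":"google-antigravity/gemini-3.1-pro","confidence":0.85,"estimatedCost":0.005,"signals":["codeSignals:2×1.2=2.4"]}';

const result = JSON.parse(raw);
if (result.confidence < 0.5) {
  // Invented threshold: a low-confidence classification might warrant
  // a manual override tag instead of blind trust.
  console.warn(`low confidence ${result.confidence} for tier ${result.tier}`);
}
console.log(result.model); // "google-antigravity/gemini-3.1-pro"
```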
### Handle Failures

If a spawned session fails with a rate limit or error:

```bash
node ~/.openclaw/workspace/skills/chitin-core/scripts/router.js fail "provider/model" "error message"
```

Then re-route; the failed model will be skipped:

```bash
node ~/.openclaw/workspace/skills/chitin-core/scripts/router.js route "same task"
```
### Check Health

```bash
node ~/.openclaw/workspace/skills/chitin-core/scripts/router.js health
```

### View Costs

```bash
node ~/.openclaw/workspace/skills/chitin-core/scripts/router.js costs
```

### Validate Config

```bash
node ~/.openclaw/workspace/skills/chitin-core/scripts/router.js validate
```
## Workflow

1. Receive task from user
2. Run `router.js route "<task>"` to get the optimal model
3. `sessions_spawn` with the returned model
4. If the spawn fails → `router.js fail "<model>" "<error>"` → retry the route
5. Return the result to user
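The loop above can be sketched in miniature. `routeTask`, `failModel`, and `sessionsSpawn` here are hypothetical in-memory stand-ins for the `router.js` CLI calls and the spawn tool; only the control flow (route, spawn, mark failure, re-route) is what the workflow actually prescribes.

```javascript
// Hypothetical stand-ins for `router.js route`, `router.js fail`, and
// sessions_spawn, so the retry control flow is runnable on its own.
const models = ['google-antigravity/gemini-3.1-pro', 'openai/gpt-5.2'];
const failed = new Set();

function routeTask(task) {
  // The real router classifies the task; here the first non-failed model wins.
  const model = models.find((m) => !failed.has(m));
  if (!model) throw new Error('all models exhausted');
  return { model };
}

function failModel(model, error) {
  failed.add(model); // the real router also records the error and a retry time
}

function sessionsSpawn(model, task) {
  // Simulate the first-choice model being rate-limited.
  if (model === 'google-antigravity/gemini-3.1-pro') {
    throw new Error('429 rate limited');
  }
  return `done by ${model}`;
}

function runTask(task) {
  for (;;) {
    const { model } = routeTask(task); // step 2: route
    try {
      return sessionsSpawn(model, task); // step 3: spawn
    } catch (err) {
      failModel(model, err.message); // step 4: report failure, then re-route
    }
  }
}

console.log(runTask('summarize the changelog')); // "done by openai/gpt-5.2"
```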
## Tiers
| Tier | Use Case | Models |
|---|---|---|
| LIGHT | Greetings, simple Q&A, status checks | Flash, DeepSeek, gpt-5-mini, Groq, Ollama |
| MEDIUM | Code, summaries, standard tasks | Gemini Pro, gpt-5.2, DeepSeek Reasoner |
| HEAVY | Architecture, complex reasoning, agentic | gpt-5.2-pro, o3, Codex |
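To illustrate how a task might land in one of these tiers, here is a hypothetical keyword-signal classifier. The real classification lives in `router.js` and `config.json` (note the weighted `signals` field in the sample output above); the hint lists and the rule order below are invented for illustration only.

```javascript
// Hypothetical keyword-based tier classifier. The real router's signals
// and weights are defined in router.js/config.json and will differ.
const HEAVY_HINTS = ['architecture', 'design', 'agentic', 'complex reasoning'];
const MEDIUM_HINTS = ['code', 'implement', 'summarize', 'function', 'bug'];

function classify(task) {
  const t = task.toLowerCase();
  if (HEAVY_HINTS.some((h) => t.includes(h))) return 'HEAVY';
  if (MEDIUM_HINTS.some((h) => t.includes(h))) return 'MEDIUM';
  return 'LIGHT'; // greetings, simple Q&A, status checks
}
```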
## Override Tags

Include in task text to force a tier:

- `@light`: force cheapest model
- `@medium`: force mid-tier
- `@heavy`: force most capable
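Detecting such a tag is a one-line scan over the task text. A sketch, assuming the tag can appear anywhere in the text (the real router's handling, e.g. whether it strips the tag before routing, is not specified here):

```javascript
// Return the forced tier if the task text carries an override tag,
// or null to fall through to normal classification.
function overrideTier(task) {
  const m = task.match(/@(light|medium|heavy)\b/i);
  return m ? m[1].toUpperCase() : null;
}
```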
## Graceful Degradation
If all models in a tier are rate-limited, the router automatically:
- Tries adjacent tiers (escalate or downgrade)
- Falls back to local Ollama if configured
- Returns structured error with retry time (never crashes)
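The fallback order described above can be sketched as follows. This is an assumption-laden illustration, not the router's actual code: `isAvailable`, the tier probe order, and the `retryAfterMs` figure are all invented for the sketch.

```javascript
// Hypothetical sketch of the fallback order: own tier, adjacent tiers,
// then local Ollama, then a structured error instead of a crash.
const TIERS = ['LIGHT', 'MEDIUM', 'HEAVY'];

function degrade(tier, isAvailable, ollamaModel = null) {
  const i = TIERS.indexOf(tier);
  // Try the requested tier first, then escalate/downgrade to neighbours.
  const order = [TIERS[i], TIERS[i + 1], TIERS[i - 1]].filter(Boolean);
  for (const t of order) {
    if (isAvailable(t)) return { ok: true, tier: t };
  }
  if (ollamaModel) return { ok: true, tier: 'LOCAL', model: ollamaModel };
  // Structured error with a retry hint; the caller decides when to retry.
  return { ok: false, error: 'all tiers rate-limited', retryAfterMs: 60000 };
}
```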
## Configuration

Edit `config.json` in the skill directory to:
- Add/remove models per tier
- Adjust cost figures
- Tune classification boundaries
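The actual schema is defined by the `config.json` shipped with the skill; the fragment below only illustrates the three knobs listed above (tier model lists, cost figures, classification boundaries), and every field name and number in it is invented.

```json
{
  "tiers": {
    "LIGHT": { "models": ["ollama/llama3"], "maxScore": 1.0 },
    "MEDIUM": { "models": ["google-antigravity/gemini-3.1-pro"], "maxScore": 3.0 },
    "HEAVY": { "models": ["openai/gpt-5.2-pro"] }
  },
  "costsPerMTok": { "google-antigravity/gemini-3.1-pro": 1.25 }
}
```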