# minion-models · claude-skill-registry-data

## Install

**Source** · Clone the upstream repo:

```shell
git clone https://github.com/majiayu000/claude-skill-registry-data
```

**Claude Code** · Install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry-data "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/data/minion-models" ~/.claude/skills/majiayu000-claude-skill-registry-data-minion-models \
  && rm -rf "$T"
```

## Manifest: `data/minion-models/SKILL.md` (source content)
# Minion Models

Manage Ollama models for your minion squad.

## Quick commands

**List installed models:**

```shell
ollama list
```

**Pull a model:**

```shell
ollama pull qwen2.5-coder:1.5b
```

**Check model info:**

```shell
ollama show qwen2.5-coder:1.5b
```

**Remove a model:**

```shell
ollama rm qwen2.5-coder:0.5b
```
## Presets
| Preset | Models | Download | RAM |
|---|---|---|---|
| nano | qwen2.5-coder:0.5b | ~350MB | ~1GB |
| small | qwen2.5-coder:1.5b | ~1GB | ~2GB |
| medium | qwen2.5-coder:7b | ~4.5GB | ~8GB |
| large | qwen2.5-coder:14b | ~9GB | ~16GB |
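The table above maps preset names to model tags; a tiny helper can make that mapping scriptable. `preset_model` is a hypothetical function sketched here, not something the skill ships:

```shell
#!/bin/sh
# Map a preset name to its Ollama model tag.
# Mapping taken from the presets table above; preset_model itself is an
# assumption, not part of the skill.
preset_model() {
  case "$1" in
    nano)   echo "qwen2.5-coder:0.5b" ;;
    small)  echo "qwen2.5-coder:1.5b" ;;
    medium) echo "qwen2.5-coder:7b" ;;
    large)  echo "qwen2.5-coder:14b" ;;
    *) echo "unknown preset: $1" >&2; return 1 ;;
  esac
}

# Usage: pull whichever preset you want by name
# ollama pull "$(preset_model small)"
```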
### Pull preset models

nano:

```shell
ollama pull qwen2.5-coder:0.5b
```

small (recommended):

```shell
ollama pull qwen2.5-coder:1.5b
```

medium:

```shell
ollama pull qwen2.5-coder:7b
```

large:

```shell
ollama pull qwen2.5-coder:14b
```
### Switch preset

Edit `llm_gc/config/models.yaml` and change the preset line:

```yaml
preset: small  # Change to: nano, small, medium, or large
```

Or use sed:

```shell
sed -i.bak 's/^preset:.*/preset: medium/' llm_gc/config/models.yaml
```
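The sed one-liner will happily write an invalid preset name. A defensive sketch that validates first; the `set_preset` function is an assumption, not part of the skill, and the config path comes from above:

```shell
#!/bin/sh
# Sketch: switch preset only if the name is one of the known presets.
# Default config path llm_gc/config/models.yaml is taken from the doc above.
set_preset() {
  preset="$1"
  config="${2:-llm_gc/config/models.yaml}"
  case "$preset" in
    nano|small|medium|large) ;;
    *) echo "invalid preset: $preset (use nano|small|medium|large)" >&2; return 1 ;;
  esac
  # Rewrite the preset line in place, keeping a .bak backup (as above)
  sed -i.bak "s/^preset:.*/preset: $preset/" "$config"
  grep '^preset:' "$config"   # echo the new value so the change is visible
}

# Usage:
# set_preset medium
```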
## Check disk usage

```shell
# Total Ollama storage
du -sh ~/.ollama/models

# Per-model breakdown
ls -lh ~/.ollama/models/blobs/ | head -20
```
## Recommended models
| Task | Model | Why |
|---|---|---|
| Quick patches | qwen2.5-coder:1.5b | Fast, good enough |
| Quality patches | qwen2.5-coder:7b | Better reasoning |
| Code review | qwen2.5-coder:7b+ | Needs context |
| Simple questions | qwen2.5-coder:0.5b | Speed matters |
## Alternative models

```shell
# DeepSeek (alternative to Qwen)
ollama pull deepseek-coder:1.3b
ollama pull deepseek-coder:6.7b

# CodeLlama (Meta)
ollama pull codellama:7b

# StarCoder
ollama pull starcoder2:3b
```
## Troubleshooting

**Model not found:**

```shell
ollama pull <model-name>
```
**Slow responses:**

- Try a smaller model
- Check `htop` for RAM pressure
- Reduce `--workers` in swarm
**Out of disk space:**

```shell
# Remove unused models
ollama rm <model-name>

# Check what's installed
ollama list
```
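When space is tight, bulk cleanup can be scripted from `ollama list` output, which prints a header row and then one model per line with its name in the first column. The `prune_plan` helper below is a sketch (its name and the keep-list approach are assumptions) and only prints the `ollama rm` commands so you can review them before running anything:

```shell
#!/bin/sh
# Sketch: read `ollama list` output on stdin and print an `ollama rm` command
# for every installed model NOT on the space-separated keep-list in $1.
# prune_plan is a hypothetical helper, not part of the skill.
prune_plan() {
  keep="$1"
  awk 'NR > 1 { print $1 }' | while read -r model; do
    case " $keep " in
      *" $model "*) ;;                  # on the keep-list: skip
      *) echo "ollama rm $model" ;;     # dry run: print, do not execute
    esac
  done
}

# Usage (dry run; pipe the output to sh once you have reviewed it):
# ollama list | prune_plan "qwen2.5-coder:1.5b"
```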
**Model quality issues:**

- Upgrade the preset: nano → small → medium
- Add more context with `--read`
- Simplify the task