# claude-skill-registry-data / minion-models

## Install

Clone the upstream repo:

```sh
git clone https://github.com/majiayu000/claude-skill-registry-data
```

Claude Code: install into `~/.claude/skills/`:

```sh
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry-data "$T" && mkdir -p ~/.claude/skills && cp -r "$T/data/minion-models" ~/.claude/skills/majiayu000-claude-skill-registry-data-minion-models && rm -rf "$T"
```

Manifest: `data/minion-models/SKILL.md`
## Source content

# Minion Models

Manage Ollama models for your minion squad.

## Quick commands

List installed models:

```sh
ollama list
```

Pull a model:

```sh
ollama pull qwen2.5-coder:1.5b
```

Check model info:

```sh
ollama show qwen2.5-coder:1.5b
```

Remove a model:

```sh
ollama rm qwen2.5-coder:0.5b
```
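The pull step can be made idempotent with a small wrapper that skips models already present. This is a sketch, not part of the skill: `list_models` and `ensure_model` are hypothetical helper names, and `ollama` is assumed to be on `PATH`.

```sh
# list_models: model names from `ollama list`, skipping the header row.
list_models() { ollama list | awk 'NR > 1 { print $1 }'; }

# ensure_model: pull a model only if it is not already installed.
ensure_model() {
  list_models | grep -qx "$1" || ollama pull "$1"
}

# Usage: ensure_model qwen2.5-coder:1.5b
```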

## Presets

| Preset | Model              | Download | RAM   |
|--------|--------------------|----------|-------|
| nano   | qwen2.5-coder:0.5b | ~350MB   | ~1GB  |
| small  | qwen2.5-coder:1.5b | ~1GB     | ~2GB  |
| medium | qwen2.5-coder:7b   | ~4.5GB   | ~8GB  |
| large  | qwen2.5-coder:14b  | ~9GB     | ~16GB |
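The preset-to-model mapping in the table can be expressed as a small shell helper. This is illustrative only; `preset_model` is not a command the skill ships.

```sh
# preset_model: map a preset name to its model tag (values from the table).
preset_model() {
  case "$1" in
    nano)   echo qwen2.5-coder:0.5b ;;
    small)  echo qwen2.5-coder:1.5b ;;
    medium) echo qwen2.5-coder:7b ;;
    large)  echo qwen2.5-coder:14b ;;
    *)      echo "unknown preset: $1" >&2; return 1 ;;
  esac
}
```

This pairs naturally with the pull commands below, e.g. `ollama pull "$(preset_model small)"`.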

## Pull preset models

nano:

```sh
ollama pull qwen2.5-coder:0.5b
```

small (recommended):

```sh
ollama pull qwen2.5-coder:1.5b
```

medium:

```sh
ollama pull qwen2.5-coder:7b
```

large:

```sh
ollama pull qwen2.5-coder:14b
```

## Switch preset

Edit `llm_gc/config/models.yaml` and change the preset line:

```yaml
preset: small  # Change to: nano, small, medium, or large
```

Or use sed:

```sh
sed -i.bak 's/^preset:.*/preset: medium/' llm_gc/config/models.yaml
```
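The sed one-liner can be wrapped so the preset name is validated before the file is touched. A sketch only: `set_preset` is a hypothetical helper, and the default path follows the example above.

```sh
# set_preset: validate the preset, then rewrite the `preset:` line in place.
# A .bak copy of the config is kept, matching the sed example above.
set_preset() {
  local preset="$1" cfg="${2:-llm_gc/config/models.yaml}"
  case "$preset" in
    nano|small|medium|large) ;;
    *) echo "unknown preset: $preset" >&2; return 1 ;;
  esac
  sed -i.bak "s/^preset:.*/preset: $preset/" "$cfg"
}

# Usage: set_preset medium
```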

## Check disk usage

```sh
# Total Ollama storage
du -sh ~/.ollama/models

# Per-model breakdown
ls -lh ~/.ollama/models/blobs/ | head -20
```

## Recommended models

| Task             | Model              | Why               |
|------------------|--------------------|-------------------|
| Quick patches    | qwen2.5-coder:1.5b | Fast, good enough |
| Quality patches  | qwen2.5-coder:7b   | Better reasoning  |
| Code review      | qwen2.5-coder:7b+  | Needs context     |
| Simple questions | qwen2.5-coder:0.5b | Speed matters     |
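The task-to-model choices above can be captured the same way as the preset table. Illustrative only; the task labels are made-up shorthand for the table rows, not flags the skill understands.

```sh
# model_for_task: suggest a model per the recommendations table.
model_for_task() {
  case "$1" in
    quick-patch)     echo qwen2.5-coder:1.5b ;;  # fast, good enough
    quality-patch)   echo qwen2.5-coder:7b ;;    # better reasoning
    code-review)     echo qwen2.5-coder:7b ;;    # needs context
    simple-question) echo qwen2.5-coder:0.5b ;;  # speed matters
    *) echo "unknown task: $1" >&2; return 1 ;;
  esac
}
```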

## Alternative models

```sh
# DeepSeek (alternative to Qwen)
ollama pull deepseek-coder:1.3b
ollama pull deepseek-coder:6.7b

# CodeLlama (Meta)
ollama pull codellama:7b

# StarCoder
ollama pull starcoder2:3b
```

## Troubleshooting

Model not found:

```sh
ollama pull <model-name>
```

Slow responses:

- Try a smaller model
- Check `htop` for RAM pressure
- Reduce `--workers` in the swarm

Out of disk space:

```sh
# Remove unused models
ollama rm <model-name>

# Check what's installed
ollama list
```
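One way to decide what to remove is to diff installed models against a keep list. The snippet below runs on sample `ollama list` output for illustration; in practice, pipe the real command in its place, and edit the keep list to taste.

```sh
# Models worth keeping (example list, one name per line).
keep='qwen2.5-coder:1.5b'

# Sample `ollama list` output (header row plus one row per model).
sample='NAME                ID    SIZE    MODIFIED
qwen2.5-coder:1.5b  abc   986 MB  2 days ago
qwen2.5-coder:0.5b  def   397 MB  5 days ago'

# Print models not in the keep list -- candidates for `ollama rm`.
printf '%s\n' "$sample" | awk 'NR > 1 { print $1 }' | grep -vxF "$keep"
```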

Model quality issues:

- Upgrade the preset: nano → small → medium
- Add more context with `--read`
- Simplify the task