install
source · Clone the upstream repo
git clone https://github.com/MacPhobos/research-mind
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/MacPhobos/research-mind "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/toolchains-ai-ops-local-llm-ops" ~/.claude/skills/macphobos-research-mind-toolchains-ai-ops-local-llm-ops && rm -rf "$T"
manifest:
.claude/skills/toolchains-ai-ops-local-llm-ops/skill.md
Local LLM Ops (Ollama)
Overview
Your localLLM repo provides a full local LLM toolchain on Apple Silicon: setup scripts, a rich CLI chat launcher, benchmarks, and diagnostics. The operational path is: install Ollama, ensure the service is running, initialize the venv, pull models, then launch chat or benchmarks.
Quick Start
./setup_chatbot.sh
./chatllm
If no models are present:
ollama pull mistral
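This check can be scripted; a minimal sketch, assuming `ollama list` prints a header row followed by one line per installed model:

```bash
# Pull a default model only when no models are installed yet.
if [ "$(ollama list | tail -n +2 | wc -l)" -eq 0 ]; then
  ollama pull mistral
fi
```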
Setup Checklist
- Install Ollama: brew install ollama
- Start the service: brew services start ollama
- Run setup: ./setup_chatbot.sh
- Verify service: curl http://localhost:11434/api/version
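For repeatable setup, the checklist can be run end to end as one script; a sketch, assuming Homebrew is available and you are in the repo root:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Install Ollama via Homebrew only if the binary is missing.
command -v ollama >/dev/null || brew install ollama

# Start the background service; tolerate "already started".
brew services start ollama || true

# Run the repo's setup script.
./setup_chatbot.sh

# Verify the API, allowing the service a few seconds to come up.
for _ in 1 2 3 4 5; do
  curl -sf http://localhost:11434/api/version && break
  sleep 1
done
```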
Chat Launchers
- ./chatllm (primary launcher)
- ./chat or ./chat.py (alternate launchers)
- Aliases: run ./install_aliases.sh, then use llm, llm-code, or llm-fast
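The skill does not spell out what the aliases expand to; a hypothetical sketch of the kind of definitions install_aliases.sh might append to your shell rc (the repo path and the model/task bindings are assumptions, not the script's actual output):

```bash
# Hypothetical definitions; the authoritative ones come from ./install_aliases.sh.
alias llm='~/localLLM/chat'                                 # default chat launcher
alias llm-code='~/localLLM/chat -t coding -m codellama:70b' # coding task mode
alias llm-fast='~/localLLM/chat -m mistral'                 # smaller, faster model
```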
Task modes:
./chat -t coding -m codellama:70b
./chat -t creative -m llama3.1:70b
./chat -t analytical
Benchmark Workflow
Benchmarks are scripted in scripts/run_benchmarks.sh:
./scripts/run_benchmarks.sh
This runs bench_ollama.py with:
- benchmarks/prompts.yaml
- benchmarks/models.yaml
- Multiple runs and max token limits
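The script's internals are not reproduced here; a sketch of the sort of call it likely makes, using the repo's .venv interpreter (the flag names are illustrative assumptions, not bench_ollama.py's actual interface):

```bash
# Hypothetical direct invocation of the benchmark driver.
.venv/bin/python bench_ollama.py \
  --prompts benchmarks/prompts.yaml \
  --models benchmarks/models.yaml \
  --runs 3 \
  --max-tokens 512
```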
Diagnostics
Run the built-in diagnostic script when setup fails:
./diagnose.sh
Common fixes:
- Re-run ./setup_chatbot.sh
- Ensure ollama is in PATH
- Pull at least one model: ollama pull mistral
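Each fix can be verified by hand with commands already shown above; a quick sketch:

```bash
# Binary present?
command -v ollama || echo "ollama is not on PATH"

# Service answering?
curl -sf http://localhost:11434/api/version || echo "Ollama API not responding on :11434"

# At least one model installed?
ollama list
```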
Operational Notes
- Virtualenv lives in
.venv - Chat configs and sessions live under
~/.localllm/ - Ollama API runs at
http://localhost:11434
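Since the API is plain HTTP on that port, it can be exercised directly with curl; for example, Ollama's standard generate endpoint (assuming mistral has been pulled):

```bash
# One-shot, non-streaming completion against the local Ollama API.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "mistral", "prompt": "Reply with one short sentence.", "stream": false}'
```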
Related Skills
toolchains/universal/infrastructure/docker