Nanoclaw add-ollama-tool
Add Ollama MCP server so the container agent can call local models and optionally manage the Ollama model library.
git clone https://github.com/qwibitai/nanoclaw
T=$(mktemp -d) && git clone --depth=1 https://github.com/qwibitai/nanoclaw "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/add-ollama-tool" ~/.claude/skills/qwibitai-nanoclaw-add-ollama-tool && rm -rf "$T"
.claude/skills/add-ollama-tool/SKILL.md
Add Ollama Integration
This skill adds a stdio-based MCP server that exposes local Ollama models as tools for the container agent. Claude remains the orchestrator but can offload work to local models, and can optionally manage the model library directly.
Core tools (always available):
ollama_list_models — list installed Ollama models with name, size, and family
ollama_generate — send a prompt to a specified model and return the response (see the sketch below)
Management tools (opt-in via OLLAMA_ADMIN_TOOLS=true):
ollama_pull_model — pull (download) a model from the Ollama registry
ollama_delete_model — delete a locally installed model to free disk space
ollama_show_model — show model details: modelfile, parameters, and architecture info
ollama_list_running — list models currently loaded in memory with memory usage and processor type
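To make the tool surface concrete, here is a minimal sketch of how a stdio MCP server could expose ollama_generate against Ollama's /api/generate endpoint. It assumes the official @modelcontextprotocol/sdk and zod packages; the actual ollama-mcp-stdio.ts in the repository may be structured differently.

```typescript
// Minimal sketch of a stdio MCP server exposing ollama_generate.
// Assumes @modelcontextprotocol/sdk and zod; not the literal upstream file.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const OLLAMA_HOST = process.env.OLLAMA_HOST ?? "http://host.docker.internal:11434";

const server = new McpServer({ name: "ollama", version: "1.0.0" });

server.tool(
  "ollama_generate",
  "Send a prompt to a local Ollama model and return the response",
  { model: z.string(), prompt: z.string() },
  async ({ model, prompt }) => {
    // stream: false makes Ollama return one JSON object when generation finishes.
    const res = await fetch(`${OLLAMA_HOST}/api/generate`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, prompt, stream: false }),
    });
    const data = (await res.json()) as { response: string };
    return { content: [{ type: "text", text: data.response }] };
  }
);

await server.connect(new StdioServerTransport());
```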
Phase 1: Pre-flight
Check if already applied
Check if container/agent-runner/src/ollama-mcp-stdio.ts exists. If it does, skip to Phase 3 (Configure).
Check prerequisites
Verify Ollama is installed and running on the host:
ollama list
If Ollama is not installed, direct the user to https://ollama.com/download.
If no models are installed, suggest pulling one:
You need at least one model. I recommend:
ollama pull gemma3:1b        # Small, fast (1GB)
ollama pull llama3.2         # Good general purpose (2GB)
ollama pull qwen3-coder:30b  # Best for code tasks (18GB)
Phase 2: Apply Code Changes
Ensure upstream remote
git remote -v
If upstream is missing, add it:
git remote add upstream https://github.com/qwibitai/nanoclaw.git
Merge the skill branch
git fetch upstream skill/ollama-tool
git merge upstream/skill/ollama-tool
This merges in:
- container/agent-runner/src/ollama-mcp-stdio.ts (Ollama MCP server)
- scripts/ollama-watch.sh (macOS notification watcher)
- Ollama MCP config in container/agent-runner/src/index.ts (allowedTools + mcpServers; see the sketch below)
- [OLLAMA] log surfacing in src/container-runner.ts
- OLLAMA_HOST in .env.example
If the merge reports conflicts, resolve them by reading the conflicted files and understanding the intent of both sides.
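For orientation, the Ollama entry merged into container/agent-runner/src/index.ts has roughly this shape. The field names and the mcp__ollama__* tool naming are assumptions for illustration; check the merged file for the exact values.

```typescript
// Illustrative shape of the Ollama MCP registration in index.ts.
// Names are assumptions, not copied from the upstream file.
const mcpServers = {
  ollama: {
    command: "node",
    args: ["./ollama-mcp-stdio.js"], // the built MCP server from this skill
    env: { OLLAMA_HOST: process.env.OLLAMA_HOST ?? "" },
  },
};

// The tools must also be allow-listed before the agent may call them.
const allowedTools = [
  "mcp__ollama__ollama_list_models",
  "mcp__ollama__ollama_generate",
];
```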
Copy to per-group agent-runner
Existing groups have a cached copy of the agent-runner source. Copy the new files:
for dir in data/sessions/*/agent-runner-src; do
  cp container/agent-runner/src/ollama-mcp-stdio.ts "$dir/"
  cp container/agent-runner/src/index.ts "$dir/"
done
Validate code changes
npm run build
./container/build.sh
Build must be clean before proceeding.
Phase 3: Configure
Enable model management tools (optional)
Ask the user:
Would you like the agent to be able to manage Ollama models (pull, delete, inspect, list running)?
- Yes — adds tools to pull new models, delete old ones, show model info, and check what's loaded in memory
- No — the agent can only list installed models and generate responses (you manage models yourself on the host)
If the user wants management tools, add to .env:
OLLAMA_ADMIN_TOOLS=true
If they decline (or don't answer), do not add the variable — management tools will be disabled by default.
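Under the hood this flag can simply gate tool registration in the MCP server. A minimal sketch, reusing the server, z, and OLLAMA_HOST from the earlier example and Ollama's /api/delete endpoint:

```typescript
// Management tools are only registered when OLLAMA_ADMIN_TOOLS=true.
// Sketch only; the upstream server may gate this differently.
if (process.env.OLLAMA_ADMIN_TOOLS === "true") {
  server.tool(
    "ollama_delete_model",
    "Delete a locally installed model to free disk space",
    { model: z.string() },
    async ({ model }) => {
      // Ollama's delete endpoint takes the model name in the request body.
      await fetch(`${OLLAMA_HOST}/api/delete`, {
        method: "DELETE",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model }),
      });
      return { content: [{ type: "text", text: `Deleted ${model}` }] };
    }
  );
}
```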
Set Ollama host (optional)
By default, the MCP server connects to http://host.docker.internal:11434 (Docker Desktop) with a fallback to localhost. To use a custom Ollama host, add to .env:
OLLAMA_HOST=http://your-ollama-host:11434
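One plausible way that default-plus-fallback can be implemented is to probe candidate hosts when OLLAMA_HOST is unset. This is a sketch under that assumption, not the literal upstream logic:

```typescript
// Sketch: prefer OLLAMA_HOST, else probe Docker Desktop's host alias,
// then fall back to localhost.
async function resolveOllamaHost(): Promise<string> {
  if (process.env.OLLAMA_HOST) return process.env.OLLAMA_HOST;

  const candidates = [
    "http://host.docker.internal:11434", // Docker Desktop (macOS/Windows)
    "http://localhost:11434",            // host networking / native Linux
  ];
  for (const host of candidates) {
    try {
      // /api/tags is a cheap liveness check that also lists installed models.
      const res = await fetch(`${host}/api/tags`, { signal: AbortSignal.timeout(2000) });
      if (res.ok) return host;
    } catch {
      // unreachable, try the next candidate
    }
  }
  return candidates[0];
}
```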
Restart the service
launchctl kickstart -k gui/$(id -u)/com.nanoclaw  # macOS
# Linux: systemctl --user restart nanoclaw
Phase 4: Verify
Test inference
Tell the user:
Send a message like: "use ollama to tell me the capital of France"
The agent should use ollama_list_models to find available models, then ollama_generate to get a response.
Test model management (if enabled)
If OLLAMA_ADMIN_TOOLS=true was set, tell the user:
Send a message like: "pull the gemma3:1b model" or "which ollama models are currently loaded in memory?"
The agent should call ollama_pull_model or ollama_list_running respectively.
Monitor activity (optional)
Run the watcher script for macOS notifications when Ollama is used:
./scripts/ollama-watch.sh
Check logs if needed
tail -f logs/nanoclaw.log | grep -i ollama
Look for:
[OLLAMA] >>> Generating — generation started
[OLLAMA] <<< Done — generation completed
[OLLAMA] Pulling model: — pull in progress (management tools)
[OLLAMA] Deleted: — model removed (management tools)
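The [OLLAMA] prefix is what src/container-runner.ts keys on when surfacing these lines. A rough sketch of that kind of filtering, with the function and parameter names invented for illustration:

```typescript
// Hypothetical sketch of forwarding [OLLAMA] lines from container output
// into the main service log; names are illustrative, not from the repo.
import { createInterface } from "node:readline";
import type { Readable } from "node:stream";

function surfaceOllamaLogs(containerStdout: Readable, log: (msg: string) => void): void {
  const rl = createInterface({ input: containerStdout });
  rl.on("line", (line) => {
    // Forward only Ollama-related lines so the main log stays readable.
    if (line.includes("[OLLAMA]")) {
      log(line);
    }
  });
}
```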
Troubleshooting
Agent says "Ollama is not installed"
The agent is trying to run the ollama CLI inside the container instead of using the MCP tools. This means:
- The MCP server wasn't registered — check that container/agent-runner/src/index.ts has the ollama entry in mcpServers
- The per-group source wasn't updated — re-copy files (see Phase 2)
- The container wasn't rebuilt — run ./container/build.sh
"Failed to connect to Ollama"
- Verify Ollama is running: ollama list
- Check Docker can reach the host: docker run --rm curlimages/curl curl -s http://host.docker.internal:11434/api/tags
- If using a custom host, check OLLAMA_HOST in .env
Agent doesn't use Ollama tools
The agent may not know about the tools. Try being explicit: "use the ollama_generate tool with gemma3:1b to answer: ..."
ollama_pull_model times out on large models
Large models (7B+) can take several minutes. The tool uses stream: false so it blocks until complete — this is intentional. For very large pulls, use the host CLI directly: ollama pull <model>
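The blocking behavior mirrors Ollama's /api/pull endpoint: with stream: false it returns a single JSON response only after the download finishes. A sketch of that call (assumed shape, not the literal upstream code):

```typescript
// With stream: false, /api/pull does not respond until the model is fully
// downloaded, which is why ollama_pull_model appears to hang on large models.
async function pullModel(host: string, model: string): Promise<string> {
  const res = await fetch(`${host}/api/pull`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, stream: false }),
  });
  const data = (await res.json()) as { status: string };
  return data.status; // "success" once the pull has completed
}
```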
Management tools not showing up
Ensure OLLAMA_ADMIN_TOOLS=true is set in .env and the service was restarted after adding it.