Claudeclaw add-ollama-tool
Add Ollama MCP server so the container agent can call local models for cheaper/faster tasks like summarization, translation, or general queries.
git clone https://github.com/sbusso/claudeclaw
T=$(mktemp -d) && git clone --depth=1 https://github.com/sbusso/claudeclaw "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/add-ollama-tool" ~/.claude/skills/sbusso-claudeclaw-add-ollama-tool && rm -rf "$T"
skills/add-ollama-tool/SKILL.md

Add Ollama Integration
This skill adds a stdio-based MCP server that exposes local Ollama models as tools for the container agent. Claude remains the orchestrator but can offload work to local models.
Tools added:
- ollama_list_models — lists installed Ollama models
- ollama_generate — sends a prompt to a specified model and returns the response
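These tool names are specific to this skill, but they presumably wrap Ollama's standard HTTP API. A quick way to exercise the same endpoints directly from the host (the mapping and the llama3.2 model name are assumptions for illustration):

```shell
# Hypothetical mapping of the MCP tools onto Ollama's HTTP API:
#   ollama_list_models -> GET  /api/tags
#   ollama_generate    -> POST /api/generate
host="${OLLAMA_HOST:-http://localhost:11434}"

# List installed models (the data ollama_list_models would return)
curl -fsS --max-time 5 "$host/api/tags" || echo "Ollama not reachable at $host"

# One-shot generation (the call ollama_generate would make)
curl -fsS --max-time 30 "$host/api/generate" \
  -d '{"model": "llama3.2", "prompt": "Capital of France?", "stream": false}' \
  || echo "Ollama not reachable at $host"
```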
Phase 1: Pre-flight
Check if already applied
Check if agent/runner/src/ollama-mcp-stdio.ts exists. If it does, skip to Phase 3 (Configure).
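A minimal sketch of this idempotency check (already_applied is a name invented here, not part of the repo):

```shell
# Returns success if the skill's MCP server source is already present
already_applied() {
  [ -f "$1" ]
}

if already_applied agent/runner/src/ollama-mcp-stdio.ts; then
  echo "Already applied; skip to Phase 3 (Configure)"
fi
```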
Check prerequisites
Verify Ollama is installed and running on the host:
ollama list
If Ollama is not installed, direct the user to https://ollama.com/download.
If no models are installed, suggest pulling one:
You need at least one model. I recommend:
ollama pull gemma3:1b        # Small, fast (1GB)
ollama pull llama3.2         # Good general purpose (2GB)
ollama pull qwen3-coder:30b  # Best for code tasks (18GB)
Phase 2: Apply Code Changes
Ensure upstream remote
git remote -v
If upstream is missing, add it:
git remote add upstream https://github.com/sbusso/claudeclaw.git
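The check-then-add steps above can be collapsed into one idempotent helper (ensure_remote is a hypothetical name, not from the repo):

```shell
# Add a remote only if it is not already configured
ensure_remote() {
  git remote get-url "$1" >/dev/null 2>&1 || git remote add "$1" "$2"
}

# In the claudeclaw checkout:
# ensure_remote upstream https://github.com/sbusso/claudeclaw.git
```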
Merge the skill branch
git fetch upstream skill/ollama-tool
git merge upstream/skill/ollama-tool
This merges in:
- agent/runner/src/ollama-mcp-stdio.ts (Ollama MCP server)
- scripts/ollama-watch.sh (macOS notification watcher)
- Ollama MCP config in agent/runner/src/index.ts (allowedTools + mcpServers)
- [OLLAMA] log surfacing in src/orchestrator/container-runner.ts
- OLLAMA_HOST in .env.example
If the merge reports conflicts, resolve them by reading the conflicted files and understanding the intent of both sides.
Copy to per-group agent-runner
Existing groups have a cached copy of the agent-runner source. Copy the new files:
for dir in data/sessions/*/agent-runner-src; do
  cp agent/runner/src/ollama-mcp-stdio.ts "$dir/"
  cp agent/runner/src/index.ts "$dir/"
done
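After copying, a quick sanity check that every group actually picked up the new file (check_copied is a hypothetical helper; data/sessions is the layout assumed above):

```shell
# Report any per-group copy that is missing the MCP server source
check_copied() {
  for dir in "$1"/*/agent-runner-src; do
    [ -f "$dir/ollama-mcp-stdio.ts" ] || echo "missing in $dir"
  done
}

check_copied data/sessions
```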
Validate code changes
npm run build
./src/runtimes/docker/build.sh
Build must be clean before proceeding.
Phase 3: Configure
Set Ollama host (optional)
By default, the MCP server connects to http://host.docker.internal:11434 (Docker Desktop) with a fallback to localhost. To use a custom Ollama host, add to .env:
OLLAMA_HOST=http://your-ollama-host:11434
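The fallback order described above can be expressed as a one-liner; resolve_ollama_host is a name invented for illustration that mirrors the documented default:

```shell
# Use OLLAMA_HOST from the environment, else the Docker Desktop default
resolve_ollama_host() {
  echo "${OLLAMA_HOST:-http://host.docker.internal:11434}"
}

resolve_ollama_host
```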
Service name: Derived from the directory name: com.claudeclaw.<dirname> (macOS) / claudeclaw-<dirname> (Linux). For example, if cwd is my-assistant, the service is com.claudeclaw.my-assistant. Determine the correct service name before running the service commands below.
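The naming convention above can be sketched as a small helper (service_name is hypothetical, not part of the repo):

```shell
# Derive the per-platform service name from the directory name
service_name() {
  case "$1" in
    macos) echo "com.claudeclaw.$2" ;;
    linux) echo "claudeclaw-$2" ;;
  esac
}

service_name macos "$(basename "$PWD")"
```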
Restart the service
launchctl kickstart -k gui/$(id -u)/com.claudeclaw.<dirname>   # macOS
systemctl --user restart claudeclaw-<dirname>                  # Linux
Phase 4: Verify
Test via WhatsApp
Tell the user:
Send a message like: "use ollama to tell me the capital of France"
The agent should use ollama_list_models to find available models, then ollama_generate to get a response.
Monitor activity (optional)
Run the watcher script for macOS notifications when Ollama is used:
./scripts/ollama-watch.sh
Check logs if needed
tail -f logs/claudeclaw.log | grep -i ollama
Look for:
- Agent output: ... Ollama ... — agent used Ollama successfully
- [OLLAMA] >>> Generating — generation started (if log surfacing works)
- [OLLAMA] <<< Done — generation completed
Troubleshooting
Agent says "Ollama is not installed"
The agent is trying to run the ollama CLI inside the container instead of using the MCP tools. This means:
- The MCP server wasn't registered — check agent/runner/src/index.ts has the ollama entry in mcpServers
- The per-group source wasn't updated — re-copy files (see Phase 2)
- The container wasn't rebuilt — run ./src/runtimes/docker/build.sh
"Failed to connect to Ollama"
- Verify Ollama is running: ollama list
- Check Docker can reach the host: docker run --rm curlimages/curl curl -s http://host.docker.internal:11434/api/tags
- If using a custom host, check OLLAMA_HOST in .env
Agent doesn't use Ollama tools
The agent may not know about the tools. Try being explicit: "use the ollama_generate tool with gemma3:1b to answer: ..."