research-tool
Search the web using LLMs via OpenRouter. Use for current web data, API docs, market research, news, fact-checking, or any question that benefits from live internet access and reasoning.
```shell
# Clone the skills repo
git clone https://github.com/openclaw/skills

# Or install the skill directly into ~/.claude/skills:
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/aaronn/openclaw-search-tool" ~/.claude/skills/clawdbot-skills-research-tool && rm -rf "$T"

# Or into ~/.openclaw/skills:
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/aaronn/openclaw-search-tool" ~/.openclaw/skills/clawdbot-skills-research-tool && rm -rf "$T"
```
skills/aaronn/openclaw-search-tool/SKILL.md

OpenClaw Research Tool
Web search for OpenClaw agents, powered by OpenRouter. Ask questions in natural language, get accurate answers with cited sources. Defaults to GPT-5.2 which excels at documentation lookups and citation-heavy research.
Note: Even low-effort queries may take 1 minute or more to complete. High/xhigh reasoning can take 10+ minutes depending on complexity. This is normal — the model is searching the web, reading pages, and synthesizing an answer.
Recommended: Run research-tool in a sub-agent so your main session stays responsive:
```shell
sessions_spawn task:"research-tool 'your query here'"
```

⚠️ Never set a timeout on exec when running research-tool. Queries routinely take 1-10+ minutes. Use `yieldMs` to background it, then poll, but do NOT set a `timeout` or the process will be killed mid-search.
The `:online` model suffix gives any model live web access — it searches the web, reads pages, cites URLs, and synthesizes an answer.
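The suffix works at the OpenRouter API level too, not just through this tool. A minimal sketch of the request body (the model name and query are illustrative; sending it requires an `OPENROUTER_API_KEY` and network access, so the `curl` call is left commented):

```shell
# Build the chat-completions payload with the :online suffix on the model slug.
PAYLOAD=$(cat <<'EOF'
{
  "model": "openai/gpt-5.2:online",
  "messages": [{"role": "user", "content": "What changed in the latest Rust release?"}]
}
EOF
)
printf '%s\n' "$PAYLOAD"
# curl -s https://openrouter.ai/api/v1/chat/completions \
#   -H "Authorization: Bearer $OPENROUTER_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```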
Install
```shell
cargo install openclaw-search-tool
```
Requires the `OPENROUTER_API_KEY` env var. Get a key at https://openrouter.ai/keys
Quick start
```shell
research-tool "What are the x.com API rate limits?"
research-tool "How do I set reasoning effort parameters on OpenRouter?"
```
From an OpenClaw agent
```shell
# Best: run in a sub-agent (main session stays responsive)
sessions_spawn task:"research-tool 'your query here'"

# Or via exec — NEVER set timeout, use yieldMs to background:
exec command:"research-tool 'your query'" yieldMs:5000
# then poll the session until complete
```
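Outside an agent runtime, the same background-and-poll pattern can be done in plain shell. This is a self-contained sketch: `sleep` + `echo` stand in for a real `research-tool 'your query'` call so the example runs anywhere:

```shell
# Start the long-running query in the background, capturing both streams
( sleep 1; echo "answer with citations" ) > /tmp/research-out.txt 2>/tmp/research-err.txt &
pid=$!

# Poll until the process exits; an agent would do other work between checks
while kill -0 "$pid" 2>/dev/null; do
  sleep 1
done
wait "$pid"

cat /tmp/research-out.txt
```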
Flags
`--effort`, `-e` (default: `low`)

Controls how much the model reasons before answering. Higher effort means better analysis but slower and more tokens.
```shell
research-tool --effort low "What year was Rust 1.0 released?"
research-tool --effort medium "Explain how OpenRouter routes requests to different model providers"
research-tool --effort high "Compare tradeoffs between Opus 4.6 and gpt-5.3-codex for programming"
research-tool --effort xhigh "Deep analysis of React Server Components vs traditional SSR approaches"
```
| Level | Speed | When to use |
|---|---|---|
| `low` | ~1-3 min | Quick fact lookups, simple questions |
| `medium` | ~2-5 min | Standard research, moderate analysis |
| `high` | ~3-10 min | Deep analysis with careful reasoning |
| `xhigh` | ~5-20+ min | Maximum reasoning, complex multi-source synthesis |
Can also be set via the env var `RESEARCH_EFFORT`.
`--model`, `-m` (default: `openai/gpt-5.2:online`)

Which model to use. Defaults to GPT-5.2 with the `:online` suffix because it excels at questions where citations and accurate documentation lookups matter. The `:online` suffix enables live web search and works with any model on OpenRouter.
```shell
# Default: GPT-5.2 with web search (great for docs and cited answers)
research-tool "current weather in San Francisco"

# Claude with web search
research-tool -m "anthropic/claude-sonnet-4-20250514:online" "Summarize recent changes to the OpenAI API"

# GPT-5.2 without web search (training data only)
research-tool -m "openai/gpt-5.2" "Explain the React Server Components architecture"

# Any OpenRouter model
research-tool -m "google/gemini-2.5-pro:online" "Compare React vs Svelte in 2026"
```
Can also be set via the env var `RESEARCH_MODEL`.
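The two env vars make it easy to set session-wide defaults. In this sketch, the flag-overrides-env precedence noted in the comments is an assumption based on common CLI conventions, not confirmed behavior:

```shell
# Set defaults once per shell session
export RESEARCH_EFFORT=medium
export RESEARCH_MODEL="openai/gpt-5.2:online"
echo "defaults: effort=$RESEARCH_EFFORT model=$RESEARCH_MODEL"

# research-tool "query"               # would use the env defaults
# research-tool -e high "hard query"  # per-call flag (assumed to override env)
```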
`--system`, `-s`

Override the system prompt to give the model a specific persona or instructions.
```shell
research-tool -s "You are a senior infrastructure engineer" "Best practices for zero-downtime Kubernetes deployments"
research-tool -s "You are a Rust systems programmer" "Best async patterns for WebSocket servers"
```
`--stdin`

Read the query from stdin. Useful for long or multiline queries.
```shell
echo "Explain the OpenRouter model routing architecture" | research-tool --stdin
cat detailed-prompt.txt | research-tool --stdin
```
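For longer prompts, a heredoc keeps the multiline query readable. In this self-contained sketch, `cat -` stands in for `research-tool --stdin`:

```shell
# Write a multiline query to a file via heredoc
cat > /tmp/research-query.txt <<'EOF'
How do OpenRouter's :online models fetch and cite sources?
Context: building an agent that needs cited, current answers.
EOF

# Feed it on stdin; swap `cat -` for `research-tool --stdin` in real use
cat - < /tmp/research-query.txt
# research-tool --stdin < /tmp/research-query.txt
```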
`--max-tokens` (default: `12800`)

Maximum tokens in the response.
`--timeout` (optional, no default)

No timeout by default — queries run until the model finishes. Set this only if you need a hard upper bound (e.g. `--timeout 300`).
Output format
- stdout: Response text only (markdown with citations) — pipe-friendly
- stderr: Progress status, reasoning traces, and token usage
```
🔍 Researching with openai/gpt-5.2:online (effort: high)...
✅ Connected — waiting for response...

[response text on stdout]

📊 Tokens: 4470 prompt + 184 completion = 4654 total | ⏱ 5s
```
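Because the answer and the progress go to different streams, each can be captured independently. A self-contained sketch where `echo` stands in for research-tool's two streams:

```shell
# Simulate the stdout/stderr split, redirecting each stream to its own file
{ echo "## Answer (markdown, with citations)"; echo "progress..." >&2; } \
  > /tmp/answer.md 2>/tmp/progress.log

cat /tmp/answer.md

# Typical real usage:
# research-tool "query" > answer.md   # save answer; progress still on stderr
# research-tool "query" 2>/dev/null   # answer only, suppress progress
```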
Status indicators
- `🔍 Researching...` — request sent to OpenRouter
- `✅ Connected — waiting for response...` — server accepted the request, model is searching/thinking
- `⏳ 15s... ⏳ 30s...` — elapsed time ticks (only in interactive terminals, not in agent exec)
- `❌ Connection to OpenRouter failed` — couldn't reach OpenRouter (network issue)
- `❌ Connection to OpenRouter lost` — connection dropped while waiting; retry the query
Tips for better results
- Write in natural language. "What are the best practices for Rust error handling and when should you use anyhow vs thiserror?" works better than keyword-style queries.
- Provide maximum context. The model starts from zero. Include background, what you already know, and all related sub-questions. Detailed prompts massively outperform vague ones.
- Use effort levels appropriately. `low` for quick facts, `high` for real research, `xhigh` only for complex multi-source analysis.
- Use `-s` for domain expertise. A specific persona produces noticeably better domain-specific answers.
Cost
~$0.01–0.05 per query. Token usage is printed to stderr after each query.