Skillshub tavily
Use this skill for web search, extraction, mapping, crawling, and research via Tavily's REST API, either when no built-in web tool is available or when Tavily's LLM-friendly output format is beneficial.
Installation

```shell
# Clone the full repository
git clone https://github.com/ComeOnOliver/skillshub

# Or copy just this skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/ComeOnOliver/skillshub "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/skills/intellectronica/agent-skills/tavily" ~/.claude/skills/comeonoliver-skillshub-tavily \
  && rm -rf "$T"
```
skills/intellectronica/agent-skills/tavily/SKILL.md

Tavily
Purpose
Provide a curl-based interface to Tavily’s REST API for web search, extraction, mapping, crawling, and optional research. Return structured results suitable for LLM workflows and multi-step investigations.
When to Use
- Use when a task needs live web information, site extraction, mapping, or crawling.
- Use when web searches are needed and no built-in tool is available, or when Tavily’s LLM-friendly output (summaries, chunks, sources, citations) is beneficial.
- Use when a task requires structured search results, extraction, or site discovery from Tavily.
Required Environment
- Require `TAVILY_API_KEY` in the environment.
- If `TAVILY_API_KEY` is missing, prompt the user to provide the API key before proceeding.
Base URL and Auth
- Base URL: `https://api.tavily.com`
- Authentication: `Authorization: Bearer $TAVILY_API_KEY`
- Content type: `Content-Type: application/json`
- Optional project tracking: add `X-Project-ID: <project-id>` if project attribution is needed.
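The base URL and headers above can be wrapped once so that every call stays short. A minimal Python sketch using only the standard library (`build_request` and `tavily_post` are illustrative names, not part of the skill):

```python
import json
import os
import urllib.request

BASE_URL = "https://api.tavily.com"

def build_request(endpoint: str, payload: dict, api_key: str) -> urllib.request.Request:
    """Construct a POST request with Tavily's required headers."""
    return urllib.request.Request(
        BASE_URL + endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

def tavily_post(endpoint: str, payload: dict) -> dict:
    """Send the request and decode the JSON response (requires TAVILY_API_KEY)."""
    req = build_request(endpoint, payload, os.environ["TAVILY_API_KEY"])
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)
```

Any of the endpoints below can then be called as, for example, `tavily_post("/search", {"query": "..."})`.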
Tool Mapping (Tavily REST)
1) search → POST /search
Use for web search with optional answer and content extraction.
Recommended minimal request:
```shell
curl -sS -X POST "https://api.tavily.com/search" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TAVILY_API_KEY" \
  -d '{
    "query": "<query>",
    "search_depth": "basic",
    "max_results": 5,
    "include_answer": true,
    "include_raw_content": false,
    "include_images": false
  }'
```
Key parameters (all optional unless noted):
- `query` (required): search text
- `search_depth`: `basic` | `advanced` | `fast` | `ultra-fast`
- `chunks_per_source`: 1–3 (advanced only)
- `max_results`: 0–20
- `topic`: `general` | `news` | `finance`
- `time_range`: `day` | `week` | `month` | `year` | `d` | `w` | `m` | `y`
- `start_date`, `end_date`: `YYYY-MM-DD`
- `include_answer`: `false` | `true` | `basic` | `advanced`
- `include_raw_content`: `false` | `true` | `markdown` | `text`
- `include_images`: boolean
- `include_image_descriptions`: boolean
- `include_favicon`: boolean
- `include_domains`, `exclude_domains`: string arrays
- `country`: country name (`general` topic only)
- `auto_parameters`: boolean
- `include_usage`: boolean
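The value ranges above lend themselves to a small validating payload builder, so that bad enum values fail before a request is spent. A sketch (the function name is illustrative; allowed values are copied from the list above):

```python
# Allowed values copied from the parameter list above.
VALID_SEARCH_DEPTHS = {"basic", "advanced", "fast", "ultra-fast"}
VALID_TOPICS = {"general", "news", "finance"}

def search_payload(query: str, **options) -> dict:
    """Build a /search body with conservative defaults, rejecting bad values."""
    payload = {
        "query": query,
        "search_depth": "basic",
        "max_results": 5,
        "include_answer": True,
        "include_raw_content": False,
        "include_images": False,
    }
    payload.update(options)  # e.g. topic="news", time_range="week"
    if payload["search_depth"] not in VALID_SEARCH_DEPTHS:
        raise ValueError(f"bad search_depth: {payload['search_depth']}")
    if "topic" in payload and payload["topic"] not in VALID_TOPICS:
        raise ValueError(f"bad topic: {payload['topic']}")
    if not 0 <= payload["max_results"] <= 20:
        raise ValueError("max_results must be 0-20")
    return payload
```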
Expected response fields:
- `answer` (if requested)
- `results[]` with `title`, `url`, `content`, `score`, `raw_content` (optional), `favicon` (optional)
- `response_time`, `request_id`, `usage`
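A sketch of flattening that response shape into score-ordered tuples for downstream use (the sample response below is fabricated for illustration):

```python
def summarize_results(response: dict) -> list:
    """Reduce a /search response to (title, url, score) tuples, best first."""
    hits = response.get("results", [])
    return sorted(
        [(r["title"], r["url"], r.get("score", 0.0)) for r in hits],
        key=lambda t: t[2],
        reverse=True,
    )

# Fabricated sample shaped like the fields listed above.
search_sample = {
    "answer": "Example answer.",
    "results": [
        {"title": "A", "url": "https://a.example", "content": "...", "score": 0.61},
        {"title": "B", "url": "https://b.example", "content": "...", "score": 0.92},
    ],
    "response_time": 0.8,
    "request_id": "req_123",
}
```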
2) extract → POST /extract
Use for extracting content from specific URLs.
```shell
curl -sS -X POST "https://api.tavily.com/extract" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TAVILY_API_KEY" \
  -d '{
    "urls": ["https://example.com/article"],
    "query": "<optional intent for reranking>",
    "chunks_per_source": 3,
    "extract_depth": "basic",
    "format": "markdown",
    "include_images": false,
    "include_favicon": false
  }'
```
Key parameters:
- `urls` (required): array of URLs
- `query`: rerank chunks by intent
- `chunks_per_source`: 1–5 (only when `query` provided)
- `extract_depth`: `basic` | `advanced`
- `format`: `markdown` | `text`
- `timeout`: 1–60 seconds
- `include_usage`: boolean
Expected response fields:
- `results[]` with `url`, `raw_content`, `images`, `favicon`
- `failed_results[]`
- `response_time`, `request_id`, `usage`
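Because /extract reports per-URL failures in a separate list, callers should check both. A sketch (the sample response is fabricated):

```python
def split_extractions(response: dict) -> tuple:
    """Map url -> raw_content for successes; return failed entries separately."""
    ok = {r["url"]: r.get("raw_content", "") for r in response.get("results", [])}
    failed = response.get("failed_results", [])
    return ok, failed

# Fabricated sample shaped like the fields listed above.
extract_sample = {
    "results": [{"url": "https://a.example/p", "raw_content": "# Page"}],
    "failed_results": [{"url": "https://b.example/q", "error": "fetch failed"}],
    "response_time": 1.2,
    "request_id": "req_456",
}
```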
3) map → POST /map
Use for generating a site map (URL discovery only).
```shell
curl -sS -X POST "https://api.tavily.com/map" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TAVILY_API_KEY" \
  -d '{
    "url": "https://docs.tavily.com",
    "max_depth": 1,
    "max_breadth": 20,
    "limit": 50,
    "allow_external": true
  }'
```
Key parameters:
- `url` (required)
- `instructions`: natural language guidance (raises cost)
- `max_depth`: 1–5
- `max_breadth`: 1+
- `limit`: 1+
- `select_paths`, `select_domains`, `exclude_paths`, `exclude_domains`: arrays of regex strings
- `allow_external`: boolean
- `timeout`: 10–150 seconds
- `include_usage`: boolean
Expected response fields:
- `base_url`
- `results[]` (list of URLs)
- `response_time`, `request_id`, `usage`
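Since /map returns bare URLs, a common follow-up is to dedupe them and batch them for /extract. A sketch (the batch size is an illustrative choice, not an API limit documented here):

```python
def batch_urls(urls: list, batch_size: int = 20) -> list:
    """Split /map results into /extract-sized batches, dropping duplicates."""
    unique = sorted(set(urls))
    return [unique[i:i + batch_size] for i in range(0, len(unique), batch_size)]
```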
4) crawl → POST /crawl
Use for site traversal with built-in extraction.
```shell
curl -sS -X POST "https://api.tavily.com/crawl" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TAVILY_API_KEY" \
  -d '{
    "url": "https://docs.tavily.com",
    "instructions": "Find all pages about the Python SDK",
    "max_depth": 1,
    "max_breadth": 20,
    "limit": 50,
    "extract_depth": "basic",
    "format": "markdown",
    "include_images": false
  }'
```
Key parameters:
- `url` (required)
- `instructions`: optional; raises cost and enables `chunks_per_source`
- `chunks_per_source`: 1–5 (only with `instructions`)
- `max_depth`, `max_breadth`, `limit`: same as map
- `extract_depth`: `basic` | `advanced`
- `format`: `markdown` | `text`
- `include_images`, `include_favicon`, `allow_external`
- `timeout`: 10–150 seconds
- `include_usage`: boolean
Expected response fields:
- `base_url`
- `results[]` with `url`, `raw_content`, `favicon`
- `response_time`, `request_id`, `usage`
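Crawl responses can be large. A sketch of trimming each page's `raw_content` to a budget before passing it to a model (the character limit is an arbitrary illustration):

```python
def trim_crawl(response: dict, max_chars: int = 4000) -> list:
    """Return (url, truncated_content) pairs with an explicit truncation marker."""
    pages = []
    for r in response.get("results", []):
        text = r.get("raw_content") or ""
        if len(text) > max_chars:
            text = text[:max_chars] + "\n[truncated]"
        pages.append((r["url"], text))
    return pages
```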
Optional Research Workflow (Deep Investigation)
Use when a query needs multi-step analysis and citations.
create research task → POST /research
```shell
curl -sS -X POST "https://api.tavily.com/research" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TAVILY_API_KEY" \
  -d '{
    "input": "<research question>",
    "model": "auto",
    "stream": false,
    "citation_format": "numbered"
  }'
```
Expected response fields:
- `request_id`
- `created_at`
- `status` (`pending`)
- `input`
- `model`
- `response_time`
get research status → GET /research/{request_id}
```shell
curl -sS -X GET "https://api.tavily.com/research/<request_id>" \
  -H "Authorization: Bearer $TAVILY_API_KEY"
```
Expected response fields:
- `status`: `completed`
- `content`: report text or structured object
- `sources[]`: `{ title, url, favicon }`
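The create/poll pair above suggests a simple polling loop. A sketch with the HTTP call injected as a function so the control flow stays testable (`poll_research` and `fetch_status` are illustrative names):

```python
import time

def poll_research(request_id: str, fetch_status, interval: float = 2.0, max_polls: int = 60):
    """Call fetch_status(request_id) until status == 'completed' or attempts run out."""
    for _ in range(max_polls):
        result = fetch_status(request_id)
        if result.get("status") == "completed":
            return result
        time.sleep(interval)
    raise TimeoutError(f"research {request_id} did not complete")
```

In practice `fetch_status` would issue the GET request shown above.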
streaming research (SSE)
Set `"stream": true` in the POST body and use curl with `-N` to stream events:

```shell
curl -N -X POST "https://api.tavily.com/research" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TAVILY_API_KEY" \
  -d '{"input":"<question>","stream":true,"model":"pro"}'
```
Handle SSE events (tool calls, tool responses, content chunks, sources, done).
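A minimal sketch of decoding such a stream line by line, per the standard SSE framing (`data:` lines separated by blank lines). The sample events and the `[DONE]` sentinel check are fabricated assumptions, not documented Tavily behavior:

```python
import json

def parse_sse(lines):
    """Yield decoded JSON payloads from 'data:' lines of an SSE stream."""
    for line in lines:
        line = line.strip()
        if line.startswith("data:"):
            body = line[len("data:"):].strip()
            if body and body != "[DONE]":  # assumed sentinel; skip if present
                yield json.loads(body)

# Fabricated sample events for illustration.
sse_sample = [
    'data: {"type": "content", "chunk": "Tavily"}',
    "",
    'data: {"type": "done"}',
]
```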
Usage Notes
- Treat `search`, `extract`, `map`, and `crawl` as the primary endpoints for discovery and content retrieval.
- Return structured results with URLs, titles, and summaries for easy downstream use.
- Default to conservative parameters (`search_depth: basic`, `max_results: 5`) unless deeper recall is needed.
- Reuse consistent request bodies across calls to keep results predictable.
Error Handling
- If any request returns 401/403, prompt for or re-check `TAVILY_API_KEY`.
- If timeouts occur, reduce `max_depth`/`limit` or use `search_depth: basic`.
- If responses are too large, lower `max_results` or `chunks_per_source`.
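These fallback rules can be encoded directly. A sketch mapping an HTTP status to the parameter adjustment to try next (the specific timeout and payload-size status codes are illustrative assumptions, as are the halving thresholds):

```python
def adjust_for_error(status_code: int, params: dict) -> dict:
    """Return a reduced copy of the request params per the rules above."""
    out = dict(params)
    if status_code in (401, 403):
        raise PermissionError("check TAVILY_API_KEY")
    if status_code in (408, 504):  # assumed timeout codes: shrink the work
        out["search_depth"] = "basic"
        if "max_depth" in out:
            out["max_depth"] = max(1, out["max_depth"] - 1)
        if "limit" in out:
            out["limit"] = max(1, out["limit"] // 2)
    elif status_code == 413:  # assumed too-large code: fewer results/chunks
        if "max_results" in out:
            out["max_results"] = max(1, out["max_results"] // 2)
        if "chunks_per_source" in out:
            out["chunks_per_source"] = max(1, out["chunks_per_source"] - 1)
    return out
```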