rival-search-mcp
Deterministic deep research via RivalSearchMCP. 10 tools: 5-engine web search (DuckDuckGo/Bing/Yahoo/Mojeek/Wikipedia), 9-platform social search (Reddit/HN/StackOverflow/Dev.to/Medium/ProductHunt/Bluesky/Lobste.rs/Lemmy), 5-source news (Google/Bing/Guardian/GDELT/DDG), 5 academic DBs (OpenAlex/CrossRef/arXiv/PubMed/EuropePMC), GitHub search, website mapping, content extraction with OCR, and persistent research workspaces. No API keys required. Use when the user needs web research, competitive analysis, content discovery, or academic paper search.
git clone https://github.com/damionrashford/RivalSearchMCP
T=$(mktemp -d) && git clone --depth=1 https://github.com/damionrashford/RivalSearchMCP "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/rival-search-mcp" ~/.claude/skills/damionrashford-rivalsearchmcp-rival-search-mcp && rm -rf "$T"
skills/rival-search-mcp/SKILL.md

RivalSearchMCP
You have access to 10 research tools via the CLI at scripts/cli.py. Run all commands with uv run scripts/cli.py.
Every tool returns deterministic, auditable output. There is no in-server LLM — you're the one doing the synthesis.
How to invoke tools
uv run scripts/cli.py call-tool <tool_name> --flag value
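That invocation pattern can be wrapped in a small Python helper that builds the argv list before handing it to subprocess. This is a minimal sketch, not part of the CLI itself; the web_search example and its --query flag are illustrative assumptions — check resources/search.md for the real flag names.

```python
import subprocess

def build_call(tool_name, **flags):
    """Build the argv for: uv run scripts/cli.py call-tool <tool_name> --flag value"""
    cmd = ["uv", "run", "scripts/cli.py", "call-tool", tool_name]
    for flag, value in flags.items():
        # Map Python identifiers to CLI flags: session_id -> --session-id
        cmd += [f"--{flag.replace('_', '-')}", str(value)]
    return cmd

# Hypothetical call; pass the result to subprocess.run(argv) to execute it.
argv = build_call("web_search", query="rust async runtimes")
print(argv)
```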
Available tools
- web_search — concurrent search across DuckDuckGo, Bing, Yahoo, Mojeek, Wikipedia. Use for general web queries.
- social_search — Reddit, Hacker News, Stack Overflow, Dev.to, Medium, Product Hunt, Bluesky, Lobste.rs, Lemmy. Use for community discussions.
- news_aggregation — Google News, Bing News, The Guardian, GDELT, DuckDuckGo News. Use for current events. Accepts --time-range day|week|month|anytime.
- github_search — search public GitHub repos. Use for code, libraries, projects.
- map_website — crawl a site in map/docs/research mode. Use to explore site structure or documentation.
- content_operations — one tool, six ops (retrieve, stream, analyze, extract, score, find_conflicts). Use to get full page content, rate source quality, or surface disagreements between sources.
- document_analysis — extract text from PDFs, Word docs, images (image OCR via EasyOCR). Use for document processing.
- research_topic — two modes: topic (search + fetch + findings) and entity (fan out to 8 sources for a unified profile). Pass --session-id to auto-save findings.
- scientific_research — OpenAlex, CrossRef, arXiv, PubMed, Europe PMC (papers) + Kaggle, HuggingFace, Dataverse, Zenodo (datasets).
- research_memory — persistent workspaces with start/add/get/list/delete operations. Use to iterate research across calls.
When to chain tools
- Found a URL from search? → content_operations --operation retrieve --url <url>
- Want to assess source trust before using results? → content_operations --operation score --urls '[…]'
- Two sources seem to disagree? → content_operations --operation find_conflicts --urls '[…]'
- Found a PDF link? → document_analysis --url <url>
- Need to explore a website? → map_website --url <url> --mode docs
- Doing iterative research? → research_memory --operation start --topic "..." once, then pass --session-id to research_topic on every call.
- Need a unified entity profile in one shot? → research_topic --mode entity --topic "OpenAI"
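The iterative-research chain above can be sketched as two command builders: one for opening the workspace, one for each follow-up research call. This is an assumption-laden sketch — in practice the session id would be read from the start call's output (its exact format isn't specified here), so "sess-123" below is a placeholder.

```python
def start_session(topic):
    """argv for: research_memory --operation start --topic "..." (run once)."""
    return ["uv", "run", "scripts/cli.py", "call-tool", "research_memory",
            "--operation", "start", "--topic", topic]

def research_call(topic, session_id):
    """argv for research_topic with --session-id so findings auto-save."""
    return ["uv", "run", "scripts/cli.py", "call-tool", "research_topic",
            "--topic", topic, "--session-id", session_id]

# "sess-123" is a placeholder for the id returned by the start call.
print(start_session("vector databases"))
print(research_call("vector databases", "sess-123"))
```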
Tool reference
For full flags, types, and defaults for each tool, read:
- resources/search.md — web_search, social_search, news_aggregation, github_search, map_website
- resources/content.md — content_operations, document_analysis
- resources/research.md — research_topic, scientific_research, research_memory
Output
All tools return structured text to stdout. Errors go to stderr. Exit codes: 0 success, 1 tool error, 2 connection failed.
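A caller can map those exit codes back to their documented meanings. This sketch uses a stand-in command (python -c) in place of the real CLI so it runs anywhere; only the 0/1/2 mapping comes from the text above.

```python
import subprocess
import sys

# Exit-code meanings as documented: 0 success, 1 tool error, 2 connection failed.
EXIT_MEANINGS = {0: "success", 1: "tool error", 2: "connection failed"}

def run_and_classify(argv):
    """Run a command, capture stdout/stderr, and classify its exit code."""
    proc = subprocess.run(argv, capture_output=True, text=True)
    meaning = EXIT_MEANINGS.get(proc.returncode, "unknown")
    return proc.returncode, meaning, proc.stdout, proc.stderr

# Stand-in for: uv run scripts/cli.py call-tool ...
code, meaning, out, err = run_and_classify(
    [sys.executable, "-c", "print('ok')"]
)
print(code, meaning)  # 0 success
```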