```shell
git clone https://github.com/ComeOnOliver/skillshub
T=$(mktemp -d) && git clone --depth=1 https://github.com/ComeOnOliver/skillshub "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/K-Dense-AI/claude-scientific-skills/parallel-web" ~/.claude/skills/comeonoliver-skillshub-parallel-web && rm -rf "$T"
```
skills/K-Dense-AI/claude-scientific-skills/parallel-web/SKILL.md

Parallel Web Systems API
Overview
This skill provides access to Parallel Web Systems APIs for web search, deep research, and content extraction. It is the primary tool for all web-related operations in the scientific writer workflow.
Primary interface: Parallel Chat API (OpenAI-compatible) for search and research. Secondary interface: Extract API for URL verification and special cases only.
API Documentation: https://docs.parallel.ai
API Key: https://platform.parallel.ai
Environment Variable: PARALLEL_API_KEY
When to Use This Skill
Use this skill for ALL of the following:
- Web Search: Any query that requires searching the internet for information
- Deep Research: Comprehensive research reports on any topic
- Market Research: Industry analysis, competitive intelligence, market data
- Current Events: News, recent developments, announcements
- Technical Information: Documentation, specifications, product details
- Statistical Data: Market sizes, growth rates, industry figures
- General Information: Company profiles, facts, comparisons
Use Extract API only for:
- Citation verification (confirming a specific URL's content)
- Special cases where you need raw content from a known URL
Do NOT use this skill for:
- Academic-specific paper searches (use research-lookup, which routes to Perplexity for purely academic queries)
- Google Scholar / PubMed database searches (use the citation-management skill)
Two Capabilities
1. Web Search (search command)

Search the web via the Parallel Chat API (base model) and get a synthesized summary with cited sources.
Best for: General web searches, current events, fact-finding, technical lookups, news, market data.
```shell
# Basic search
python scripts/parallel_web.py search "latest advances in quantum computing 2025"

# Use core model for more complex queries
python scripts/parallel_web.py search "compare EV battery chemistries NMC vs LFP" --model core

# Save results to file
python scripts/parallel_web.py search "renewable energy policy updates" -o results.txt

# JSON output for programmatic use
python scripts/parallel_web.py search "AI regulation landscape" --json -o results.json
```
Key Parameters:
- objective: Natural language description of what you want to find
- --model: Chat model to use (base default, or core for deeper research)
- -o: Output file path
- --json: Output as JSON
Response includes: Synthesized summary organized by themes, with inline citations and a sources list.
2. Deep Research (research command)

Run comprehensive multi-source research via the Parallel Chat API (core model) that produces detailed intelligence reports with citations.
Best for: Market research, comprehensive analysis, competitive intelligence, technology surveys, industry reports, any research question requiring synthesis of multiple sources.
```shell
# Default deep research (core model)
python scripts/parallel_web.py research "comprehensive analysis of the global EV battery market"

# Save research report to file
python scripts/parallel_web.py research "AI adoption in healthcare 2025" -o report.md

# Use base model for faster, lighter research
python scripts/parallel_web.py research "latest funding rounds in AI startups" --model base

# JSON output
python scripts/parallel_web.py research "renewable energy storage market in Europe" --json -o data.json
```
Key Parameters:
- query: Research question or topic
- --model: Chat model to use (core default for deep research, or base for faster results)
- -o: Output file path
- --json: Output as JSON
3. URL Extraction (extract command) — Verification Only

Extract content from specific URLs. Use only for citation verification and special cases. For general research, use search or research instead.
```shell
# Verify a citation's content
python scripts/parallel_web.py extract "https://example.com/article" --objective "key findings"

# Get full page content for verification
python scripts/parallel_web.py extract "https://docs.example.com/api" --full-content

# Save extraction to file
python scripts/parallel_web.py extract "https://paper-url.com" --objective "methodology" -o extracted.md
```
Model Selection Guide
The Chat API supports two research models. Use base for most searches and core for deep research.
| Model | Latency | Strengths | Use When |
|---|---|---|---|
| base | 15s-100s | Standard research, factual queries | Web searches, quick lookups |
| core | 60s-5min | Complex research, multi-source synthesis | Deep research, comprehensive reports |
Recommendations:
- search command defaults to base — fast, good for most queries
- research command defaults to core — thorough, good for comprehensive reports
- Override with --model when you need different depth/speed tradeoffs
Python API Usage
Search
```python
from parallel_web import ParallelSearch

searcher = ParallelSearch()
result = searcher.search(
    objective="Find latest information about transformer architectures in NLP",
    model="base",
)
if result["success"]:
    print(result["response"])  # Synthesized summary
    for src in result["sources"]:
        print(f"  {src['title']}: {src['url']}")
```
Deep Research
```python
from parallel_web import ParallelDeepResearch

researcher = ParallelDeepResearch()
result = researcher.research(
    query="Comprehensive analysis of AI regulation in the EU and US",
    model="core",
)
if result["success"]:
    print(result["response"])  # Full research report
    print(f"Citations: {result['citation_count']}")
```
Extract (Verification Only)
```python
from parallel_web import ParallelExtract

extractor = ParallelExtract()
result = extractor.extract(
    urls=["https://docs.example.com/api-reference"],
    objective="API authentication methods and rate limits",
)
if result["success"]:
    for r in result["results"]:
        print(r["excerpts"])
```
MANDATORY: Save All Results to Sources Folder
Every web search and deep research result MUST be saved to the project's sources/ folder.
This ensures all research is preserved for reproducibility, auditability, and context window recovery.
Saving Rules
| Operation | Flag Target | Filename Pattern |
|---|---|---|
| Web Search | -o sources/ | search_YYYYMMDD_HHMMSS_topic.md |
| Deep Research | -o sources/ | research_YYYYMMDD_HHMMSS_topic.md |
| URL Extract | -o sources/ | extract_YYYYMMDD_HHMMSS_topic.md |
How to Save (Always Use the -o Flag)

CRITICAL: Every call to parallel_web.py MUST include the -o flag pointing to the sources/ folder.
```shell
# Web search — ALWAYS save to sources/
python scripts/parallel_web.py search "latest advances in quantum computing 2025" \
  -o sources/search_20250217_143000_quantum_computing.md

# Deep research — ALWAYS save to sources/
python scripts/parallel_web.py research "comprehensive analysis of the global EV battery market" \
  -o sources/research_20250217_144000_ev_battery_market.md

# URL extraction (verification only) — save to sources/
python scripts/parallel_web.py extract "https://example.com/article" --objective "key findings" \
  -o sources/extract_20250217_143500_example_article.md
```
Why Save Everything
- Reproducibility: Every claim in the final document can be traced back to its raw source material
- Context Window Recovery: If context is compacted mid-task, saved results can be re-read from sources/
- Audit Trail: The sources/ folder provides complete transparency into how information was gathered
- Reuse Across Sections: Saved research can be referenced by multiple sections without duplicate API calls
- Cost Efficiency: Avoid redundant API calls by checking sources/ for existing results
- Peer Review Support: Reviewers can verify the research backing every claim
Logging
When saving research results, always log:
```
[HH:MM:SS] SAVED: Search results to sources/search_20250217_143000_quantum_computing.md
[HH:MM:SS] SAVED: Deep research report to sources/research_20250217_144000_ev_battery_market.md
```
Before Making a New Query, Check Sources First
Before calling parallel_web.py, check if a relevant result already exists in sources/:
```shell
ls sources/  # Check existing saved results
```
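The same check can be done in Python. This is a minimal sketch, assuming saved results are matched by a filename substring; that matching rule is an assumption, not part of this skill.

```python
from pathlib import Path

def existing_results(topic_slug: str, sources_dir: str = "sources") -> list[str]:
    """Return saved .md results whose filenames contain the topic slug (sketch)."""
    root = Path(sources_dir)
    if not root.is_dir():
        return []  # No sources/ folder yet, so nothing to reuse
    return sorted(str(p) for p in root.glob("*.md") if topic_slug in p.name)
```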
Integration with Scientific Writer
Routing Table
| Task | Tool | Command |
|---|---|---|
| Web search (any) | parallel-web | parallel_web.py search |
| Deep research | parallel-web | parallel_web.py research |
| Citation verification | parallel-web | parallel_web.py extract |
| Academic paper search | research-lookup | Routes to Perplexity sonar-pro-search |
| DOI/metadata lookup | parallel-web | Extract from DOI URLs (verification) |
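The routing table can be sketched as a simple lookup. The `route` function and its task keys are hypothetical illustrations; the real workflow routes through skill selection, not code.

```python
# Hypothetical lookup mirroring the routing table above.
ROUTES = {
    "web_search": ("parallel-web", "parallel_web.py search"),
    "deep_research": ("parallel-web", "parallel_web.py research"),
    "citation_verification": ("parallel-web", "parallel_web.py extract"),
    "academic_paper_search": ("research-lookup", "Perplexity sonar-pro-search"),
}

def route(task: str) -> tuple[str, str]:
    """Return (tool, command) for a task type."""
    return ROUTES[task]
```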
When Writing Scientific Documents
- Before writing any section, use search or research to gather background information — save results to sources/
- For academic citations, use research-lookup (which routes academic queries to Perplexity) — save results to sources/
- For citation verification (confirming a specific URL), use parallel_web.py extract — save results to sources/
- For current market/industry data, use parallel_web.py research --model core — save results to sources/
- Before any new query, check sources/ for existing results to avoid duplicate API calls
Environment Setup
```shell
# Required: Set your Parallel API key
export PARALLEL_API_KEY="your_api_key_here"

# Required Python packages
pip install openai        # For Chat API (search/research)
pip install parallel-web  # For Extract API (verification only)
```
Get your API key at https://platform.parallel.ai
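A fail-fast check for the environment variable might look like the following; `require_api_key` is a hypothetical helper, and scripts/parallel_web.py performs its own validation.

```python
import os

def require_api_key() -> str:
    """Read PARALLEL_API_KEY, failing with a clear message if unset (sketch)."""
    key = os.environ.get("PARALLEL_API_KEY")
    if not key:
        raise RuntimeError("PARALLEL_API_KEY not set: export it before running")
    return key
```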
Error Handling
The script handles errors gracefully and returns structured error responses:
```json
{
  "success": false,
  "error": "Error description",
  "timestamp": "2025-02-14 12:00:00"
}
```
Common issues:
- PARALLEL_API_KEY not set: Set the environment variable
- openai not installed: Run pip install openai
- parallel-web not installed: Run pip install parallel-web (only needed for extract)
- Rate limit exceeded: Wait and retry (default: 300 req/min for Chat API)
Complementary Skills
| Skill | Use For |
|---|---|
| research-lookup | Academic paper searches (routes to Perplexity for scholarly queries) |
| citation-management | Google Scholar, PubMed, CrossRef database searches |
| | Systematic literature reviews across academic databases |
| | Generate diagrams from research findings |