# Awesome-claude · aggregated-search

## Install

Source: clone the upstream repo:

```shell
git clone https://github.com/tsaol/awesome-claude
```

Claude Code: install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/tsaol/awesome-claude "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/skills/aggregated-search" ~/.claude/skills/tsaol-awesome-claude-aggregated-search \
  && rm -rf "$T"
```

Manifest: `skills/aggregated-search/SKILL.md`

## Source content

# Aggregated Search Skill

Multi-source content aggregation for hot topics research. Supports 15+ data sources.

## Usage

```
/aggregated-search <keyword> [options]
```

Options:

- `--sources=all`: search all sources (default)
- `--sources=github,hn,reddit`: search only the listed sources
- `--limit=50`: max results per source (default: 50)
- `--days=7`: content age limit in days (default: 7)
- `--lang=en`: language filter: `en`, `zh`, or `all` (default: `all`)
- `--expand`: enable query expansion (auto-generate related terms)

Examples:

```
/aggregated-search "agentic AI"
/aggregated-search "LLM agents" --sources=github,hn,arxiv
/aggregated-search "大模型" --sources=chinese --lang=zh
/aggregated-search "RAG" --limit=100 --days=30
```
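The command syntax above can be parsed with a small helper. This is a hypothetical sketch (the skill itself parses options through its SKILL.md instructions, not through code); `parse_command` and its regexes are illustrative names:

```python
import re

# Defaults taken from the Options list above.
DEFAULTS = {"sources": "all", "limit": 50, "days": 7, "lang": "all", "expand": False}

def parse_command(cmd: str) -> dict:
    """Parse `/aggregated-search <keyword> [options]` into a dict of settings."""
    opts = dict(DEFAULTS)
    # The keyword may be quoted (to allow spaces) or a single bare token.
    m = re.match(r'/aggregated-search\s+(?:"([^"]+)"|(\S+))', cmd)
    if not m:
        raise ValueError("expected: /aggregated-search <keyword> [options]")
    opts["keyword"] = m.group(1) or m.group(2)
    # Flags look like --name or --name=value.
    for flag, value in re.findall(r"--(\w+)(?:=(\S+))?", cmd):
        if flag == "expand":
            opts["expand"] = True
        elif flag in ("limit", "days"):
            opts[flag] = int(value)
        else:
            opts[flag] = value
    return opts

opts = parse_command('/aggregated-search "LLM agents" --sources=github,hn,arxiv')
# opts["keyword"] is "LLM agents"; opts["sources"] is "github,hn,arxiv"
```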

## Supported Sources (15+)

### Code & Projects

| Source | File | API | Free |
|--------|------|-----|------|
| GitHub | github.md | gh api | |
| Papers With Code | papers-with-code.md | REST | |

### Tech Communities

| Source | File | API | Free |
|--------|------|-----|------|
| Hacker News | hackernews.md | Algolia | |
| Reddit | reddit.md | JSON | |
| DEV.to | devto.md | REST | |
| Product Hunt | producthunt.md | GraphQL | |

### Academic

| Source | File | API | Free |
|--------|------|-----|------|
| ArXiv | arxiv.md | XML | |
| Semantic Scholar | semantic-scholar.md | REST | |
| Papers With Code | papers-with-code.md | REST | |

### News & Media

| Source | File | API | Free |
|--------|------|-----|------|
| Tech News (Multi) | tech-news.md | WebFetch | |
| Medium | medium.md | WebFetch | |

### Chinese Sources (中文源)

| Source | File | API | Free |
|--------|------|-----|------|
| 36氪 (36Kr) / 少数派 (SSPai) / 掘金 (Juejin) / 知乎 (Zhihu) / 机器之心 (Jiqizhixin) | chinese-tech.md | Mixed | |

### Social Media

| Source | File | API | Free |
|--------|------|-----|------|
| Twitter/X | twitter.md | Nitter | |
| YouTube | youtube.md | WebFetch | |

### Meta Search (Recommended)

| Source | File | API | Free |
|--------|------|-----|------|
| Tavily | tavily.md | REST | 1000/mo |

## Source Groups

Use these shortcuts for common combinations:

| Group | Sources |
|-------|---------|
| `--sources=code` | github, papers-with-code |
| `--sources=community` | hn, reddit, devto |
| `--sources=academic` | arxiv, semantic-scholar, papers-with-code |
| `--sources=news` | tavily, tech-news, medium |
| `--sources=chinese` | 36kr, sspai, juejin, zhihu, jiqizhixin |
| `--sources=social` | twitter, youtube, producthunt |
| `--sources=all` | All sources |
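The group shortcuts above are just aliases for lists of source names, so resolving a `--sources=` value can be sketched as a lookup table. A minimal illustration (the group-to-source mapping is taken from the table above; `resolve_sources` is a hypothetical helper name):

```python
# Group shortcuts resolved to concrete source names (from the Source Groups table).
SOURCE_GROUPS = {
    "code": ["github", "papers-with-code"],
    "community": ["hn", "reddit", "devto"],
    "academic": ["arxiv", "semantic-scholar", "papers-with-code"],
    "news": ["tavily", "tech-news", "medium"],
    "chinese": ["36kr", "sspai", "juejin", "zhihu", "jiqizhixin"],
    "social": ["twitter", "youtube", "producthunt"],
}

def resolve_sources(spec: str) -> list[str]:
    """Expand a --sources= value: 'all', group names, or explicit source names."""
    if spec == "all":
        names = [s for group in SOURCE_GROUPS.values() for s in group]
    else:
        names = []
        for token in spec.split(","):
            # Unknown tokens are treated as explicit source names.
            names.extend(SOURCE_GROUPS.get(token, [token]))
    return list(dict.fromkeys(names))  # preserve order, drop duplicates

resolve_sources("code,hn")  # → ['github', 'papers-with-code', 'hn']
```

Note that `papers-with-code` appears in both the `code` and `academic` groups, which is why the final deduplication pass matters for `--sources=all`.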

## Workflow

### Step 0: Query Expansion (if `--expand`)

If `--expand` is enabled, generate related terms before searching:

```
Original: "agentic commerce"
    ↓
Expanded (max 5):
  - agentic commerce (original)
  - AI shopping agent
  - conversational commerce
  - e-commerce AI assistant
  - 智能购物
```

Use the prompt in `sources/query-expansion.md` to generate at most 4 related terms (5 total, including the original).

### Step 1: Parse Input

Extract the keyword, sources, limit, days, and language from the user input.

### Step 2: Parallel Search

CRITICAL: Search all sources in parallel by issuing multiple tool calls in a single message.

For each source:

1. Read the source instructions from `sources/{source}.md`
2. Execute the API call or WebFetch
3. Parse the results
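The skill achieves parallelism through batched tool calls, but the same fan-out pattern can be sketched in plain Python with a thread pool. A hypothetical illustration (`search_source` is a placeholder; a real implementation would read `sources/{source}.md` and issue the corresponding API call):

```python
from concurrent.futures import ThreadPoolExecutor

def search_source(source: str, keyword: str) -> list[dict]:
    """Placeholder per-source search; a real version would call the source's API."""
    return [{"source": source,
             "title": f"{keyword} result from {source}",
             "url": f"https://example.com/{source}"}]

def parallel_search(sources: list[str], keyword: str) -> list[dict]:
    """Query every source concurrently and flatten the per-source batches."""
    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        batches = pool.map(lambda s: search_source(s, keyword), sources)
    return [item for batch in batches for item in batch]

results = parallel_search(["github", "hn", "arxiv"], "RAG")
```

The point is that each source is independent, so total latency is roughly the slowest single source rather than the sum of all of them.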

### Step 3: Aggregate & Deduplicate

1. Merge all results
2. Deduplicate by URL and by title similarity (>80% similar = duplicate)
3. Sort by relevance score, then date, then engagement
4. Tag each result with its source name
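The URL-and-title deduplication rule can be sketched with `difflib` from the standard library; this is an illustrative implementation of the >80%-similarity criterion, not the skill's actual code:

```python
from difflib import SequenceMatcher

def deduplicate(items: list[dict], threshold: float = 0.8) -> list[dict]:
    """Drop items with an already-seen URL or a title >80% similar to a kept one."""
    kept: list[dict] = []
    seen_urls: set[str] = set()
    for item in items:
        if item["url"] in seen_urls:
            continue
        title = item["title"].lower()
        if any(SequenceMatcher(None, title, k["title"].lower()).ratio() > threshold
               for k in kept):
            continue
        seen_urls.add(item["url"])
        kept.append(item)
    return kept
```

Comparing each candidate only against already-kept items keeps the pass O(n·k) in the number of unique results, which is fine at the default limit of 50 per source.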

### Step 4: Output

Generate `raw/aggregated.md`:

```markdown
# Aggregated Search: {keyword}

**Sources:** {count} sources searched
**Results:** {total} unique items
**Generated:** {timestamp}

---

## Summary (via Tavily AI)
> AI-generated summary of the topic...

## GitHub ({count})
| # | Repository | Stars | Description |
|---|------------|-------|-------------|

## Hacker News ({count})
| # | Title | Points | Comments |
|---|-------|--------|----------|

## Academic Papers ({count})
| # | Title | Year | Citations |
|---|-------|------|-----------|

## News & Blogs ({count})
| # | Title | Source | Date |
|---|-------|--------|------|

## Chinese Sources ({count})
| # | 标题 | 来源 | 日期 |
|---|------|------|------|

---

## Statistics
- Total sources: {sources_count}
- Total results: {total_count}
- Unique results: {unique_count}
- Date range: {earliest} to {latest}
```
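Filling the `{…}` placeholders in the template above is straightforward string formatting. A minimal sketch of rendering the front-matter block (`render_header` is a hypothetical helper; field names follow the template):

```python
from datetime import datetime, timezone

def render_header(keyword: str, sources_count: int, total: int) -> str:
    """Render the front-matter of raw/aggregated.md from the aggregation stats."""
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    return (
        f"# Aggregated Search: {keyword}\n\n"
        f"**Sources:** {sources_count} sources searched\n"
        f"**Results:** {total} unique items\n"
        f"**Generated:** {timestamp}\n"
    )
```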

## Environment Variables

```shell
# Required for full functionality
export TAVILY_API_KEY="your-key"        # Tavily search

# Optional
export YOUTUBE_API_KEY="your-key"       # YouTube API
export TWITTER_BEARER_TOKEN="your-key"  # Twitter API (paid)
export PRODUCTHUNT_TOKEN="your-key"     # Product Hunt API
```
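Since only `TAVILY_API_KEY` is required for full functionality, a quick preflight check of which keys are configured can be useful. A hypothetical sketch (reporting presence only, never the key values):

```python
import os

# Keys from the section above; only TAVILY_API_KEY is required.
KEYS = (
    "TAVILY_API_KEY",
    "YOUTUBE_API_KEY",
    "TWITTER_BEARER_TOKEN",
    "PRODUCTHUNT_TOKEN",
)

def check_keys(env=os.environ) -> dict[str, bool]:
    """Report which API keys are set, without exposing their values."""
    return {name: bool(env.get(name)) for name in KEYS}
```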

## Integration

Works with the ai-writing hottrend pipeline:

```
/aggregated-search "topic"
        ↓
  raw/aggregated.md
        ↓
  hottrend-draft agent
        ↓
  output/v1_draft.md
```