# news-aggregator-skill
Comprehensive news aggregator that fetches, filters, and deeply analyzes real-time content from 28 sources including Hacker News, GitHub, Hugging Face Papers, AI newsletters, WallStreetCN, Weibo, and podcasts. Use when the user requests 'daily scans', 'tech news', 'finance updates', 'AI briefings', 'deep analysis', or says '如意如意' to open the interactive menu.
Install by cloning the repository and copying the skill into your skills directory:

```shell
git clone https://github.com/openclaw/skills

# For Claude (~/.claude/skills):
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/abigale-cyber/content-system-news-aggregator-skill" ~/.claude/skills/openclaw-skills-news-aggregator-skill && rm -rf "$T"

# For OpenClaw (~/.openclaw/skills):
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/abigale-cyber/content-system-news-aggregator-skill" ~/.openclaw/skills/openclaw-skills-news-aggregator-skill && rm -rf "$T"
```
`skills/abigale-cyber/content-system-news-aggregator-skill/SKILL.md`

# News Aggregator Skill
Fetch real-time hot news from 28 sources, generate deep analysis reports in Chinese.
## 🔄 Universal Workflow (3 Steps)
Every news request follows the same workflow, regardless of source or combination:
### Step 1: Fetch Data

```shell
# Single source
python3 scripts/fetch_news.py --source <source_key> --no-save

# Multiple sources (comma-separated)
python3 scripts/fetch_news.py --source hackernews,github,wallstreetcn --no-save

# All sources (broad scan)
python3 scripts/fetch_news.py --source all --limit 15 --deep --no-save

# With keyword filter (auto-expand: "AI" → "AI,LLM,GPT,Claude,Agent,RAG")
python3 scripts/fetch_news.py --source hackernews --keyword "AI,LLM,GPT" --deep --no-save
```
### Step 2: Generate Report
Read the output JSON and format every item using the Unified Report Template below. Translate all content to Simplified Chinese.
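This formatting step can be sketched in Python, assuming the fetch output is a JSON array of item objects; the field names used here ("title", "url", "source", "time", "heat", "summary") are assumptions for illustration, not the script's documented schema:

```python
import json

def render_item(n, item):
    """Render one item as Unified Report Template lines (mechanical fields only)."""
    lines = [
        f"#### {n}. [{item['title']}]({item['url']})",
        f"- **Source**: {item['source']} | **Time**: {item.get('time', 'Unknown Time')}"
        f" | **Heat**: 🔥 {item.get('heat', '')}",
        f"- **Summary**: {item['summary']}",
    ]
    return "\n".join(lines)

def render_report(raw_json):
    """Render all items from the fetch output, numbered from 1."""
    items = json.loads(raw_json)
    return "\n\n".join(render_item(i + 1, it) for i, it in enumerate(items))
```

In practice the model, not code like this, performs the Chinese translation and the Deep Dive analysis; the sketch only shows the mechanical mapping from JSON fields to template lines, including the "Unknown Time" fallback required by the rules below.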
### Step 3: Save & Present
Save the report to `reports/YYYY-MM-DD/<source>_report.md`, then display the full content to the user.
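The save step can be sketched in shell, assuming `date +%F` produces the required YYYY-MM-DD directory name; the sample report content is a placeholder:

```shell
# Build today's report directory per the reports/YYYY-MM-DD/<source>_report.md convention
DATE=$(date +%F)                      # e.g. 2025-01-15
mkdir -p "reports/$DATE"
# Write the generated markdown (placeholder content here) before displaying it
printf '%s\n' "# 今日科技热点报告" > "reports/$DATE/hackernews_report.md"
echo "Saved reports/$DATE/hackernews_report.md"
```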
## 📰 Unified Report Template
All sources use this single template. Show/hide optional fields based on data availability.
```markdown
#### N. [标题 (中文翻译)](https://original-url.com)
- **Source**: 源名 | **Time**: 时间 | **Heat**: 🔥 热度值
- **Links**: [Discussion](hn_url) | [GitHub](gh_url) ← 仅在数据存在时显示
- **Summary**: 一句话中文摘要。
- **Deep Dive**: 💡 **Insight**: 深度分析(背景、影响、技术价值)。
```
### Source-Specific Adaptations
Only the differences from the universal template:
| Source | Adaptation |
|---|---|
| Hacker News | MUST include the Discussion link |
| GitHub | Use for Heat, add field, add in Deep Dive |
| Hugging Face | Use upvotes for Heat, include if present, write 深度解读 (an in-depth reading, not just a translated abstract) |
| Weibo | Preserve the exact heat text (e.g. "108万") |
## 🛠️ Tools

### fetch_news.py
| Arg | Description | Default |
|---|---|---|
| `--source` | Source key(s), comma-separated. See table below. | |
| `--limit` | Max items per source | |
| `--keyword` | Comma-separated keyword filter | None |
| `--deep` | Download article text for richer analysis | Off |
| | Force save to reports dir | Auto for single source |
| | Custom output directory | |
### Available Sources (28)
| Category | Key | Name |
|---|---|---|
| Global News | `hackernews` | Hacker News |
| | | 36Kr (36氪) |
| | `wallstreetcn` | WallStreetCN (华尔街见闻) |
| | | Tencent News (腾讯新闻) |
| | | Weibo Hot Search (微博热搜) |
| | | V2EX |
| | | Product Hunt |
| | `github` | GitHub Trending |
| AI/Tech | | HF Daily Papers |
| | | All AI Newsletters (aggregate) |
| | | Ben's Bites |
| | | Interconnects (Nathan Lambert) |
| | | One Useful Thing (Ethan Mollick) |
| | | ChinAI (Jeffrey Ding) |
| | | Memia |
| | | AI to ROI |
| | | KDnuggets |
| Podcasts | | All Podcasts (aggregate) |
| | | Lex Fridman |
| | | 80,000 Hours |
| | | Latent Space |
| Essays | | All Essays (aggregate) |
| | | Paul Graham |
| | | Wait But Why |
| | | James Clear |
| | | Farnam Street |
| | | Scott Young |
| | | Dan Koe |
### daily_briefing.py (Morning Routines)
Pre-configured multi-source profiles:
```shell
python3 scripts/daily_briefing.py --profile <profile>
```
| Profile | Sources | Instruction File |
|---|---|---|
| | HN, 36Kr, GitHub, Weibo, PH, WallStreetCN | |
| | WallStreetCN, 36Kr, Tencent | |
| | GitHub, HN, Product Hunt | |
| | Weibo, V2EX, Tencent | |
| | HF Papers, AI Newsletters | |
| | Essays, Podcasts | (Use universal template) |
Workflow: Execute script → Read corresponding instruction file → Generate report following both the instruction file AND the universal template.
## ⚠️ Rules (Strict)
- Language: ALL output in Simplified Chinese (简体中文). Keep well-known English proper nouns (ChatGPT, Python, etc.).
- Time: MANDATORY field. Never skip. If missing in JSON, mark as "Unknown Time". Preserve "Real-time" / "Today" / "Hot" as-is.
- Anti-Hallucination: Only use data from the JSON. Never invent news items. Use simple SVO sentences. Do not fabricate causal relationships.
- Smart Keyword Expansion: When the user says "AI" → auto-expand to "AI,LLM,GPT,Claude,Agent,RAG,DeepSeek". Similar expansions apply for other domains.
- Smart Fill: If results < 5 items in a time window, supplement with high-value items from a wider range. Mark supplementary items with ⚠️.
- Save: Always save the report to `reports/YYYY-MM-DD/` before displaying.
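The Smart Keyword Expansion rule can be sketched as a lookup table; the "AI" entry mirrors the rule above, while the function itself and any other entries are illustrative assumptions, not the aggregator's actual implementation:

```python
# Hypothetical expansion table; only the "AI" entry comes from the documented rule.
EXPANSIONS = {
    "AI": ["AI", "LLM", "GPT", "Claude", "Agent", "RAG", "DeepSeek"],
}

def expand_keywords(raw):
    """Expand each comma-separated keyword, deduplicating while preserving order."""
    out = []
    for kw in (k.strip() for k in raw.split(",")):
        for term in EXPANSIONS.get(kw, [kw]):  # unknown keywords pass through as-is
            if term not in out:
                out.append(term)
    return out
```

Deduplication matters because a user may type "AI,LLM", where "LLM" is already covered by the "AI" expansion.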
## 📋 Interactive Menu
When the user says "如意如意" or asks for "menu/help":
- Read `templates.md`
- Display the menu
- Execute the user's selection using the Universal Workflow above
## Requirements
- Python 3.8+, then `pip install -r requirements.txt`
- Playwright (for HF Papers & Ben's Bites): `playwright install chromium`