Skills x-research
```
git clone https://github.com/openclaw/skills
```

Or as a one-shot install:

```
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/blascokoa/twitterapi-research-skill" ~/.claude/skills/clawdbot-skills-x-research-66e4bf && rm -rf "$T"
```

`skills/blascokoa/twitterapi-research-skill/SKILL.md`

# X Research
General-purpose agentic research over X/Twitter. Decompose any research question into targeted searches, iteratively refine, follow threads, deep-dive linked content, and synthesize into a sourced briefing.
For twitterapi.io API details (endpoints, operators, response format): read `references/x-api.md`.
## CLI Tool
All commands run from this skill directory:
```
cd ~/clawd/skills/x-research
source ~/.config/env/global.env  # needs TWITTERAPI_IO_KEY
```
## Search

```
bun run x-search.ts search "<query>" [options]
```
Options:
- `--sort likes|impressions|retweets|recent` — sort order (default: likes)
- `--since 1h|3h|12h|1d|7d` — time filter (default: last 7 days). Also accepts minutes (`30m`) or ISO timestamps.
- `--min-likes N` — filter by minimum likes
- `--min-impressions N` — filter by minimum impressions
- `--pages N` — pages to fetch, 1-25 (default: 5, ~20 tweets/page)
- `--limit N` — max results to display (default: 15)
- `--quick` — quick mode: 1 page, max 10 results, auto noise filter (`-is:retweet -is:reply`), 1hr cache, cost summary
- `--from <username>` — shorthand for `from:username` in query
- `--quality` — filter low-engagement tweets (≥10 likes, post-hoc)
- `--no-replies` — exclude replies
- `--save` — save results to `~/clawd/drafts/x-research-{slug}-{date}.md`
- `--json` — raw JSON output
- `--markdown` — markdown output for research docs
Auto-adds `-is:retweet` unless the query already includes it. All searches display estimated API cost.
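That normalization amounts to a simple string check; a minimal sketch (illustrative only; the actual logic lives in `x-search.ts`):

```typescript
// Sketch of the auto-add rule described above (not the real implementation).
// Appends -is:retweet unless the query already mentions the operator in any
// form (is:retweet or -is:retweet).
function withNoiseFilter(query: string): string {
  return query.includes("is:retweet") ? query : `${query} -is:retweet`;
}
```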
Note: twitterapi.io search covers the full archive (not limited to 7 days). Time filtering uses the `since:` operator in the query.
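One way a relative `--since` value could map onto that operator (a sketch under assumptions; the timestamp format twitterapi.io actually expects is documented in `references/x-api.md`):

```typescript
// Hypothetical translation of a relative --since value into a since: operator.
// The YYYY-MM-DD output format is an assumption, not the confirmed API format.
function sinceOperator(value: string, now: Date = new Date()): string {
  const match = value.match(/^(\d+)([mhd])$/);
  if (!match) return `since:${value}`; // assume an ISO timestamp was passed
  const unitMs: Record<string, number> = { m: 60_000, h: 3_600_000, d: 86_400_000 };
  const cutoff = new Date(now.getTime() - Number(match[1]) * unitMs[match[2]]);
  return `since:${cutoff.toISOString().slice(0, 10)}`;
}
```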
Examples:
```
bun run x-search.ts search "BNKR" --sort likes --limit 10
bun run x-search.ts search "from:frankdegods" --sort recent
bun run x-search.ts search "(opus 4.6 OR claude) trading" --pages 2 --save
bun run x-search.ts search "$BNKR (revenue OR fees)" --min-likes 5
bun run x-search.ts search "BNKR" --quick
bun run x-search.ts search "BNKR" --from voidcider --quick
bun run x-search.ts search "AI agents" --quality --quick
```
## Profile

```
bun run x-search.ts profile <username> [--count N] [--replies] [--json]
```
Fetches recent tweets from a specific user (excludes replies by default).
## Thread

```
bun run x-search.ts thread <tweet_id> [--pages N]
```
Fetches full conversation thread by root tweet ID.
## Single Tweet

```
bun run x-search.ts tweet <tweet_id> [--json]
```
## Watchlist

```
bun run x-search.ts watchlist                    # Show all
bun run x-search.ts watchlist add <user> [note]  # Add account
bun run x-search.ts watchlist remove <user>      # Remove account
bun run x-search.ts watchlist check              # Check recent from all
```
Watchlist stored in `data/watchlist.json`. Use for heartbeat integration — check if key accounts posted anything important.
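The schema of `data/watchlist.json` isn't spelled out here; given the `add <user> [note]` command above, a plausible shape (an assumption, not the confirmed format) would be:

```json
{
  "frankdegods": { "note": "market takes" },
  "voidcider": { "note": "" }
}
```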
## Cache

```
bun run x-search.ts cache clear  # Clear all cached results
```
15-minute TTL. Avoids re-fetching identical queries.
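The freshness check behind that TTL amounts to a timestamp comparison; a minimal sketch (the real logic is in `lib/cache.ts`):

```typescript
// Illustrative TTL check; lib/cache.ts holds the actual implementation.
const TTL_MS = 15 * 60 * 1000; // 15-minute TTL

function isFresh(writtenAtMs: number, nowMs: number = Date.now()): boolean {
  return nowMs - writtenAtMs < TTL_MS;
}
```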
## Research Loop (Agentic)
When doing deep research (not just a quick search), follow this loop:
### 1. Decompose the Question into Queries
Turn the research question into 3-5 keyword queries using X search operators:
- Core query: Direct keywords for the topic
- Expert voices: `from:` specific known experts
- Pain points: Keywords like `(broken OR bug OR issue OR migration)`
- Positive signal: Keywords like `(shipped OR love OR fast OR benchmark)`
- Links: `url:github.com` or `url:` specific domains
- Noise reduction: `-is:retweet` (auto-added), add `-is:reply` if needed
- Crypto spam: Add `-airdrop -giveaway -whitelist` if crypto topics flooding
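For instance, a question like "what do developers think of Bun in production?" might decompose into queries such as these (the keywords and expert handle are illustrative assumptions, not from this skill):

```typescript
// Illustrative decomposition for a hypothetical research question.
const queries: string[] = [
  "bun production",                             // core query
  "from:jarredsumner bun",                      // expert voice (assumed handle)
  "bun (broken OR bug OR issue OR migration)",  // pain points
  "bun (shipped OR love OR fast OR benchmark)", // positive signal
  "bun url:github.com",                         // links
];
```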
### 2. Search and Extract
Run each query via CLI. After each, assess:
- Signal or noise? Adjust operators.
- Key voices worth searching `from:` specifically?
- Threads worth following via the `thread` command?
- Linked resources worth deep-diving with `web_fetch`?
### 3. Follow Threads
When a tweet has high engagement or is a thread starter:
```
bun run x-search.ts thread <tweet_id>
```
### 4. Deep-Dive Linked Content

When tweets link to GitHub repos, blog posts, or docs, fetch with `web_fetch`. Prioritize links that:
- Multiple tweets reference
- Come from high-engagement tweets
- Point to technical resources directly relevant to the question
### 5. Synthesize
Group findings by theme, not by query:
```markdown
### [Theme/Finding Title]

[1-2 sentence summary]

- @username: "[key quote]" (NL, NI) [Tweet](url)
- @username2: "[another perspective]" (NL, NI) [Tweet](url)

Resources shared:
- [Resource title](url) — [what it is]
```
### 6. Save

Use the `--save` flag or save manually to `~/clawd/drafts/x-research-{topic-slug}-{YYYY-MM-DD}.md`.
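A sketch of how that filename pattern could be derived (a hypothetical helper; the `--save` flag already does this for you):

```typescript
// Hypothetical helper producing the x-research-{topic-slug}-{YYYY-MM-DD}.md
// pattern; not part of the CLI itself.
function draftFilename(topic: string, date: Date = new Date()): string {
  const slug = topic
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumerics into hyphens
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
  return `x-research-${slug}-${date.toISOString().slice(0, 10)}.md`;
}
```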
## Refinement Heuristics
- Too much noise? Add `-is:reply`, use `--sort likes`, narrow keywords
- Too few results? Broaden with `OR`, remove restrictive operators
- Crypto spam? Add `-$ -airdrop -giveaway -whitelist`
- Expert takes only? Use `from:` or `--min-likes 50`
- Substance over hot takes? Search with `has:links`
## Heartbeat Integration
On heartbeat, can run `watchlist check` to see if key accounts posted anything notable. Flag to Frank only if genuinely interesting/actionable — don't report routine tweets.
## File Structure

```
skills/x-research/
├── SKILL.md              (this file)
├── x-search.ts           (CLI entry point)
├── lib/
│   ├── api.ts            (twitterapi.io wrapper: search, thread, profile, tweet)
│   ├── cache.ts          (file-based cache, 15min TTL)
│   └── format.ts         (Telegram + markdown formatters)
├── data/
│   ├── watchlist.json    (accounts to monitor)
│   └── cache/            (auto-managed)
└── references/
    └── x-api.md          (twitterapi.io endpoint reference)
```