## Install

Source · Clone the upstream repo:

```bash
git clone https://github.com/openclaw/skills
```

Claude Code · Install into `~/.claude/skills/`:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/arrmlet/social-data" ~/.claude/skills/openclaw-skills-social-data && rm -rf "$T"
```

OpenClaw · Install into `~/.openclaw/skills/`:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/arrmlet/social-data" ~/.openclaw/skills/openclaw-skills-social-data && rm -rf "$T"
```

Manifest: `skills/arrmlet/social-data/SKILL.md`
# Macrocosmos SN13 API - Social Media Data Skill

Fetch real-time social media data from X (Twitter) and Reddit, filtered by keyword, username, and date range, with engagement metrics, via the Macrocosmos SN13 API on Bittensor.
## Metadata
- name: macrocosmos-social-data
- version: 1.0.1
- homepage: https://github.com/macrocosm-os/macrocosmos-mcp
- source: https://github.com/macrocosm-os/macrocosmos-mcp
- pypi: https://pypi.org/project/macrocosmos-mcp
- subnet: Bittensor SN13 (Data Universe)
- author: Macrocosmos AI
- license: MIT
## Required Environment Variables

| Variable | Required | Type | Description |
|---|---|---|---|
| `MC_API_KEY` | Yes | string | Macrocosmos API key. Required for all API requests. Get your free key at https://app.macrocosmos.ai/account?tab=api-keys |

Setup: The `MC_API_KEY` environment variable must be set. It is passed as a Bearer token in the `Authorization` header for REST calls, or provided directly to the Python SDK client.
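A minimal sketch of wiring the key into REST calls, assuming the variable is exported as `MC_API_KEY`:

```python
import os

# Read the key from the environment rather than hardcoding it;
# raises KeyError if the variable is unset.
MC_API_KEY = os.environ["MC_API_KEY"]

# Headers in the shape used by the REST examples below.
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {MC_API_KEY}",
}
```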
## API Endpoint

```
POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData
```

### Headers

```
Content-Type: application/json
Authorization: Bearer <YOUR_MC_API_KEY>
```
### Request Format

```json
{
  "source": "X",
  "usernames": ["@elonmusk"],
  "keywords": ["AI", "bittensor"],
  "start_date": "2026-01-01",
  "end_date": "2026-02-10",
  "limit": 10,
  "keyword_mode": "any"
}
```
### Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `source` | string | Yes | `X` or `REDDIT` (case-sensitive) |
| `usernames` | array | No | Up to 5 usernames. `@` optional. X only (not available for Reddit) |
| `keywords` | array | No | Up to 5 keywords/hashtags. For Reddit: use subreddit format |
| `start_date` | string | No | `YYYY-MM-DD` or ISO format. Defaults to 24h ago |
| `end_date` | string | No | `YYYY-MM-DD` or ISO format. Defaults to now |
| `limit` | int | No | 1-1000 results. Default: 10 |
| `keyword_mode` | string | No | `"any"` (default) matches ANY keyword, `"all"` requires ALL keywords |
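The limits in this table are easy to trip over programmatically, so a small validating builder can help. A sketch; `build_payload` is an illustrative helper, not part of any SDK:

```python
def build_payload(source, usernames=None, keywords=None, start_date=None,
                  end_date=None, limit=10, keyword_mode="any"):
    """Assemble an OnDemandData request body, enforcing the documented limits."""
    if source not in ("X", "REDDIT"):          # case-sensitive, per the table
        raise ValueError("source must be 'X' or 'REDDIT'")
    if usernames and len(usernames) > 5:
        raise ValueError("at most 5 usernames")
    if keywords and len(keywords) > 5:
        raise ValueError("at most 5 keywords")
    if not 1 <= limit <= 1000:
        raise ValueError("limit must be 1-1000")
    if keyword_mode not in ("any", "all"):
        raise ValueError("keyword_mode must be 'any' or 'all'")

    payload = {"source": source, "limit": limit, "keyword_mode": keyword_mode}
    if usernames:
        payload["usernames"] = usernames
    if keywords:
        payload["keywords"] = keywords
    if start_date:
        payload["start_date"] = start_date     # always set this; see Tips below
    if end_date:
        payload["end_date"] = end_date
    return payload
```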
## Response Format

```json
{
  "data": [
    {
      "datetime": "2026-02-10T17:30:58Z",
      "source": "x",
      "text": "Tweet content here",
      "uri": "https://x.com/username/status/123456",
      "user": {
        "username": "example_user",
        "display_name": "Example User",
        "followers_count": 1500,
        "following_count": 300,
        "user_description": "Bio text",
        "user_blue_verified": true,
        "profile_image_url": "https://pbs.twimg.com/..."
      },
      "tweet": {
        "id": "123456",
        "like_count": 42,
        "retweet_count": 10,
        "reply_count": 5,
        "quote_count": 2,
        "view_count": 5000,
        "bookmark_count": 3,
        "hashtags": ["#AI", "#bittensor"],
        "language": "en",
        "is_reply": false,
        "is_quote": false,
        "conversation_id": "123456"
      }
    }
  ]
}
```
## curl Examples

### 1. Keyword Search on X

```bash
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "keywords": ["bittensor"],
    "start_date": "2026-01-01",
    "limit": 10
  }'
```

### 2. Fetch Tweets from a Specific User

```bash
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "usernames": ["@MacrocosmosAI"],
    "start_date": "2026-01-01",
    "limit": 10
  }'
```

### 3. Multi-Keyword AND Search

```bash
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "keywords": ["chutes", "bittensor"],
    "keyword_mode": "all",
    "start_date": "2026-01-01",
    "limit": 20
  }'
```

### 4. Reddit Search

```bash
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "REDDIT",
    "keywords": ["r/MachineLearning", "transformers"],
    "start_date": "2026-02-01",
    "limit": 50
  }'
```

### 5. User + Keyword Filter

```bash
curl -s -X POST https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "source": "X",
    "usernames": ["@opentensor"],
    "keywords": ["subnet"],
    "start_date": "2026-01-01",
    "limit": 20
  }'
```
## Python Examples

### Using the `macrocosmos` SDK

```python
import asyncio

import macrocosmos as mc


async def search_tweets():
    client = mc.AsyncSn13Client(api_key="YOUR_API_KEY")
    response = await client.sn13.OnDemandData(
        source="X",
        keywords=["bittensor"],
        usernames=[],
        start_date="2026-01-01",
        end_date=None,
        limit=10,
        keyword_mode="any",
    )
    if hasattr(response, "model_dump"):
        data = response.model_dump()
        for tweet in data["data"]:
            print(f"@{tweet['user']['username']}: {tweet['text'][:100]}")
            print(f"  Likes: {tweet['tweet']['like_count']} | Views: {tweet['tweet']['view_count']}")


asyncio.run(search_tweets())
```
### Using `requests` (REST)

```python
import requests

url = "https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY",
}
payload = {
    "source": "X",
    "keywords": ["bittensor"],
    "start_date": "2026-01-01",
    "limit": 10,
}

response = requests.post(url, json=payload, headers=headers)
data = response.json()
for tweet in data["data"]:
    print(f"@{tweet['user']['username']}: {tweet['text'][:100]}")
```
Tips & Known Behaviors
What works reliably
- High-volume keyword searches: Popular terms like "bittensor", "AI", "iran", "lfg" return fast
- Wider date ranges: Setting
further back (e.g., weeks/months) improves resultsstart_date
: Great for finding intersection of two topics (e.g., "chutes" AND "bittensor")keyword_mode: "all"
### What can be flaky

- Username-only queries: Can time out (`DEADLINE_EXCEEDED`). Adding a `start_date` far back helps
- Niche/low-volume keywords: Very specific terms may time out if miners don't have the data indexed
- No `start_date`: Defaults to the last 24h, which can miss data; set it explicitly for best results
### Best practices for LLM agents

- Always set `start_date`: don't rely on the 24h default. Use at least 7 days back for user queries
- Prefer keywords over usernames: keyword searches are more reliable
- For username queries, always include a `start_date` set weeks/months back
- Use `keyword_mode: "all"` when combining a topic with a subtopic (e.g., "bittensor" + "chutes")
- Handle timeouts gracefully: if a query times out, retry with a broader date range or switch to keyword search (see the sketch after this list)
- Parse engagement metrics: `view_count`, `like_count`, and `retweet_count` help rank relevance
- Check `is_reply` and `is_quote`: filter for original tweets vs. replies depending on use case
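A sketch of the timeout-retry pattern above, assuming the REST endpoint shown earlier and that a timeout surfaces as `DEADLINE_EXCEEDED` in the response body (the exact error shape may vary):

```python
from datetime import date, timedelta

import requests

URL = "https://constellation.api.cloud.macrocosmos.ai/sn13.v1.Sn13Service/OnDemandData"

def fetch_with_widening_window(payload, headers, windows=(7, 30, 90)):
    """Retry an OnDemandData query with progressively wider date ranges."""
    for days in windows:
        payload["start_date"] = (date.today() - timedelta(days=days)).isoformat()
        resp = requests.post(URL, json=payload, headers=headers, timeout=60)
        if resp.ok:
            body = resp.json()
            if body.get("data"):               # got results: done
                return body
        elif "DEADLINE_EXCEEDED" not in resp.text:
            resp.raise_for_status()            # non-timeout error: fail fast
        # timed out or came back empty: widen the window and retry
    return {"data": []}                        # nothing found after all retries
```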
## Gravity API (Large-Scale Collection)

For datasets larger than 1000 results, use the Gravity endpoints.

### Create Task

```
POST /gravity.v1.GravityService/CreateGravityTask
```

```json
{
  "gravity_tasks": [
    {"platform": "x", "topic": "#bittensor", "keyword": "dTAO"}
  ],
  "name": "Bittensor dTAO Collection"
}
```

Note: X topics MUST start with `#` or `$`. Reddit topics use subreddit format.
### Check Status

```
POST /gravity.v1.GravityService/GetGravityTasks
```

```json
{
  "gravity_task_id": "multicrawler-xxxx-xxxx",
  "include_crawlers": true
}
```

### Build Dataset

```
POST /gravity.v1.GravityService/BuildDataset
```

```json
{
  "crawler_id": "crawler-0-multicrawler-xxxx",
  "max_rows": 10000
}
```

Warning: Building stops the crawler permanently.

### Get Dataset Download

```
POST /gravity.v1.GravityService/GetDataset
```

```json
{
  "dataset_id": "dataset-xxxx-xxxx"
}
```

Returns Parquet file download URLs when complete.
## Workflow Summary

- Quick query (< 1000 results): `OnDemandData` → instant results
- Large collection (7-day crawl): `CreateGravityTask` → `GetGravityTasks` (monitor) → `BuildDataset` → `GetDataset` (download)
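The large-collection path end to end, as a sketch. It assumes the Gravity endpoints live on the same constellation host and accept the same Bearer auth as `OnDemandData`; the placeholder ids come from the request bodies above and would be read from each step's response in practice:

```python
import requests

BASE = "https://constellation.api.cloud.macrocosmos.ai"
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY",
}

def gravity_call(path, body):
    """POST a JSON body to a Gravity endpoint and return the parsed response."""
    resp = requests.post(f"{BASE}{path}", json=body, headers=HEADERS, timeout=60)
    resp.raise_for_status()
    return resp.json()

# 1. Create the crawl task (X topics must start with '#' or '$').
task = gravity_call("/gravity.v1.GravityService/CreateGravityTask", {
    "gravity_tasks": [{"platform": "x", "topic": "#bittensor", "keyword": "dTAO"}],
    "name": "Bittensor dTAO Collection",
})

# 2. Monitor progress with the task id from step 1.
status = gravity_call("/gravity.v1.GravityService/GetGravityTasks", {
    "gravity_task_id": "multicrawler-xxxx-xxxx",  # from the CreateGravityTask response
    "include_crawlers": True,
})

# 3. Build the dataset. Warning: this stops the crawler permanently.
build = gravity_call("/gravity.v1.GravityService/BuildDataset", {
    "crawler_id": "crawler-0-multicrawler-xxxx",  # from GetGravityTasks
    "max_rows": 10000,
})

# 4. Fetch Parquet download URLs once the build completes.
dataset = gravity_call("/gravity.v1.GravityService/GetDataset", {
    "dataset_id": "dataset-xxxx-xxxx",            # from BuildDataset
})
```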
## Error Reference

| Error | Cause | Fix |
|---|---|---|
| Unauthorized | Missing or invalid API key | Check the `Authorization` header |
| Internal server error | Server-side issue (often auth via gRPC) | Verify API key, retry |
| `DEADLINE_EXCEEDED` | Query timeout: miners can't fulfill the request | Use a broader date range, or switch to keyword search |
| Empty `data` array | No matching results | Broaden search terms or date range |
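A small helper tying this table to code; it assumes errors surface as HTTP status codes or as a `DEADLINE_EXCEEDED` string in the body, which may differ in practice:

```python
def check_response(resp):
    """Map a failed OnDemandData response to the fixes in the table above."""
    if resp.status_code == 401:
        raise RuntimeError("Missing or invalid API key: check the Authorization header")
    if resp.status_code >= 500:
        raise RuntimeError("Server-side issue: verify the API key and retry")
    if "DEADLINE_EXCEEDED" in resp.text:
        raise TimeoutError("Query timed out: broaden the date range or switch to keywords")
    data = resp.json().get("data", [])
    if not data:
        print("No matching results: broaden search terms or date range")
    return data
```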