grok-scraper
Execute queries to Grok AI via Playwright browser automation without requiring an X API KEY. Use when the user wants to "ask Grok", search X for real-time info, or specifically requests to use Grok for free without API billing.
```shell
# Clone the skills repo
git clone https://github.com/openclaw/skills

# Install for Claude Code (~/.claude/skills)
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/aquarius-wing/grok-scraper" ~/.claude/skills/openclaw-skills-grok-scraper && rm -rf "$T"

# Install for OpenClaw (~/.openclaw/skills)
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/aquarius-wing/grok-scraper" ~/.openclaw/skills/openclaw-skills-grok-scraper && rm -rf "$T"
```
skills/aquarius-wing/grok-scraper/SKILL.md

Grok Scraper
Preview
<video src="./assets/grok-2026-03-15T10-01-45.webm" controls width="100%"></video>
Agent Context: This is a zero-cost alternative to official X APIs. It uses a real browser session (Playwright) via an X Premium account. ALWAYS use this skill when the user wants to query Grok but does not have or want to use an X API KEY.
Prerequisites
- OpenClaw must be installed on the host machine.
- A display/GUI environment is required. This skill launches a real browser window for login. It cannot run on headless cloud servers (no screen). It must be used on a local machine or a remote desktop with a display.
- The user must be logged in to x.com via the browser session saved by `npm run login`. Without a valid session, all queries will fail.
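The display requirement can be verified up front before attempting login. A minimal sketch, assuming an X11/Wayland Linux host; `check_display` is an illustrative helper and not part of the skill:

```shell
# Hypothetical helper: decides whether a GUI is available from the values of
# $DISPLAY and $WAYLAND_DISPLAY (taken as arguments for testability).
check_display() {
  if [ -n "$1$2" ]; then
    echo "GUI available"
  else
    echo "headless - cannot open the login browser window here"
  fi
}

# Check the current environment:
check_display "${DISPLAY:-}" "${WAYLAND_DISPLAY:-}"
```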
First-Time Setup
Run these commands once after cloning the repo, before doing anything else:
```shell
cd scripts
npm install
npx playwright install chromium
```
Then log in to x.com to create a session:
```shell
npm run login
# A browser window will open — log in to x.com manually, then return to the terminal and press Enter
```
The `session/` directory will be created automatically after a successful login.
Workflow
Step 1: Check Login State
- If the `session/` directory does not exist: stop and ask the user to run `cd scripts && npm run login`.
- If it exists: proceed.
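Step 1 can be expressed as a small pre-flight check. A sketch, assuming `session/` is created under `scripts/` (where `npm run login` runs); `check_session` is an illustrative helper, not part of the skill:

```shell
# Print "ok" if a saved session exists under the given directory; otherwise
# print the remediation command and return 2 (mirroring the session-expired
# exit code used by run.sh).
check_session() {
  if [ -d "$1/session" ]; then
    echo "ok"
  else
    echo "no session - run: cd scripts && npm run login"
    return 2
  fi
}

# Demo against a temp dir that does contain a session:
demo=$(mktemp -d) && mkdir -p "$demo/session"
check_session "$demo"   # prints "ok"
```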
Step 2: Execute Query
```shell
scripts/run.sh "The user's detailed prompt"
```
run.sh handles logging, automatic retry on Grok service errors, and login-expiry detection. It is the canonical entry point for all queries.
Step 3: Read Output
- Exit Code 0 → read `output/latest.md` and present the result.
- Other exit codes → see Error Handling below.
Error Handling
| Exit Code | Meaning | Action |
|---|---|---|
| 0 | Success | Read `output/latest.md` |
| 2 | Session expired | Ask user to run `cd scripts && npm run login` |
| 3 | Grok service error | `run.sh` already retried once; report failure to user |
| 1 | Extraction failed | Check if `output/debug-dom.json` was written → if yes, DOM selectors may have broken (see dom-selector-fix.md) |
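The table above can be folded into a single dispatch step after each query. A sketch; `handle_exit` is an illustrative helper, and the messages paraphrase the actions in the table:

```shell
# Map a run.sh exit code to the follow-up action from the error-handling table.
handle_exit() {
  case "$1" in
    0) echo "read output/latest.md and present the result" ;;
    2) echo "ask user to run: cd scripts && npm run login" ;;
    3) echo "report failure: Grok service error (run.sh already retried once)" ;;
    1) echo "check output/debug-dom.json; if present, see dom-selector-fix.md" ;;
    *) echo "unexpected exit code: $1" ;;
  esac
}

# Typical use after a query:
#   scripts/run.sh "prompt"; handle_exit $?
```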
DOM Selectors Breaking
Twitter/X redeploys its front-end regularly, which changes the CSS class names this scraper relies on. If extraction fails with `Method: none`, follow the fix guide: dom-selector-fix.md
Examples
Standard query
```shell
scripts/run.sh "Search for the latest AI news and format as markdown"
# → read output/latest.md
```
Session expired
- Run `scripts/run.sh` → Exit Code 2
- Tell user: "Session expired, please run `cd scripts && npm run login`"
DOM selectors broken
- Run `scripts/run.sh` → Exit Code 1, `output/debug-dom.json` exists
- Follow dom-selector-fix.md to identify the new classes and update `SELECTORS` in `scripts/scrape.js`
Debugging
When diagnosing scraper issues directly, use the bare `npm run scrape` command instead of `scripts/run.sh`; it skips the logging and retry logic, making failures easier to inspect.
| Flag | Example | Description |
|---|---|---|
| (none) | `npm run scrape` | Run with default prompt |
| (prompt) | `npm run scrape -- "Your prompt"` | Custom prompt |
| `--record` | `npm run scrape -- --record` | Record video to the default path |
| | | Record video to custom path (relative → ) |
| `--size` | `npm run scrape -- --size 1920x1080` | Set recording resolution (default: ) |
All flags can be combined:
```shell
cd scripts
npm run scrape -- "Your prompt" --record --size 1920x1080
```
When `--record` is active, the browser runs in headed mode (visible window) with `slowMo: 50ms`; without it, headless mode is used.