# GB-Power-Market-JJ harpa-grid

Automate web browsers, scrape pages, search the web, and run AI prompts on live websites via the HARPA AI Grid REST API.

Clone the repository:

```bash
git clone https://github.com/GeorgeDoors888/GB-Power-Market-JJ
```

Install the skill for Claude:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/GeorgeDoors888/GB-Power-Market-JJ "$T" && mkdir -p ~/.claude/skills && cp -r "$T/openclaw-skills/skills/alxsharuk/harpa-ai" ~/.claude/skills/georgedoors888-gb-power-market-jj-harpa-grid && rm -rf "$T"
```

Install the skill for OpenClaw:

```bash
T=$(mktemp -d) && git clone --depth=1 https://github.com/GeorgeDoors888/GB-Power-Market-JJ "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/openclaw-skills/skills/alxsharuk/harpa-ai" ~/.openclaw/skills/georgedoors888-gb-power-market-jj-harpa-grid && rm -rf "$T"
```
`openclaw-skills/skills/alxsharuk/harpa-ai/SKILL.md`

# HARPA Grid — Browser Automation API
HARPA Grid lets you orchestrate real web browsers remotely. You can scrape pages, search the web, run built-in or custom AI commands, and send AI prompts with full page context — all through a single REST endpoint.
## Prerequisites

The user must have:

- The HARPA AI Chrome Extension installed from https://harpa.ai
- At least one active Node — a browser with HARPA running (configured in the extension's AUTOMATE tab)
- A HARPA API key, obtained from the extension's AUTOMATE tab. The key is provided via the `HARPA_API_KEY` environment variable.

If the user hasn't set up HARPA yet, direct them to: https://harpa.ai/grid/browser-automation-node-setup
## API Reference

- Endpoint: `POST https://api.harpa.ai/api/v1/grid`
- Auth: `Authorization: Bearer $HARPA_API_KEY`
- Content-Type: `application/json`

Full reference: https://harpa.ai/grid/grid-rest-api-reference
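Every action below shares this one endpoint and call shape. As a convenience sketch (the helper name `harpa_grid` is ours, not part of HARPA), the boilerplate can be wrapped in a small shell function:

```shell
# Hypothetical helper: POST a JSON payload to the HARPA Grid endpoint.
# Expects HARPA_API_KEY in the environment; takes the payload as $1.
harpa_grid() {
  curl -s -X POST https://api.harpa.ai/api/v1/grid \
    -H "Authorization: Bearer $HARPA_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$1"
}
```

With this in place, each action is just `harpa_grid '{"action": ...}'`; the examples below keep the full curl form for clarity.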
## Actions

### 1. Scrape a Web Page
Extract full page content (as markdown) or specific elements via CSS/XPath/text selectors.
Full page scrape:

```bash
curl -s -X POST https://api.harpa.ai/api/v1/grid \
  -H "Authorization: Bearer $HARPA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "scrape",
    "url": "https://example.com",
    "timeout": 15000
  }'
```
Targeted element scrape (grab):

```bash
curl -s -X POST https://api.harpa.ai/api/v1/grid \
  -H "Authorization: Bearer $HARPA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "scrape",
    "url": "https://example.com/products",
    "grab": [
      { "selector": ".product-title", "selectorType": "css", "at": "all", "take": "innerText", "label": "titles" },
      { "selector": ".product-price", "selectorType": "css", "at": "all", "take": "innerText", "label": "prices" }
    ],
    "timeout": 15000
  }'
```
Grab fields:

| Field | Required | Default | Values |
|---|---|---|---|
| `selector` | yes | — | a CSS selector, an XPath expression, or text content |
| `selectorType` | no | `auto` | `auto`, `css`, `xpath`, `text` |
| `at` | no | `first` | `first`, `last`, `all`, or a number |
| `take` | no | `innerText` | element property to extract, e.g. `innerText` (see the full API reference for the complete list) |
| `label` | no | `data` | custom label for the extracted data |
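As a sketch of how these fields combine (the URL, the XPath selector, and the `link_texts` label are placeholders of ours, not from the HARPA docs), the following builds a grab payload, validates it locally, and only sends it when `HARPA_API_KEY` is set:

```shell
# Hypothetical grab payload: take the text of every <a> element via XPath.
# The URL, selector, and label are illustrative placeholders.
PAYLOAD='{
  "action": "scrape",
  "url": "https://example.com",
  "grab": [
    { "selector": "//a", "selectorType": "xpath", "at": "all",
      "take": "innerText", "label": "link_texts" }
  ],
  "timeout": 15000
}'

# Sanity-check the JSON before sending it anywhere.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"

# Send only when an API key is configured.
if [ -n "${HARPA_API_KEY:-}" ]; then
  curl -s -X POST https://api.harpa.ai/api/v1/grid \
    -H "Authorization: Bearer $HARPA_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
fi
```

Validating the payload first catches quoting mistakes (a common failure mode when embedding JSON in shell strings) before they turn into opaque API errors.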
### 2. Search the Web (SERP)

Perform a web search. Supports operators like `site:` and `intitle:`.

```bash
curl -s -X POST https://api.harpa.ai/api/v1/grid \
  -H "Authorization: Bearer $HARPA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "serp",
    "query": "OpenClaw AI agent framework",
    "timeout": 15000
  }'
```
### 3. Run an AI Command
Execute one of 100+ built-in HARPA commands or a custom automation on a target page.
```bash
curl -s -X POST https://api.harpa.ai/api/v1/grid \
  -H "Authorization: Bearer $HARPA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "command",
    "url": "https://example.com/article",
    "name": "Extract data",
    "inputs": "List all headings with their word counts",
    "connection": "HARPA AI",
    "resultParam": "message",
    "timeout": 30000
  }'
```

- `name` — command name (e.g. `"Summary"`, `"Extract data"`, or any custom command)
- `inputs` — pre-filled user inputs for multi-step commands
- `resultParam` — HARPA parameter to return as the result (default: `"message"`)
- `connection` — AI model to use (e.g. `"HARPA AI"`, `"gpt-4o"`, `"claude-3.5-sonnet"`)
### 4. Run an AI Prompt

Send a custom AI prompt with page context. Use `{{page}}` to inject the page content.

```bash
curl -s -X POST https://api.harpa.ai/api/v1/grid \
  -H "Authorization: Bearer $HARPA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "action": "prompt",
    "url": "https://example.com",
    "prompt": "Analyze the current page and extract all contact information. Webpage: {{page}}",
    "connection": "CHAT AUTO",
    "timeout": 30000
  }'
```
## Common Parameters

| Parameter | Required | Default | Description |
|---|---|---|---|
| `action` | yes | — | `scrape`, `serp`, `command`, or `prompt` |
| `url` | no | — | target page URL (ignored by `serp`) |
| `node` | no | — | node ID (`"mynode"`), multiple (`"node1 node2"`), first N (`"3"`), or all (`"*"`) |
| `timeout` | no | 300000 | max wait time in ms (max 300000 ms / 5 minutes) |
| `resultsWebhook` | no | — | URL to POST results to asynchronously (retained 30 days) |
| `connection` | no | — | AI model for `command` / `prompt` actions |
## Node Targeting

- Omit `node` to use the default node
- `"node": "mynode"` — target a specific node by ID
- `"node": "node1 node2"` — target multiple nodes
- `"node": "3"` — use the first 3 available nodes
- `"node": "*"` — broadcast to all nodes
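As a sketch (the node ID `mynode` and the URL are placeholders), targeting a specific node is just an ordinary request plus a `node` field:

```shell
# Hypothetical request: run a scrape on one named node.
# "mynode" and the URL are illustrative placeholders.
PAYLOAD='{
  "action": "scrape",
  "url": "https://example.com",
  "node": "mynode",
  "timeout": 15000
}'

if [ -n "${HARPA_API_KEY:-}" ]; then
  curl -s -X POST https://api.harpa.ai/api/v1/grid \
    -H "Authorization: Bearer $HARPA_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
else
  echo "HARPA_API_KEY not set; skipping request"
fi
```

Swapping `"mynode"` for `"node1 node2"`, `"3"`, or `"*"` selects multiple nodes, the first N, or all of them, as listed above.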
## Async Results via Webhook

Set `resultsWebhook` to receive results asynchronously. The action stays alive for up to 30 days, which is useful when target nodes are temporarily offline.

```json
{
  "action": "scrape",
  "url": "https://example.com",
  "resultsWebhook": "https://your-server.com/webhook",
  "timeout": 15000
}
```
## Tips

- Scraping behind-login pages works because HARPA runs inside a real browser session with the user's cookies and auth state.
- Use the `grab` array with multiple selectors to extract structured data in a single request.
- For long-running AI commands, increase `timeout` (max 300000 ms / 5 min) or use `resultsWebhook`.
- The `{{page}}` variable in prompts injects the full page content — use it to give the AI context about the current page.