# OpenClaw Tool Executor

Install by copying `skills/avinash-kamath/scalekit-agent-auth` from the skills repo:

```shell
git clone https://github.com/openclaw/skills

# Claude Code skills directory
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/avinash-kamath/scalekit-agent-auth" ~/.claude/skills/openclaw-skills-openclaw-tool-executor && rm -rf "$T"

# OpenClaw skills directory
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/avinash-kamath/scalekit-agent-auth" ~/.openclaw/skills/openclaw-skills-openclaw-tool-executor && rm -rf "$T"
```
General-purpose tool executor for OpenClaw agents. Uses Scalekit Connect to discover and run tools for any connected service — OAuth (Notion, Slack, Gmail, GitHub, etc.) or non-OAuth (API Key, Bearer, Basic auth).
## Environment Variables

Required in `.env`:

```
TOOL_CLIENT_ID=<scalekit_client_id>
TOOL_CLIENT_SECRET=<scalekit_client_secret>
TOOL_ENV_URL=<scalekit_environment_url>
TOOL_IDENTIFIER=<default_identifier>  # optional but recommended
```
`TOOL_IDENTIFIER` is used as the default `--identifier` for all operations. If not set, the script will prompt the user at runtime and display a warning advising them to set it in `.env`.
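The fallback behavior described above can be sketched roughly as follows (`resolve_identifier` and its argument are illustrative names, not the script's actual internals):

```python
import os

def resolve_identifier(cli_value=None):
    """Pick the identifier: an explicit --identifier value wins, then
    the TOOL_IDENTIFIER environment variable, then a runtime prompt."""
    if cli_value:
        return cli_value
    env_value = os.environ.get("TOOL_IDENTIFIER")
    if env_value:
        return env_value
    # Warn so the user knows to persist the value in .env next time.
    print("Warning: TOOL_IDENTIFIER is not set; add it to .env to skip this prompt.")
    return input("Identifier: ")
```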
## Execution Flow
When the user asks to perform an action on a connected service, follow these steps in order:
### Step 1 — Discover the Connection
Dynamically resolve the `connection_name` by listing all configured connections for the provider. The API paginates automatically through all pages:

```shell
uv run tool_exec.py --list-connections --provider <PROVIDER>
```
- Only consider connections with `"status": "COMPLETED"` — ignore any with `DRAFT`, `PENDING`, or other non-completed statuses.
- Use the `key_id` from the first COMPLETED result as `<CONNECTION_NAME>` for all subsequent steps.
- If no connection found → inform the user that no `<PROVIDER>` connection is configured in Scalekit and stop.
- If connections exist but none are COMPLETED → inform the user of the connection `key_id`(s) found and tell them the connection configuration is not completed. Ask them to complete setup in the Scalekit Dashboard and stop.
- If multiple COMPLETED connections found → the first one is selected automatically (a warning is shown).
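The selection rules above can be sketched as a small filter. This assumes each connection entry carries `status` and `key_id` fields; the exact field layout of the `--list-connections` output is an assumption here:

```python
def pick_connection(connections):
    """Apply the Step 1 selection rules to a list of connection entries.

    Assumes each entry is a dict with "status" and "key_id" keys; the
    real --list-connections output shape may differ.
    """
    if not connections:
        raise SystemExit("No connection configured in Scalekit for this provider.")
    completed = [c for c in connections if c.get("status") == "COMPLETED"]
    if not completed:
        found = ", ".join(c.get("key_id", "?") for c in connections)
        raise SystemExit(
            f"Connections found ({found}) but none are COMPLETED; "
            "complete setup in the Scalekit Dashboard."
        )
    if len(completed) > 1:
        print("Warning: multiple COMPLETED connections; using the first.")
    return completed[0]["key_id"]
```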
### Step 2 — Check & Authorize

Run `--generate-link` for the connection. The tool automatically detects the connection type (OAuth vs non-OAuth) and applies the correct auth flow:

```shell
uv run tool_exec.py --generate-link \
  --connection-name <CONNECTION_NAME>
```
**OAuth connections:**
- If already ACTIVE → proceed to Step 3.
- If not active → a magic link is generated. Present it to the user, wait for them to complete the flow, then proceed to Step 3.
**Non-OAuth connections (BEARER, BASIC, API Key, etc.):**
- If account not found → stop. Tell the user: "Please create and configure the `<CONNECTION_NAME>` connection in the Scalekit Dashboard."
- If account exists but not active → stop. Tell the user: "Please activate the `<CONNECTION_NAME>` connection in the Scalekit Dashboard."
- If ACTIVE → proceed to Step 3.
Never use `--get-authorization` in the execution flow — that is only for inspecting raw OAuth tokens and does not work for non-OAuth connections.
### Step 3 — Discover Available Tools

Fetch the list of tools available for the provider:

```shell
uv run tool_exec.py --get-tool --provider <PROVIDER>
```
- Look for a tool that matches the user's intent (e.g. `notion_page_get` for reading a page).
- If a matching tool exists → go to Step 3b.
- If no matching tool exists → go to Step 5 (proxy fallback).
### Step 3b — Fetch Tool Schema (mandatory before executing)

Always fetch the schema of the matched tool before constructing the input. This tells you the exact parameter names, types, required vs optional fields, and valid enum values:

```shell
uv run tool_exec.py --get-tool --tool-name <TOOL_NAME>
```
- Read the `input_schema.properties` from the response — use only the parameter names defined there.
- Note which fields are in `required` — these must always be included in `--tool-input`.
- Use `description` and `display_properties` to understand what each field expects.
- Never guess parameter names — always derive them from the schema.
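These checks can be sketched as a small validator, assuming a JSON-Schema-like shape with `properties` and `required` keys (the helper name is illustrative, not part of `tool_exec.py`):

```python
def check_tool_input(schema, tool_input):
    """Verify tool_input against an input_schema before --execute-tool.

    Rejects parameter names absent from input_schema.properties and
    missing required fields, per the Step 3b rules.
    """
    allowed = set(schema.get("properties", {}))
    unknown = set(tool_input) - allowed
    if unknown:
        raise ValueError(f"Parameters not in schema: {sorted(unknown)}")
    missing = set(schema.get("required", [])) - set(tool_input)
    if missing:
        raise ValueError(f"Missing required fields: {sorted(missing)}")
```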
### Step 4 — Execute the Tool

Construct the tool input using only parameters from the schema fetched in Step 3b, then run:

```shell
uv run tool_exec.py --execute-tool \
  --tool-name <TOOL_NAME> \
  --connection-name <CONNECTION_NAME> \
  --tool-input '<JSON_INPUT>'
```

Return the result to the user.
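Malformed JSON or unescaped quotes in `--tool-input` are an easy way to break this invocation; a small helper (illustrative, not part of `tool_exec.py`) can serialize and shell-quote the input safely:

```python
import json
import shlex

def build_execute_command(tool_name, connection_name, tool_input):
    """Assemble the --execute-tool invocation with safely quoted JSON.

    json.dumps guarantees valid JSON; shlex.quote guards against
    shell-metacharacter breakage inside argument values.
    """
    return " ".join([
        "uv run tool_exec.py --execute-tool",
        "--tool-name", shlex.quote(tool_name),
        "--connection-name", shlex.quote(connection_name),
        "--tool-input", shlex.quote(json.dumps(tool_input)),
    ])
```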
### Step 5 — Proxy Fallback (only if no tool exists)

If no Scalekit tool covers the required action, attempt a proxied HTTP request directly to the provider's API (`--query-params` and `--body` are optional):

```shell
uv run tool_exec.py --proxy-request \
  --connection-name <CONNECTION_NAME> \
  --path <API_PATH> \
  --method <GET|POST|PUT|DELETE> \
  --query-params '<JSON>' \
  --body '<JSON>'
```

Note: Proxy may be disabled on some environments. If it returns `TOOL_PROXY_DISABLED`, inform the user that this action isn't supported by the current Scalekit tool catalog and suggest they request a new tool from Scalekit.
### Example: Search LinkedIn (via HarvestAPI)

User: "Find software engineers in San Francisco on LinkedIn"

- Step 1: `--list-connections --provider HARVESTAPI` → `key_id: harvestapi-xxxx`, `type: API_KEY`
- Step 2: `--generate-link --connection-name harvestapi-xxxx` → detects API_KEY, checks account → ACTIVE
- Step 3: `--get-tool --provider HARVESTAPI` → finds `harvestapi_search_people`
- Step 3b: `--get-tool --tool-name harvestapi_search_people` → schema shows valid params: `first_names`, `last_names`, `search`, `locations`, `current_job_titles`, etc.
- Step 4: `--execute-tool --tool-name harvestapi_search_people --connection-name harvestapi-xxxx --tool-input '{"first_names": "John", "locations": "San Francisco", "current_job_titles": "Software Engineer"}'` → returns matching LinkedIn profiles

Any LinkedIn-related request (profiles, jobs, companies, posts, people search, ads, groups) → use provider `HARVESTAPI`.
### Example: Search the web with Exa (API Key connection)

User: "Search for latest AI news using Exa"

- Step 1: `--list-connections --provider EXA` → `key_id: exa`, `type: API_KEY`
- Step 2: `--generate-link --connection-name exa` → detects API_KEY, checks account → ACTIVE
- Step 3: `--get-tool --provider EXA` → finds `exa_search`
- Step 3b: `--get-tool --tool-name exa_search` → schema shows `query` (required), `num_results`, `type`, etc.
- Step 4: `--execute-tool --tool-name exa_search --connection-name exa --tool-input '{"query": "latest AI news"}'` → returns search results
### Example: Read a Notion Page (OAuth connection)

User: "Read my Notion page https://notion.so/..."

- Step 1: `--list-connections --provider NOTION` → `key_id: notion-ijIQedmJ`, `type: OAUTH`
- Step 2: `--generate-link --connection-name notion-ijIQedmJ` → detects OAuth, already ACTIVE
- Step 3: `--get-tool --provider NOTION` → finds `notion_page_get`
- Step 3b: `--get-tool --tool-name notion_page_get` → schema shows `page_id` (required)
- Step 4: `--execute-tool --tool-name notion_page_get --connection-name notion-ijIQedmJ --tool-input '{"page_id": "..."}'` → returns page metadata
### Example: Action Not Yet in Scalekit

User: "Fetch the blocks of a Notion page"

- Step 1: `--list-connections --provider NOTION` → `key_id: notion-ijIQedmJ`
- Step 2: `--generate-link --connection-name notion-ijIQedmJ` → ACTIVE
- Step 3: `--get-tool --provider NOTION` → no `notion_blocks_fetch` tool found
- Step 5: fallback attempt `--proxy-request --path "/blocks/<page_id>/children"`
- If proxy disabled → inform user the action isn't available yet
## File Uploads & Downloads

Some providers do not have Scalekit tools for file operations. Use `--proxy-request` with `--input-file` (upload) or direct S3/CDN URL download (download). Provider-specific flows are documented below.

⚠️ Proxy token expiry: `--proxy-request` passes the stored OAuth access token directly to the provider. If the token has expired, the provider will return `401 Unauthorized`. Unlike `--execute-tool`, which auto-refreshes tokens, the proxy does not. If you get a 401, the token needs to be refreshed — re-run `--generate-link` to check status; if the connection is ACTIVE but the proxy still returns 401, the user must re-authorize via a new magic link to obtain a fresh token.
### Notion

#### Upload a File to a Notion Page
Notion file uploads are a 3-step process via proxy:
**Step 1 — Create an upload object**

```shell
uv run tool_exec.py --proxy-request \
  --connection-name <CONNECTION_NAME> \
  --path "/v1/file_uploads" \
  --method POST \
  --body '{"mode": "single_part"}' \
  --headers '{"Notion-Version": "2022-06-28", "Content-Type": "application/json"}'
```
Returns a `file_upload` object with an `id` and `upload_url`. The upload is valid for 1 hour.
**Step 2 — Send the file**

```shell
uv run tool_exec.py --proxy-request \
  --connection-name <CONNECTION_NAME> \
  --path "/v1/file_uploads/<file_upload_id>/send" \
  --method POST \
  --input-file /path/to/file \
  --headers '{"Notion-Version": "2022-06-28"}'
```
- The file is sent as `multipart/form-data`. On success, `status` becomes `uploaded`.
- ⚠️ Notion rejects `application/octet-stream`. If the file extension is not recognized (e.g. `.md`), copy it to a `.txt` extension first so the MIME type resolves to `text/plain`.
**Step 3 — Attach the file block to a page**

```shell
uv run tool_exec.py --proxy-request \
  --connection-name <CONNECTION_NAME> \
  --path "/v1/blocks/<page_id>/children" \
  --method PATCH \
  --body '{
    "children": [{
      "object": "block",
      "type": "file",
      "file": {
        "type": "file_upload",
        "file_upload": {"id": "<file_upload_id>"},
        "name": "<display_filename>"
      }
    }]
  }' \
  --headers '{"Notion-Version": "2022-06-28", "Content-Type": "application/json"}'
```
Do not use `notion_page_content_append` for file blocks — it does not support the `file_upload` block type and will return an `INTERNAL_ERROR`. Always use the proxy for file attachment.
#### Download a File from a Notion Page
Notion files are stored on S3 with pre-signed URLs that expire in 1 hour. The download is a 2-step process:
**Step 1 — Get a fresh pre-signed URL**

List the page blocks to find the file block and its current URL:

```shell
uv run tool_exec.py --proxy-request \
  --connection-name <CONNECTION_NAME> \
  --path "/v1/blocks/<page_id>/children" \
  --method GET \
  --headers '{"Notion-Version": "2022-06-28"}'
```
Find the block with
"type": "file" — the URL is at file.file.url. Always fetch a fresh URL; never reuse a URL from a previous response as it may be expired.
**Step 2 — Download directly from S3**

The S3 URL is public (pre-signed) — no Scalekit proxy needed. Download it directly:

```python
import urllib.request

urllib.request.urlretrieve("<s3_url>", "/local/path/filename")
```
Or use `--output-file` if going through the proxy:

```shell
uv run tool_exec.py --proxy-request \
  --connection-name <CONNECTION_NAME> \
  --path "/v1/blocks/<block_id>" \
  --method GET \
  --headers '{"Notion-Version": "2022-06-28"}' \
  --output-file /local/path/filename
```
Note: `--output-file` saves the raw API response (JSON block object), not the file itself. Use direct S3 download for the actual file content.
### Google Drive

Coming soon

### OneDrive / SharePoint

Coming soon
## Supported Providers

Any provider configured in Scalekit (Notion, Slack, Gmail, Google Sheets, GitHub, Salesforce, HubSpot, Linear, and 50+ more). Use the provider name in uppercase for `--provider` (e.g. `NOTION`, `SLACK`, `GOOGLE`).