Skills › secret-safe

```shell
git clone https://github.com/openclaw/skills
```

Install into `~/.claude/skills`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/brycexbt/secret-safe" ~/.claude/skills/openclaw-skills-secret-safe && rm -rf "$T"
```

Install into `~/.openclaw/skills`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/brycexbt/secret-safe" ~/.openclaw/skills/openclaw-skills-secret-safe && rm -rf "$T"
```

`skills/brycexbt/secret-safe/SKILL.md`

# Secret-Safe: Secure Credential Handling for Agent Skills
Why this skill exists: Snyk researchers found that 7.1% of all ClawHub skills instruct agents to handle API keys through the LLM context — making every secret an active exfiltration channel. This skill teaches the correct pattern.
## The Core Rule
A secret must never appear in:
- The LLM prompt or system context
- Claude's response or reasoning
- Logs, session exports, or `.jsonl` history files
- File artifacts created by the agent
- Error messages echoed back to the user
A secret must only flow through:
- `process.env` (injected by OpenClaw before the agent turn)
- The shell environment of a subprocess the agent spawns
- A secrets manager CLI (read at subprocess level, not piped back into context)
## Pattern 1: Environment Injection (Preferred)
This is OpenClaw's native, secure path. Use it for any skill that needs an API key.
### In `SKILL.md` frontmatter

```yaml
---
name: my-service-skill
description: Interact with MyService API.
metadata: {"openclaw": {"requires": {"env": ["MY_SERVICE_API_KEY"]}, "primaryEnv": "MY_SERVICE_API_KEY"}}
---
```
The `requires.env` gate ensures the skill will not load if the key isn't present — no silent failures, no prompting the user to paste a key mid-conversation.
The `primaryEnv` field links to `skills.entries.<n>.apiKey` in `openclaw.json`, so the user configures it once in their config file, never in chat.
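As a sketch of what that configuration could look like (assuming `openclaw.json` is plain JSON; the `my-service` entry name and the placeholder value are illustrative, not from the OpenClaw docs):

```json
{
  "skills": {
    "entries": {
      "my-service": {
        "apiKey": "your-key-here"
      }
    }
  }
}
```

OpenClaw reads this entry and injects it as the env var named in `primaryEnv` before the agent turn starts.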
### In skill instructions
```markdown
## Authentication

The API key is available as `$MY_SERVICE_API_KEY` in the shell environment. Pass it to CLI tools or curl as an environment variable — never echo it or include it in any output returned to the user.
```
### Example safe curl invocation (instruct the agent to do this)
```shell
# CORRECT — key stays in environment, never in command string visible to LLM
MY_SERVICE_API_KEY="$MY_SERVICE_API_KEY" curl -s \
  -H "Authorization: Bearer $MY_SERVICE_API_KEY" \
  https://api.myservice.com/v1/data
```
Never instruct the agent to do this:
```shell
# WRONG — key is visible in LLM context, command history, and logs
curl -H "Authorization: Bearer sk-abc123realkeyhere" https://api.myservice.com/
```
## Pattern 2: Secrets Manager Integration
For production setups or team environments, read secrets from a manager at subprocess level.
### Supported managers
| Manager | CLI | Env var pattern |
|---|---|---|
| macOS Keychain | `security` | N/A |
| 1Password CLI | `op` | `OP_SERVICE_ACCOUNT_TOKEN` |
| Doppler | `doppler` | `DOPPLER_TOKEN` |
| HashiCorp Vault | `vault` | `VAULT_ADDR` / `VAULT_TOKEN` |
| Bitwarden CLI | `bw` | `BW_SESSION` |
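Each manager exposes a one-line fetch command. A dry-run sketch that just prints the command it would use — the CLI names are the real tools, but the item, field, and secret names (`my-service-api-key`, `op://Private/MyService/api-key`, etc.) are placeholders you must adapt to your own vault layout:

```shell
#!/usr/bin/env bash
# Dry-run helper: print (do not execute) the secret-fetch command per manager.
fetch_cmd() {
  case "$1" in
    keychain)  echo 'security find-generic-password -s my-service-api-key -w' ;;
    1password) echo 'op read op://Private/MyService/api-key' ;;
    doppler)   echo 'doppler secrets get MY_SERVICE_API_KEY --plain' ;;
    vault)     echo 'vault kv get -field=api_key secret/my-service' ;;
    bitwarden) echo 'bw get password my-service-api-key' ;;
    *)         echo "unknown manager: $1" >&2; return 1 ;;
  esac
}

fetch_cmd vault
```

In a real wrapper you would run the printed command inside `$( )` to load the value into a subprocess environment, never into chat output.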
### Safe shell wrapper pattern
Create a `scripts/run-with-secret.sh` in your skill:

```shell
#!/usr/bin/env bash
# Fetches the secret at subprocess level — never echoes to stdout
SECRET=$(security find-generic-password -s "my-service-api-key" -w 2>/dev/null)
if [ -z "$SECRET" ]; then
  echo "ERROR: Secret 'my-service-api-key' not found in keychain." >&2
  exit 1
fi
export MY_SERVICE_API_KEY="$SECRET"
exec "$@"
```

The agent runs `bash {baseDir}/scripts/run-with-secret.sh <actual-command>` — the secret is fetched and injected entirely outside the LLM's view.
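The same pattern can be exercised on any machine by stubbing out the fetch step. In the sketch below, `fake_fetch` is a hypothetical stand-in for the `security`/`op`/`vault` call; everything else mirrors the wrapper above:

```shell
#!/usr/bin/env bash
# Runnable sketch of the exec-wrapper pattern with the secrets-manager
# call stubbed out. Swap fake_fetch for your real fetch command.
fake_fetch() { printf 'sk-test-123'; }

run_with_secret() {
  local secret
  secret="$(fake_fetch)" || return 1
  # Inject into the child's environment only; never echo the value.
  MY_SERVICE_API_KEY="$secret" "$@"
}

# The child can see the key is set; the value itself never reaches output.
run_with_secret sh -c 'printf "%s\n" "${MY_SERVICE_API_KEY:+set}"'
```

The final line prints `set`, confirming the subprocess received the key without the key's value appearing anywhere in stdout.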
## Pattern 3: User Setup Flow (first-run)
If the user hasn't configured a key yet, guide them through setup without asking for the key in chat.
Correct setup prompt to give the user:
```
To use this skill, add your API key to ~/.openclaw/openclaw.json:

skills:
  entries:
    my-service:
      apiKey: "your-key-here"

Or set it as an environment variable before starting OpenClaw:

export MY_SERVICE_API_KEY="your-key-here"

Do NOT paste your key into this chat — it will be logged.
```
Incorrect (never do this):
> "Please share your API key so I can help you set it up."
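The skill's instructions can tell the agent to fail fast instead of asking. A minimal first-run check — the function name and error text here are illustrative, not part of the OpenClaw API:

```shell
#!/usr/bin/env bash
# Fail fast if the key is missing; never reflect any credential value back.
check_key() {
  if [ -z "${MY_SERVICE_API_KEY:-}" ]; then
    echo "MY_SERVICE_API_KEY is not set. Add it to ~/.openclaw/openclaw.json." >&2
    return 1
  fi
  echo "key configured"
}

MY_SERVICE_API_KEY="placeholder" check_key
```

On the error path the message goes to stderr and names only the variable, never its value.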
## Auditing Another Skill for Leaks

When asked to review a `SKILL.md` for credential safety, check for these patterns:
### 🔴 Critical — Must Fix
| Pattern | Why it's dangerous |
|---|---|
| Instruction to paste key into chat | Key goes into LLM context + session logs |
| `echo` or `print` of the key in instructions | Output captured in context |
| Key interpolated into a string returned to user | Exposed in response artifact |
| `cat .env` or reading raw env files | Entire env dumped into context |
| Key stored in a file the agent creates | Creates a static credential artifact |
| Instructions tell agent to "remember" the key | Key persists across context window |
### 🟡 Warning — Should Fix
| Pattern | Risk |
|---|---|
| No `requires.env` gate in frontmatter | Skill silently fails or user is prompted |
| Logging command output without filtering | May capture keys in error messages |
| Using `set -x` in shell scripts | Echoes all commands including key values |
| Passing key as a positional argument | Visible in `ps` output on the host |
### 🟢 Safe Patterns
- `requires.env` gate in frontmatter
- Key accessed only as `$ENV_VAR` in shell, never echoed
- Subprocess scripts that fetch and inject without returning to context
- Error messages that say "key not found" without printing the value
- Output filtered through `grep`/`sed` before returning to agent
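The last point can be sketched as a small filter. The `sk-` prefix below is an assumed key shape (OpenAI-style); adjust the regex to whatever formats your provider issues:

```shell
#!/usr/bin/env bash
# Redact anything that looks like an API key before command output
# is returned to the agent context.
redact() { sed -E 's/sk-[A-Za-z0-9]+/[REDACTED]/g'; }

printf 'auth failed for key sk-abc123realkeyhere\n' | redact
```

Pipe any subprocess output through `redact` so even error messages that echo credentials arrive sanitized.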
## Self-Check Before Publishing a Skill
Run through this checklist before putting any skill on ClawHub:
- [ ] Does the skill ever ask the user to paste a secret into the conversation?
- [ ] Does the skill ever `echo`, `print`, or `log` a secret value?
- [ ] Does the skill read a `.env` file and dump its contents?
- [ ] Does the skill store a secret in a file artifact?
- [ ] Are all API key references gated with `requires.env` in frontmatter?
- [ ] Do error messages avoid reflecting credential values?
- [ ] Does any shell script use `set -x` (which would expose key values)?
- [ ] Would running `clawhub audit {skill-name}` pass?
If any box is unchecked, do not publish until fixed.
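A rough grep pass can catch the most blatant violations before a manual review. This is a heuristic sketch, not a substitute for `clawhub audit`; the regex covers only a sample of the unsafe patterns above:

```shell
#!/usr/bin/env bash
# Heuristic pre-publish scan: flag obviously unsafe phrases in a SKILL.md.
# Returns 1 (with the offending lines printed) if anything matches.
audit_skill() {
  if grep -nEi 'paste (your|the) [^.]*key|echo \$[A-Z_]*KEY|cat \.env|set -x' "$1"; then
    return 1
  fi
  return 0
}

# Demo against a deliberately unsafe sample file.
f="$(mktemp)"
printf 'Ask the user to paste your API key into the chat.\n' > "$f"
audit_skill "$f" || echo "unsafe patterns found"
rm -f "$f"
```

A clean `SKILL.md` produces no output and exit status 0.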
## Quick Reference: Safe vs Unsafe Patterns
```
# UNSAFE — never write instructions like these:
"Ask the user for their OpenAI API key and use it to call the API."
"Set the Authorization header to Bearer {user_api_key}."
"Store the API key in a variable and use it throughout the session."

# SAFE — write instructions like these:
"The API key is injected as $OPENAI_API_KEY via environment — use it directly."
"Run: OPENAI_API_KEY=$OPENAI_API_KEY curl ..."
"If $OPENAI_API_KEY is not set, print an error and exit — do not ask the user."
```
## Reference Files
- `references/env-injection-examples.md` — Full worked examples for popular APIs (OpenAI, Anthropic, GitHub, Stripe, Slack)
- `references/audit-checklist.md` — Printable audit checklist for skill authors and reviewers