Skills Audit
Security audit + append-only logging + monitoring for OpenClaw skills (file-level diff, baseline approval, SHA-256 integrity).
git clone https://github.com/openclaw/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/buffedon/test2894-0406" ~/.claude/skills/openclaw-skills-skills-audit-734d81 && rm -rf "$T"
T=$(mktemp -d) && git clone --depth=1 https://github.com/openclaw/skills "$T" && mkdir -p ~/.openclaw/skills && cp -r "$T/skills/buffedon/test2894-0406" ~/.openclaw/skills/openclaw-skills-skills-audit-734d81 && rm -rf "$T"
skills/buffedon/test2894-0406/SKILL.md

Skills Audit (skills-audit)
A security-oriented skill for managing OpenClaw skills safely, with six core capabilities:
- Threat scanning (static analysis)
- Append-only audit logs (local NDJSON)
- Skills monitoring & notifications (push alerts on changes)
- File-level diff + content diff (git snapshots)
- Baseline approval mechanism (approved skills don't repeat-alert)
- Semantic analysis (dangerous functions + capability analysis)
This skill performs static analysis only — it never executes skill code.
Requirements
- Python ≥ 3.9, standard library only (no third-party dependencies)
- git (for content diff snapshots)
- See scripts/requirements.txt for details
Core Capabilities
1) Threat Scanning (Static Risk Analysis)
skills_audit.py performs static inspection of installed skill directories. If a QianXin token is configured, it also queries QianXin SafeSkill by the stable MD5 of the whole workspace/skills bundle instead of uploading the bundle itself:
- Network indicators: URLs/domains, curl/wget/requests usage
- Dangerous commands: curl|sh, wget|bash, eval / dynamic exec, base64 pipes
- Suspicious behavior: persistence (cron/systemd), sensitive paths (~/.ssh, ~/.aws, /etc)
- Optional QianXin intel: stable MD5 lookup for the full workspace/skills bundle using a user-supplied token
Output fields:
- risk.level: low | medium | high | extreme
- risk.decision: allow | allow_with_caution | require_sandbox | deny
- risk.risk_signals[]: evidence (file + snippet)
- risk.network.domains[]: extracted domains
- risk.source: local or qianxin-md5
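The local static pass can be pictured as a regex sweep that collects risk_signals and maps them to a level and decision. A minimal sketch; the pattern table and thresholds here are illustrative, not the skill's actual rule set:

```python
import re
from pathlib import Path

# Illustrative patterns only; the real rule set lives in the skill's config.
PATTERNS = {
    "curl_pipe_sh": re.compile(r"curl[^\n|]*\|\s*(sh|bash)"),
    "wget_pipe_bash": re.compile(r"wget[^\n|]*\|\s*(sh|bash)"),
    "dynamic_eval": re.compile(r"\beval\s*\("),
    "sensitive_path": re.compile(r"~/\.(ssh|aws)"),
}

def scan_skill(skill_dir: str) -> dict:
    """Collect pattern hits, then map hit count to a toy level/decision."""
    signals = []
    for path in Path(skill_dir).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, rx in PATTERNS.items():
            for m in rx.finditer(text):
                signals.append({"file": str(path), "signal": name,
                                "snippet": m.group(0)[:80]})
    n = len(signals)  # toy thresholds, for illustration only
    level = "low" if n == 0 else "medium" if n <= 2 else "high"
    decision = {"low": "allow", "medium": "allow_with_caution",
                "high": "require_sandbox"}[level]
    return {"risk": {"level": level, "decision": decision,
                     "risk_signals": signals, "source": "local"}}
```

A real implementation would also extract domains into risk.network.domains[] and weight signals instead of counting them.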
QianXin config:
- Config file: config/intelligent.json
- Defaults to enabled: false; token defaults to empty
- Users can enable it after download by filling in their own token and setting enabled to true
- If it is disabled, the token is empty, or the query fails, the scan automatically falls back to local static analysis
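The "stable MD5" of the bundle can be made deterministic by hashing relative paths and file contents in sorted order, so only real content changes move the digest. A sketch under that assumption (not necessarily the skill's exact scheme):

```python
import hashlib
from pathlib import Path

def bundle_md5(skills_dir: str) -> str:
    """Deterministic MD5: relative paths + contents, in sorted order."""
    h = hashlib.md5()
    root = Path(skills_dir)
    for path in sorted(p for p in root.rglob("*") if p.is_file()):
        # Separate path from content so "ab"+"c" != "a"+"bc".
        h.update(str(path.relative_to(root)).encode() + b"\0")
        h.update(path.read_bytes())
    return h.hexdigest()
```

Only this digest would be sent to QianXin SafeSkill; the bundle itself never leaves the machine.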
2) Audit Logging (Append-only NDJSON)
All detections are appended as NDJSON to:
~/.openclaw/skills-audit/logs.ndjson
State snapshot for diff:
~/.openclaw/skills-audit/state.json
Schema is defined by log-template.json. Key points:
- sha256: SHA-256 of SKILL.md (integrity field)
- diff: git commit info + per-file stat
- file_changes: file-level added/removed/changed lists
- approved: baseline approval status
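A log append might look like the following sketch. The ts field and the function signature are illustrative (the real schema lives in log-template.json, and the real log path is ~/.openclaw/skills-audit/logs.ndjson; it is parameterized here for clarity):

```python
import hashlib, json, time
from pathlib import Path

def append_entry(log_path: Path, skill: str, skill_md: Path,
                 approved: bool) -> dict:
    """Append one NDJSON record; existing lines are never rewritten."""
    entry = {
        "ts": time.time(),
        "skill": skill,
        # Integrity field: SHA-256 of the skill's SKILL.md.
        "sha256": hashlib.sha256(skill_md.read_bytes()).hexdigest(),
        "approved": approved,
    }
    log_path.parent.mkdir(parents=True, exist_ok=True)
    with log_path.open("a") as f:  # "a" mode keeps the log append-only
        f.write(json.dumps(entry) + "\n")
    return entry
```

One JSON object per line means the log can be tailed and grepped without parsing the whole file.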
3) Skills Monitoring & Push Notifications
Periodic monitoring of workspace/skills for additions, changes, and removals.
- No changes → no output
- Changes detected → one notification
- Baseline-approved unchanged skills are excluded from notifications
Notification template: templates/notify.txt (see templates/README.md for customization).
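Under the hood this kind of monitor reduces to comparing two snapshots. A sketch assuming the state maps skill names to tree hashes (the actual state.json layout may differ):

```python
def diff_states(previous: dict, current: dict) -> dict:
    """Compare two {skill_name: tree_hash} snapshots."""
    added = sorted(set(current) - set(previous))
    removed = sorted(set(previous) - set(current))
    changed = sorted(k for k in set(previous) & set(current)
                     if previous[k] != current[k])
    return {"added": added, "removed": removed, "changed": changed}

def should_notify(diff: dict) -> bool:
    # No changes -> no output; any change -> exactly one notification.
    return any(diff.values())
```

Baseline-approved unchanged skills would simply never appear in the changed list, so they generate no alert.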
4) File-level Diff + Content Diff (Git Snapshots)
Each scan snapshots the skills directory into a local git repo (~/.openclaw/skills-audit/snapshots/):
- Each scan = one git commit
- Change detection via git diff HEAD~1 HEAD
- Notifications include per-file change summaries (+N -N lines)
Tiered display:
- ≤ 5 changed files: show all with +N -N
- 6–20: first 3 + "X more omitted"
- > 20: first 3 + omitted + ⚠️ large-scale change warning
- > 8 skills changed: high-risk expanded, low-risk compressed
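The tiered display rules above can be sketched as a small formatter; the (path, added, removed) tuple shape is assumed for illustration:

```python
def format_changes(files: list[tuple[str, int, int]]) -> list[str]:
    """Apply the tiered display rules to per-file (path, +N, -N) stats."""
    lines = [f"{path} +{added} -{removed}" for path, added, removed in files]
    n = len(lines)
    if n <= 5:
        return lines                      # small change: show everything
    out = lines[:3] + [f"{n - 3} more omitted"]
    if n > 20:
        out.append("⚠️ large-scale change warning")
    return out
```

Keeping the notification to a few lines matters because it may be pushed to a chat channel; the full diff stays available via git.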
View full diff:
git -C ~/.openclaw/skills-audit/snapshots diff HEAD~1 HEAD
git -C ~/.openclaw/skills-audit/snapshots diff HEAD~1 HEAD -- skills/<skill-name>/
git -C ~/.openclaw/skills-audit/snapshots log --oneline
5) Semantic Analysis (Dangerous Functions + Capability Analysis)
Each scan now also produces a semantic_analysis field in the audit log:
- Dangerous function analysis: detects patterns such as eval, exec, os.system, subprocess with shell=True, curl|sh, and wget|bash
- Capability analysis: infers whether the skill has network, filesystem, process execution, cron/scheduler, git, or config-handling capabilities
- Combined result: evaluates execution-capability risk and malicious-intent risk separately, with semantic intent as the primary decision dimension, then emits level / decision / reason
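Separating execution capability from malicious intent could look like the following sketch; the keyword tables and scoring are illustrative stand-ins for the skill's heuristics:

```python
# Illustrative keyword tables; the real heuristics are richer than substring hits.
CAPABILITY_HINTS = {
    "network": ("requests", "urllib", "curl", "wget"),
    "process": ("subprocess", "os.system", "exec("),
    "filesystem": ("open(", "pathlib", "shutil"),
}
INTENT_HINTS = ("curl | sh", "curl|sh", "wget|bash", "base64 -d |")

def semantic_analysis(source: str) -> dict:
    caps = sorted(cap for cap, kws in CAPABILITY_HINTS.items()
                  if any(k in source for k in kws))
    intent_hits = [k for k in INTENT_HINTS if k in source]
    # Intent is the primary decision dimension; capability alone stays benign.
    if intent_hits:
        level, decision = "high", "require_sandbox"
        reason = f"malicious-intent patterns: {intent_hits}"
    elif caps:
        level, decision = "medium", "allow_with_caution"
        reason = f"execution capabilities: {caps}"
    else:
        level, decision = "low", "allow"
        reason = "no capability or intent signals"
    return {"capabilities": caps, "level": level,
            "decision": decision, "reason": reason}
```

Note how a controlled subprocess call registers only as a capability, while a download-and-execute pattern drives the decision directly.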
Notes:
- The current implementation is a local LLM-style heuristic analysis, with no external model dependency
- This is a mandatory rule: every added or changed skill must go through semantic analysis during skills_audit.py scan
- If semantic_analysis is missing or cannot produce a result, the scan should fail instead of silently skipping it
- To reduce self-audit false positives, low-signal files such as rule configs, log templates, skill docs, and config/semantic-patterns.json are excluded from high-weight dangerous-function scoring
- Risk-rule scanning now distinguishes hit contexts: real_execution / rule_sample / doc_example / config_sample
- For skills_audit.py itself, semantic detection and risk scanning both prioritize real execution paths instead of treating the pattern-string tables as dangerous execution by default
- Semantic analysis now separates execution capability from malicious intent, so normal engineering capabilities do not automatically imply a hostile result
- Controlled local calls such as subprocess.run([...], shell=False) with fixed argv are treated as engineering capability rather than automatic malicious intent
- The field is designed to be forward-compatible with a real LLM-backed semantic scanner later
6) Baseline Approval Mechanism
Manage approved skills via ~/.openclaw/skills-audit/baseline.json:
- Approved skills (matching tree_sha256) won't trigger repeated alerts
- Any file change automatically breaks approval → re-triggers risk detection
python3 {baseDir}/scripts/skills_audit.py approve --skill weather --workspace <workspace>
python3 {baseDir}/scripts/skills_audit.py approve --all --workspace <workspace>
python3 {baseDir}/scripts/skills_audit.py baseline --list
python3 {baseDir}/scripts/skills_audit.py baseline --revoke --skill weather
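The approval check reduces to comparing a recorded tree_sha256 with a freshly computed one. A sketch assuming baseline.json maps skill names to tree hashes (the real file layout may differ):

```python
import hashlib
from pathlib import Path

def tree_sha256(skill_dir: str) -> str:
    """Hash of all file paths + contents, sorted; any edit changes it."""
    h = hashlib.sha256()
    root = Path(skill_dir)
    for p in sorted(x for x in root.rglob("*") if x.is_file()):
        h.update(str(p.relative_to(root)).encode() + b"\0" + p.read_bytes())
    return h.hexdigest()

def is_approved(baseline: dict, skill: str, skill_dir: str) -> bool:
    # Any file change breaks approval and re-triggers risk detection.
    return baseline.get(skill) == tree_sha256(skill_dir)
```

Because the hash covers every file, approval cannot survive even a one-byte edit to any file in the skill.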
Initialization and Manual Monitoring Setup (Agent Directive)
When the Agent first loads this skill, it may perform local initialization, but it must not automatically create cron jobs. Instead, it should guide the user to create monitoring explicitly:
- Check if ~/.openclaw/skills-audit/state.json exists
- If not, run initialization: python3 {baseDir}/scripts/skills_audit.py init --workspace <workspace>
- Check if a skills-audit-watch cron job already exists (openclaw cron list)
- Do not auto-create cron
- Tell the user they can create it manually
- Provide a suggested command example (to be executed by the user, or only after explicit user approval):
openclaw cron add \
  --name "skills-audit-watch" \
  --cron "*/1 * * * *" \
  --tz "Asia/Shanghai" \
  --session isolated \
  --message "python3 {baseDir}/scripts/skills_watch_and_notify.py --workspace <workspace>" \
  --announce \
  --channel <current_channel> \
  --to <current_user_id>
- Remind the user to verify the push target before enabling it, especially if raw diffs or large change details may be sent externally
- Do not default to pushing large raw diffs to external channels; prefer a concise summary first, with details on demand
Design principle: skills-audit handles scanning, logging, and notification text generation. Scheduling and delivery should be user-directed rather than auto-created by default.
Viewing Change Details (Agent Mandatory Flow)
⚠️ Mandatory rule: When a user asks about skill change details, the Agent MUST use the show command. DO NOT interpret raw git diff output or generate free-form summaries.
Trigger phrases (user may say):
- "what changed" / "show diff" / "what's different" / "change details"
- "what exactly changed" / "where did it change" / "show me the changes" (Chinese: "具体改了什么" / "哪里变了" / "看一下变更")
- Any request for diff / change / modification details
Fixed execution flow (cannot be skipped):
- If the user mentions a specific skill: python3 {baseDir}/scripts/skills_audit.py show --skill <skill-name>
- If no specific skill is mentioned: python3 {baseDir}/scripts/skills_audit.py show
- Send the full output of show directly to the user; do not modify, truncate, or reformat it
- For older history, use --commit-range: python3 {baseDir}/scripts/skills_audit.py show --commit-range HEAD~3..HEAD~2
Prohibited behaviors:
- ❌ Running git diff and summarizing in your own words
- ❌ Omitting any part of the show output
- ❌ Wrapping output in markdown code blocks and reformatting
- ✅ Send the raw text output from show as-is
Manual Usage
Initialize
python3 {baseDir}/scripts/skills_audit.py init --workspace /root/.openclaw/workspace
Manual Scan
python3 {baseDir}/scripts/skills_audit.py scan --workspace /root/.openclaw/workspace --who user --channel local
Local Notification Test
python3 {baseDir}/scripts/skills_watch_and_notify.py --workspace /root/.openclaw/workspace
Safety Notes
- Static analysis only: never execute unknown skill code during audit.
- When risk.level is high / extreme, require human review or a sandbox.
- Prefer OpenClaw cron add / cron edit for scheduling.
- Integrity checks use SHA-256.