# aimlapi-safety

Content moderation and safety checks: instantly classify text or images as safe or unsafe using AI guardrails.
## Install

Source · clone the upstream repo:

```shell
git clone https://github.com/openclaw/skills
```

Claude Code · install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) \
  && git clone --depth=1 https://github.com/openclaw/skills "$T" \
  && mkdir -p ~/.claude/skills \
  && cp -r "$T/skills/aimlapihello/aiml-safety" ~/.claude/skills/clawdbot-skills-aimlapi-safety \
  && rm -rf "$T"
```
Manifest: `skills/aimlapihello/aiml-safety/SKILL.md`
# AIMLAPI Safety

## Overview

Use AI safety ("Guard") models to check that content complies with safety policies, for example when moderating user input or chatbot responses.
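Under the hood this amounts to a single chat-completions call whose reply encodes the verdict. A minimal stdlib sketch, assuming AIMLAPI's OpenAI-compatible endpoint URL and the usual Llama-Guard reply convention (both are assumptions, not read from this skill's scripts):

```python
import json
import os
import urllib.request

# Assumed AIMLAPI chat-completions endpoint (OpenAI-compatible).
AIMLAPI_URL = "https://api.aimlapi.com/v1/chat/completions"

def parse_guard_verdict(reply: str) -> bool:
    # Llama-Guard-style models answer "safe", or "unsafe" followed by a
    # category code such as "S9"; anything else is treated as unsafe.
    lines = reply.strip().lower().splitlines()
    return bool(lines) and lines[0].strip() == "safe"

def check_safety(content: str, model: str = "meta-llama/Llama-Guard-3-8B") -> bool:
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }).encode()
    req = urllib.request.Request(
        AIMLAPI_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {os.environ['AIMLAPI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return parse_guard_verdict(data["choices"][0]["message"]["content"])
```

`scripts/check_safety.py` presumably wraps a call like this; the sketch is only meant to show the request/verdict shape.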
## Quick start

```shell
export AIMLAPI_API_KEY="sk-..."
python scripts/check_safety.py --content "How to make a bomb"
```
## Tasks

### Check text safety

```shell
python scripts/check_safety.py \
  --content "I want to learn about security" \
  --model meta-llama/Llama-Guard-3-8B
```
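The same check can gate chatbot output before it reaches the user. A sketch with the guard call injected as a callable (`moderate_reply` and the stub checker are hypothetical, not part of this skill):

```python
from typing import Callable

REFUSAL = "Sorry, I can't help with that."

def moderate_reply(reply: str, is_safe: Callable[[str], bool]) -> str:
    # Pass the candidate reply through the safety check; substitute a
    # canned refusal when it is flagged. In practice is_safe would wrap
    # a guard-model call like the one scripts/check_safety.py performs.
    return reply if is_safe(reply) else REFUSAL

# Demo with a stub checker that flags one banned phrase.
stub = lambda text: "bomb" not in text.lower()
print(moderate_reply("Here is a cake recipe.", stub))  # passed through
print(moderate_reply("How to make a bomb", stub))      # replaced
```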
## Supported models

- `meta-llama/Llama-Guard-3-8B` (default)
- Other Llama-Guard variants available on AIMLAPI