Stop AI agents from secretly bypassing your rules. Mechanical enforcement with git hooks, secret detection, deployment verification, and import registries. Born from real production incidents: server crashes, token leaks, code rewrites. Works with Claude Code, Clawdbot, Cursor. Install once, enforce forever.
```bash
# Clone the full registry
git clone https://github.com/majiayu000/claude-skill-registry

# Or install only this skill into ~/.claude/skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/agent-guardrails" ~/.claude/skills/majiayu000-claude-skill-registry-agent-guardrails && rm -rf "$T"
```
# Agent Guardrails
Mechanical enforcement for AI agent project standards. Rules in markdown are suggestions. Code hooks are laws.
## Quick Start

```bash
cd your-project/
bash /path/to/agent-guardrails/scripts/install.sh
```

This installs the git pre-commit hook, creates a registry template, and copies check scripts into your project.
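To make the enforcement idea concrete, here is a minimal sketch of the kind of check a pre-commit hook can run on staged files. The marker string `GUARDRAIL-BYPASS` and the function name are hypothetical; the shipped hook in `assets/` may use different patterns.

```shell
# Sketch of a pre-commit-style check (illustrative, not the shipped hook):
# refuse any staged file containing a bypass marker, so "quick version"
# code never reaches a commit.
check_staged() {
  for f in "$@"; do
    # GUARDRAIL-BYPASS is a hypothetical marker string for this sketch.
    if grep -qE 'GUARDRAIL-BYPASS' "$f"; then
      echo "pre-commit: bypass pattern in $f" >&2
      return 1
    fi
  done
  return 0
}

# A real hook would call it on the staged file list:
# check_staged $(git diff --cached --name-only --diff-filter=ACM)
```

Because the hook exits non-zero, git aborts the commit; no amount of context drift lets the agent talk its way past it.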
## Enforcement Hierarchy
- Code hooks (git pre-commit, pre/post-creation checks) — 100% reliable
- Architectural constraints (registries, import enforcement) — 95% reliable
- Self-verification loops (agent checks own work) — 80% reliable
- Prompt rules (AGENTS.md, system prompts) — 60-70% reliable
- Markdown rules — 40-50% reliable, degrades with context length
## Tools Provided

### Scripts
| Script | When to Run | What It Does |
|---|---|---|
| `install.sh` | Once per project | Installs hooks and scaffolding |
| `pre-create-check.sh` | Before creating new files | Lists existing modules/functions to prevent reimplementation |
| `post-create-validate.sh` | After creating/editing files | Detects duplicates, missing imports, bypass patterns |
| `check-secrets.sh` | Before commits / on demand | Scans for hardcoded tokens, keys, passwords |
| `create-deployment-check.sh` | When setting up deployment verification | Creates `.deployment-check.sh`, checklist, and git hook template |
| `install-skill-feedback-loop.sh` | When setting up skill update automation | Creates detection, auto-commit, and git hook for skill updates |
### Assets

- Ready-to-install git hook blocking bypass patterns and secrets
- Template for project module registries
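A module registry gives agents one place to look before writing new code. A minimal registry might look like this (module and function names below are hypothetical; see the shipped template for the real structure):

```markdown
# Module Registry

## notify
- `send_alert(msg, channel)`: validated alert dispatch; import this, do not rewrite

## auth
- `load_token()`: reads the API token from the environment; never hardcode
```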
### References

- Research on why code > prompts for enforcement
- `references/agents-md-template.md`: template AGENTS.md with mechanical enforcement rules
- `references/deployment-verification-guide.md`: full guide on preventing deployment gaps
- Meta-enforcement: automatic skill update feedback loop
- `references/SKILL_CN.md`: Chinese translation of this document
## Usage Workflow

### Setting up a new project

```bash
bash scripts/install.sh /path/to/project
```
### Before creating any new .py file

```bash
bash scripts/pre-create-check.sh /path/to/project
```

Review the output. If existing functions already cover your needs, import them.
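In spirit, the pre-create check is an inventory pass over existing definitions. A minimal sketch of that idea (the shipped `pre-create-check.sh` may survey more than a plain `grep` can):

```shell
# Minimal inventory sketch: print every top-level Python function in a
# project so the agent can import instead of reimplementing.
# (Illustrative only; not the shipped pre-create-check.sh.)
list_functions() {
  # $1: project root
  grep -rEn --include='*.py' '^def [A-Za-z_][A-Za-z0-9_]*' "$1"
}
```

Output like `notify.py:3:def send_alert` is the cue to import `send_alert` rather than write a new one.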
### After creating/editing a .py file

```bash
bash scripts/post-create-validate.sh /path/to/new_file.py
```

Fix any warnings before proceeding.
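A duplicate-definition check in the spirit of `post-create-validate.sh` can be sketched as follows (illustrative, not the shipped script): flag a new file that redefines a function name already present elsewhere in the project.

```shell
# Flag functions in a new file that already exist elsewhere in the project.
# (Sketch only; the shipped validator also checks imports and bypass patterns.)
find_duplicates() {
  # $1: project root, $2: newly created file
  new_defs=$(grep -oE '^def [A-Za-z_][A-Za-z0-9_]*' "$2" | awk '{print $2}')
  for d in $new_defs; do
    if grep -rqE --include='*.py' --exclude="$(basename "$2")" "^def $d\(" "$1"; then
      echo "duplicate: $d is already defined elsewhere; import it instead" >&2
      return 1
    fi
  done
  return 0
}
```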
### Setting up deployment verification

```bash
bash scripts/create-deployment-check.sh /path/to/project
```

This creates:

- `.deployment-check.sh`: automated verification script
- `DEPLOYMENT-CHECKLIST.md`: full deployment workflow
- `.git-hooks/pre-commit-deployment`: git hook template
Then customize:

- Add tests to `.deployment-check.sh` for your integration points
- Document your flow in `DEPLOYMENT-CHECKLIST.md`
- Install the git hook

See `references/deployment-verification-guide.md` for the full guide.
### Adding to AGENTS.md

Copy the template from `references/agents-md-template.md` and adapt it to your project.
## 中文文档 / Chinese Documentation

See `references/SKILL_CN.md` for the full Chinese translation of this skill.
## Common Agent Failure Modes

### 1. Reimplementation (Bypass Pattern)

**Symptom:** Agent creates a "quick version" instead of importing validated code.

**Enforcement:** `pre-create-check.sh` + `post-create-validate.sh` + git hook
### 2. Hardcoded Secrets

**Symptom:** Tokens and keys in code instead of environment variables.

**Enforcement:** `check-secrets.sh` + git hook
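A secret scan can be as simple as a recursive grep for credential-shaped patterns. The sketch below uses a hypothetical starter set of two patterns (an AWS-style access key ID and a quoted assignment to `api_key`/`token`/`password`); the shipped `check-secrets.sh` likely covers more formats.

```shell
# Minimal secret-scan sketch (illustrative, not the shipped check-secrets.sh):
# returns 1 if any credential-shaped string is found under the given directory.
scan_secrets() {
  # $1: directory to scan
  if grep -rEin --include='*.py' \
      -e 'AKIA[0-9A-Z]{16}' \
      -e "(api_key|token|password)[[:space:]]*=[[:space:]]*['\"]" \
      "$1"; then
    return 1  # matches found: fail the check
  fi
  return 0
}
```

Reading the value from the environment (`os.environ["API_KEY"]`) passes; assigning a quoted literal fails.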
### 3. Deployment Gap

**Symptom:** Feature is built but never wired into production, so users never receive the benefit.

**Example:** `notify.py` was updated, but cron still calls the old version.

**Enforcement:** `.deployment-check.sh` + git hook
This is the hardest to catch because:
- Code runs fine when tested manually
- Agent marks task "done" after writing code
- Problem only surfaces when user complains
**Solution:** Mechanical end-to-end verification before allowing "done."
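That verification can be as blunt as confirming the scheduler actually references the new entrypoint. A sketch, reusing the `notify.py` example (the cron line is hypothetical):

```shell
# Mechanical "done" gate for the deployment gap (sketch only; a generated
# .deployment-check.sh would be more thorough): a feature counts as deployed
# only if the scheduler config actually invokes the module it lives in.
verify_deployed() {
  # $1: scheduler config text (e.g. the output of `crontab -l`)
  # $2: entrypoint the new feature lives in
  if printf '%s\n' "$1" | grep -qF "$2"; then
    echo "deployed: $2 is wired into the scheduler"
  else
    echo "NOT deployed: $2 is never invoked" >&2
    return 1
  fi
}
```

If cron still points at the old version, the check fails and "done" is blocked, surfacing the gap before a user does.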
### 4. Skill Update Gap (Meta)

**Symptom:** An enforcement improvement was built in one project, but the skill itself was never updated.

**Example:** Deployment verification was created for Project A, but other projects don't benefit because the skill wasn't updated.

**Enforcement:** `install-skill-feedback-loop.sh` → automatic detection + semi-automatic commit
This is a meta-failure mode because:
- It's about enforcement improvements themselves
- Without fix: improvements stay siloed
- With fix: knowledge compounds automatically
**Solution:** Automatic detection of enforcement improvements, with task creation and semi-automatic commits.
## Key Principle
Don't add more markdown rules. Add mechanical enforcement. If an agent keeps bypassing a standard, don't write a stronger rule — write a hook that blocks it.
Corollary: If an agent keeps forgetting integration, don't remind it — make it mechanically verify before commit.