# claude-skill-registry · guardrail-scorecard

Framework for defining, monitoring, and enforcing guardrail metrics across experiments.
## Install

### From source · Clone the upstream repo

```shell
git clone https://github.com/majiayu000/claude-skill-registry
```

### Claude Code · Install into `~/.claude/skills/`

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/guardrail-scorecard" ~/.claude/skills/majiayu000-claude-skill-registry-guardrail-scorecard && rm -rf "$T"
```
## Manifest

Source: `skills/data/guardrail-scorecard/SKILL.md`
## Guardrail Scorecard Skill
### When to Use
- Setting non-negotiable metrics (stability, churn, latency, compliance) before launching tests.
- Monitoring live experiments to ensure guardrails stay within thresholds.
- Reporting guardrail status in launch packets and post-test readouts.
### Framework
- Metric Inventory – list guardrail metrics, owners, data sources, refresh cadence.
- Threshold Matrix – define warning vs critical bands per metric / persona / region.
- Alerting & Escalation – map notification channels, DRI, and decision timelines.
- Exception Handling – document when guardrail overrides are acceptable and required approvals.
- Retrospective Loop – log breaches, mitigations, and rule updates for future tests.
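As a minimal sketch of the Metric Inventory and Threshold Matrix steps, a guardrail row could be modeled as a small dataclass that classifies live readings against its warning and critical bands. The metric names, thresholds, and channel names below are illustrative assumptions, not part of the skill itself.

```python
from dataclasses import dataclass


@dataclass
class Guardrail:
    """One register entry: metric, bands, owner, alert channel (all illustrative)."""
    metric: str
    warning: float   # readings at or above this value raise a warning
    critical: float  # readings at or above this value are critical
    owner: str
    channel: str

    def status(self, value: float) -> str:
        """Classify a live reading against the warning/critical bands."""
        if value >= self.critical:
            return "critical"
        if value >= self.warning:
            return "warning"
        return "ok"


# Hypothetical latency guardrail for a live experiment.
latency = Guardrail("p95_latency_ms", warning=250, critical=400,
                    owner="platform-dri", channel="#exp-alerts")
print(latency.status(180))  # ok
print(latency.status(320))  # warning
print(latency.status(450))  # critical
```

The same objects can drive the Alerting & Escalation step: a `warning` result notifies the channel, while `critical` pages the DRI on the documented decision timeline.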
### Templates
- Guardrail register (metric, threshold, owner, alert channel).
- Live monitoring dashboard layout.
- Exception memo structure for approvals.
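One way to sketch the exception memo template is as a plain record of the fields an approver would need. The field names and values below are assumptions for illustration; the skill does not prescribe a schema.

```python
# Illustrative exception memo: captures the breach, the justification,
# the required approvals, and an expiry so overrides cannot linger.
exception_memo = {
    "metric": "p95_latency_ms",          # which guardrail is being overridden
    "observed_value": 420,               # reading that breached the band
    "threshold": 400,                    # critical band it crossed
    "justification": "Known CDN migration; user impact mitigated.",
    "approvers": ["dri@example.com", "eng-lead@example.com"],
    "expiry": "2024-05-10",              # override lapses automatically
}

# A memo is only actionable once every required field is present.
required = {"metric", "observed_value", "threshold",
            "justification", "approvers", "expiry"}
print(required.issubset(exception_memo))  # True
```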
### Tips
- Tie guardrails to downstream systems (billing, support) to catch second-order impacts.
- Keep thresholds dynamic for seasonality but document logic.
- Pair with `launch-experiment` to ensure readiness before flipping flags.
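The "keep thresholds dynamic for seasonality" tip could be sketched as a threshold that scales with a documented yearly cycle. The sinusoidal model and the 10% amplitude are illustrative assumptions; the point is that the adjustment logic is explicit and reviewable, not hard-coded per test.

```python
import math
from datetime import date


def seasonal_threshold(base: float, day: date, amplitude: float = 0.10) -> float:
    """Scale a base threshold by a documented yearly seasonality factor.

    The sinusoidal shape and 10% amplitude are illustrative assumptions;
    real cadences should come from the guardrail register's documented logic.
    """
    day_of_year = day.timetuple().tm_yday
    factor = 1 + amplitude * math.sin(2 * math.pi * day_of_year / 365)
    return base * factor


# A 400ms critical band flexes within roughly +/-10% over the year.
print(round(seasonal_threshold(400.0, date(2024, 1, 1)), 1))
```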