ai-analyst guardrails

Skill: Guardrails Awareness

Install

From source · clone the upstream repo:

    git clone https://github.com/ai-analyst-lab/ai-analyst

Claude Code · install into ~/.claude/skills/:

    T=$(mktemp -d) && git clone --depth=1 https://github.com/ai-analyst-lab/ai-analyst "$T" && mkdir -p ~/.claude/skills && cp -r "$T/.claude/skills/guardrails" ~/.claude/skills/ai-analyst-lab-ai-analyst-guardrails && rm -rf "$T"

Manifest: .claude/skills/guardrails/skill.md

Source content

Skill: Guardrails Awareness

Purpose

Ensure that every success metric is paired with at least one guardrail metric, and that positive findings are checked for trade-offs before being presented as wins.

When to Use

Apply this skill in two situations:

  1. When defining metrics — after using the Metric Spec skill, check whether the metric has a guardrail pair
  2. When reporting positive findings — before presenting any improvement, check whether a related guardrail metric degraded

Instructions

What Are Guardrails?

A guardrail metric is a metric you don't want to degrade while optimizing a success metric. Guardrails protect against winning the metric game while losing the business game.

SUCCESS METRIC:  The metric you're trying to improve
GUARDRAIL:       The metric that must not get worse

The rule: Never celebrate an improvement on a success metric without checking its guardrail(s). An improvement with a degraded guardrail is a trade-off, not a win.

Common Guardrail Pairs

| Success Metric | Guardrail(s) | Why |
|----------------|--------------|-----|
| Conversion rate | Average order value, return rate | Aggressive discounts inflate conversion but erode margin and invite returns |
| Signup rate | Activation rate, 7-day retention | Lowering the signup bar brings in unqualified users who churn immediately |
| Revenue per user | User satisfaction (NPS/CSAT), support ticket volume | Monetization pressure degrades experience |
| Feature adoption | Core workflow completion, session duration | Forcing feature usage may disrupt existing workflows |
| Time to complete (speed) | Error rate, quality score | Rushing degrades accuracy |
| Cost reduction | Quality, customer satisfaction | Cutting costs can degrade service |
| Engagement (DAU, sessions) | Revenue per user, churn rate | Engagement tricks (notifications, dark patterns) don't translate to value |
| Support resolution time | Customer satisfaction, reopen rate | Fast close ≠ good close if tickets reopen |

How to Apply

When Defining Metrics

After specifying a metric using the Metric Spec skill, add a guardrail section:

### Guardrails
| Guardrail Metric | Acceptable Range | Check Frequency |
|-----------------|-----------------|-----------------|
| [guardrail 1] | [must stay above X / must not increase by >Y%] | [same cadence as success metric] |
| [guardrail 2] | [threshold] | [cadence] |

Rules for selecting guardrails:

  1. At least one guardrail per success metric
  2. The guardrail should measure a different dimension of value (e.g., if success is quantity, guardrail is quality)
  3. The guardrail must be measurable with available data
  4. If no obvious guardrail exists, use customer satisfaction or support ticket volume as defaults
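The selection rules above, including the rule-4 fallback, can be sketched as a small lookup. This is an illustrative sketch, not part of the skill itself; every metric name here is a hypothetical placeholder for whatever the analysis actually tracks:

```python
# Sketch of guardrail selection. Keys mirror a few rows of the
# Common Guardrail Pairs table; names are hypothetical placeholders.
GUARDRAIL_PAIRS = {
    "conversion_rate": ["avg_order_value", "return_rate"],
    "signup_rate": ["activation_rate", "retention_7d"],
    "resolution_time": ["csat", "reopen_rate"],
}

# Rule 4: fall back to the default guardrails when no obvious pair exists.
DEFAULT_GUARDRAILS = ["customer_satisfaction", "support_ticket_volume"]

def suggest_guardrails(success_metric: str) -> list[str]:
    """Return the known guardrail pair for a metric, or the defaults."""
    return GUARDRAIL_PAIRS.get(success_metric, DEFAULT_GUARDRAILS)
```

A metric with no entry in the table still gets at least one guardrail (rule 1), because the defaults always apply.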

When Reporting Positive Findings

Before presenting any statement like "[metric] improved by X%," run this check:

GUARDRAIL CHECK
□ Identified guardrail metric(s) for [success metric]
□ Computed guardrail metric(s) over the same time period
□ Compared guardrail to baseline / acceptable range
□ Result: [CLEAR / TRADE-OFF / DEGRADED]

Verdicts:

| Verdict | Guardrail Status | How to Present |
|---------|------------------|----------------|
| CLEAR | Guardrail stable or improved | Present the improvement as a win |
| TRADE-OFF | Guardrail slightly degraded (<10% relative) | Present the improvement AND the trade-off: "Conversion improved 15%, but AOV decreased 5%. Net revenue impact is roughly +9%." |
| DEGRADED | Guardrail significantly degraded (≥10% relative) | Do NOT present as a win. Present as: "Conversion improved 15%, but return rate doubled. The net impact may be negative — further investigation needed." |
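The verdict thresholds can be sketched as a tiny classifier. This assumes the 10% relative cutoff stated above; the `higher_is_better` flag is an added assumption to cover guardrails where a rise is bad (return rate, reopen rate):

```python
def guardrail_verdict(rel_change: float, higher_is_better: bool = True) -> str:
    """Classify a guardrail's relative change, e.g. -0.05 for a 5% drop."""
    # Normalize so a positive value always means degradation.
    degradation = -rel_change if higher_is_better else rel_change
    if degradation <= 0:
        return "CLEAR"       # stable or improved
    if degradation < 0.10:   # slight degradation, under 10% relative
        return "TRADE-OFF"
    return "DEGRADED"        # 10% relative or worse
```

For example, an 85% jump in reopen rate is `guardrail_verdict(0.85, higher_is_better=False)`, which classifies as DEGRADED.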

Guardrail Escalation

When a guardrail is degraded:

  1. Quantify both sides — compute the success metric gain AND the guardrail loss in the same units (usually dollars or users)
  2. Compute net impact — is the gain larger than the loss?
  3. Flag uncertainty — guardrail degradation often has delayed effects (e.g., returns take weeks to materialize, churn shows up months later). Note this.
  4. Recommend investigation — "The conversion improvement looks positive, but the return rate increase warrants investigation before concluding this is a net win."
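Steps 1 and 2 amount to putting both sides in the same unit. A minimal sketch using the +15% conversion / -5% AOV trade-off from the verdicts table; the baseline figures are made up for illustration:

```python
# Hypothetical baseline figures; only the relative changes come from the text.
visitors = 10_000
base_conv, base_aov = 0.040, 100.00   # baseline conversion rate and order value
new_conv = base_conv * 1.15           # success metric: conversion +15%
new_aov = base_aov * 0.95             # guardrail: average order value -5%

# Same units on both sides: revenue.
base_revenue = visitors * base_conv * base_aov
new_revenue = visitors * new_conv * new_aov
net_impact = (new_revenue - base_revenue) / base_revenue
print(f"Net revenue impact: {net_impact:+.1%}")
```

Because the effects compound multiplicatively, the net is 1.15 × 0.95 - 1 ≈ +9.25%: the gain outweighs the loss. Step 3 still applies, since returns and churn lag the conversion lift.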

Output Format

When guardrails are checked, add this section to the analysis report:

## Guardrail Check

| Success Metric | Change | Guardrail | Change | Verdict |
|---------------|--------|-----------|--------|---------|
| [metric] | +X% | [guardrail 1] | [no change / +Y% / -Z%] | CLEAR / TRADE-OFF / DEGRADED |
| | | [guardrail 2] | [change] | [verdict] |

**Net assessment:** [The improvement is real / The improvement comes with a trade-off / The improvement may be net negative]

Examples

Example 1: Clear Win

## Guardrail Check

| Success Metric | Change | Guardrail | Change | Verdict |
|---------------|--------|-----------|--------|---------|
| Checkout conversion | +12% | Avg order value | +2% (stable) | CLEAR |
| | | Return rate | -1% (improved) | CLEAR |

**Net assessment:** The conversion improvement is a genuine win. Both guardrails are stable or improving.

Example 2: Trade-Off

## Guardrail Check

| Success Metric | Change | Guardrail | Change | Verdict |
|---------------|--------|-----------|--------|---------|
| Signup rate | +25% | 7-day activation | -8% | TRADE-OFF |
| | | 30-day retention | -3% (within normal range) | CLEAR |

**Net assessment:** The signup rate improvement is partially offset by lower activation. The new signups are less qualified. Recommend segmenting the new signups to identify which acquisition channel is bringing lower-quality users.

Example 3: Degraded Guardrail

## Guardrail Check

| Success Metric | Change | Guardrail | Change | Verdict |
|---------------|--------|-----------|--------|---------|
| Resolution time | -40% (faster) | Reopen rate | +85% | DEGRADED |
| | | CSAT score | -22% | DEGRADED |

**Net assessment:** The faster resolution time is coming at the cost of quality. Tickets are being closed prematurely and reopened, and customer satisfaction has dropped significantly. This is NOT a net improvement. Recommend reverting the process change and investigating sustainable ways to reduce resolution time.

Anti-Patterns

  1. Never report a success metric improvement without checking guardrails — a 20% conversion lift with a 30% return rate increase is not a win
  2. Never define a metric without at least one guardrail — every metric can be gamed; guardrails prevent it
  3. Never dismiss a small guardrail degradation — small degradations compound and may have delayed effects (churn shows up months later)
  4. Never use the same metric as both success and guardrail — they must measure different dimensions of value
  5. Never skip the net impact calculation — "conversion up, returns up" is not actionable without knowing which effect is larger