GAAI-framework success-metrics-evaluation

Evaluate delivery outcomes against defined success metrics and acceptance goals. Activate after Delivery to verify that delivered work creates real business and technical impact, not just output.

Install

Source · Clone the upstream repo:

git clone https://github.com/Fr-e-d/GAAI-framework

Claude Code · Install into ~/.claude/skills/:

T=$(mktemp -d) &&
  git clone --depth=1 https://github.com/Fr-e-d/GAAI-framework "$T" &&
  mkdir -p ~/.claude/skills &&
  cp -r "$T/.gaai/core/skills/cross/success-metrics-evaluation" \
    ~/.claude/skills/fr-e-d-gaai-framework-success-metrics-evaluation &&
  rm -rf "$T"

Manifest: .gaai/core/skills/cross/success-metrics-evaluation/SKILL.md

Source content

Success Metrics Evaluation

Purpose / When to Activate

Activate after Delivery to verify outcomes, not just outputs. Prevents "output without outcome."

Use when:

  • Success metrics were defined in the PRD or Story
  • Delivery is complete and runtime data is available
  • Objective quality gates are required

Process

  1. Map each Story to its defined success metrics
  2. Measure artefacts and runtime results against targets
  3. Detect underperformance and partial success
  4. Generate actionable improvement insights
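The steps above can be sketched as a simple evaluation loop. This is illustrative only: the `Story` and `Metric` types and the shape of the report rows are assumptions for the sketch, not part of the GAAI framework's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    target: float    # target defined in the PRD/Story (step 1)
    measured: float  # runtime value collected after Delivery (step 2)

@dataclass
class Story:
    id: str
    metrics: list = field(default_factory=list)

def evaluate(stories):
    """Compare each story's measured metrics against its defined targets
    and flag gaps (steps 3-4 then turn these rows into insights)."""
    report = []
    for story in stories:
        for m in story.metrics:
            report.append({
                "story": story.id,
                "metric": m.name,
                "target": m.target,
                "measured": m.measured,
                "status": "met" if m.measured >= m.target else "gap",
            })
    return report
```

A "gap" row is the raw signal; turning it into an improvement suggestion linked to a backlog item is the part that stays with the skill (or a human).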

Outputs

  • Story-by-story KPI report
  • Metric vs target comparisons
  • Identified gaps with root signals
  • Improvement suggestions linked to backlog items

Quality Checks

  • Each metric is measured against a defined target
  • Gaps are identified with root cause signals
  • Recommendations are linked to specific backlog items
  • No invented metrics — only those defined in artefacts
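The "no invented metrics" check can be enforced mechanically. A minimal sketch, assuming the metric names defined in the PRD/Story artefacts are available as a set (the function name and row shape are hypothetical):

```python
def check_no_invented_metrics(report_rows, defined_metrics):
    """Reject any report row whose metric name was not defined
    in the PRD/Story artefacts."""
    invented = {row["metric"] for row in report_rows} - set(defined_metrics)
    if invented:
        raise ValueError(f"Metrics not defined in artefacts: {sorted(invented)}")
    return True
```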

Non-Goals

This skill must NOT:

  • Redefine success metrics post-delivery
  • Make product decisions about gaps
  • Substitute for qa-review
Ensures delivery creates real impact. Makes scaling predictable.