claude-skill-registry · anthropic-evaluations

This skill should be used when the user asks to "create evals", "evaluate an agent", "build evaluation suite", or mentions agent testing, graders, or benchmarks. Also suggest when building coding agents, conversational agents, or research agents that need quality assurance.

install

source · Clone the upstream repo:

```shell
git clone https://github.com/majiayu000/claude-skill-registry
```

Claude Code · Install into ~/.claude/skills/:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/anthropic-evaluations" ~/.claude/skills/majiayu000-claude-skill-registry-anthropic-evaluations && rm -rf "$T"
```

manifest: skills/data/anthropic-evaluations/SKILL.md
source content

Anthropic Evaluations

Build rigorous evaluations for AI agents using Anthropic's proven patterns.

Quick Reference

You MUST read the reference files for detailed guidance:

  • YAML Templates:
  • Annotated Examples:

Core Definitions

| Term | Definition |
| --- | --- |
| Task | Single test with defined inputs and success criteria |
| Trial | One attempt at a task (run multiple for consistency) |
| Grader | Logic that scores agent performance; tasks can have multiple |
| Transcript | Complete record of a trial (outputs, tool calls, reasoning) |
| Outcome | Final state in the environment (not just what the agent said) |
| Evaluation harness | Infrastructure that runs evals end-to-end |
| Agent harness | System enabling the model to act as an agent (scaffold) |
| Evaluation suite | Collection of tasks measuring specific capabilities |

Grader Types (Quick Reference)

| Type | Methods | Best For |
| --- | --- | --- |
| Code-based | String match, unit tests, static analysis, state checks | Fast, cheap, objective verification |
| Model-based | Rubric scoring, assertions, pairwise comparison | Nuanced, open-ended tasks |
| Human | SME review, A/B testing, spot-check sampling | Gold-standard calibration |

See Grader Types for detailed comparison.
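As an illustrative sketch of a code-based grader (the function and transcript field names here are hypothetical, not the skill's templates), one might combine a string match on the agent's final answer with an outcome check on the environment's final state:

```python
# Minimal code-based grader sketch (hypothetical names, not the skill's API).
# Combines a string match on the final answer with a state check on the
# environment, scoring the outcome rather than only what the agent said.

def grade_trial(transcript: dict, expected_answer: str, expected_files: set[str]) -> dict:
    """Return a score dict for one trial of a task."""
    answer_ok = transcript.get("final_answer", "").strip() == expected_answer
    # Outcome check: verify the final environment state.
    files_ok = expected_files.issubset(set(transcript.get("files_created", [])))
    return {
        "passed": answer_ok and files_ok,
        "answer_match": answer_ok,
        "state_check": files_ok,
    }

result = grade_trial(
    {"final_answer": "42", "files_created": ["report.md", "data.csv"]},
    expected_answer="42",
    expected_files={"report.md"},
)
```

Because each grader returns structured sub-scores, a task can attach several such graders and aggregate them, as the definitions above allow.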

Capability vs Regression Evals

| Type | Question | Target Pass Rate |
| --- | --- | --- |
| Capability | "What can this agent do well?" | Start low, hill-climb |
| Regression | "Does it still handle what it used to?" | Near 100% |

Capability evals with high pass rates "graduate" to regression suites.
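The graduation step above can be sketched as a simple filter; the 95% threshold and the task-ID structure are assumed example values, not from the source:

```python
# Sketch: promote capability tasks to the regression suite once their
# measured pass rate clears a chosen bar (threshold is an assumed example).
def graduate(pass_rates: dict[str, float], threshold: float = 0.95) -> list[str]:
    """Return task IDs whose pass rate qualifies them for regression testing."""
    return sorted(t for t, rate in pass_rates.items() if rate >= threshold)

regression_suite = graduate({"fix-bug": 0.98, "refactor": 0.62, "write-tests": 0.97})
```

Tasks that do not graduate stay in the capability suite, where a low pass rate is expected while you hill-climb.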

Non-Determinism Metrics

| Metric | Measures | Use When |
| --- | --- | --- |
| pass@k | At least one success in k attempts | One success matters (coding) |
| pass^k | All k attempts succeed | Consistency essential (customer-facing) |

Example: 75% per-trial success rate

  • pass@3 ≈ 98% — probability of at least one success: 1 − 0.25³
  • pass^3 ≈ 42% — probability all three succeed: 0.75³
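Both metrics follow directly from the per-trial success rate under an independence assumption; a minimal sketch reproducing the numbers above:

```python
# Sketch: relating a per-trial success rate p to pass@k and pass^k,
# assuming trials are independent.

def pass_at_k(p: float, k: int) -> float:
    """Probability of at least one success in k independent trials."""
    return 1 - (1 - p) ** k

def pass_hat_k(p: float, k: int) -> float:
    """Probability that all k independent trials succeed."""
    return p ** k

print(round(pass_at_k(0.75, 3), 3))   # 0.984
print(round(pass_hat_k(0.75, 3), 3))  # 0.422
```

The gap between the two (98% vs 42% at the same per-trial rate) is why the choice of metric should match the product requirement, not the other way around.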

Tracked Metrics

```yaml
tracked_metrics:
  - type: transcript
    metrics: [n_turns, n_toolcalls, n_total_tokens]
  - type: latency
    metrics: [time_to_first_token, output_tokens_per_sec, time_to_last_token]
```
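As a sketch of how tracked metrics like these might be aggregated across trials of a task (the transcript fields and helper are assumptions, not the harness's API):

```python
from statistics import mean

# Hypothetical per-trial metric records, matching the categories above.
trials = [
    {"n_turns": 4, "n_toolcalls": 6, "n_total_tokens": 3200, "time_to_first_token": 0.8},
    {"n_turns": 5, "n_toolcalls": 9, "n_total_tokens": 4100, "time_to_first_token": 1.1},
]

def aggregate(trials: list[dict], keys: list[str]) -> dict:
    """Average each tracked metric over all trials of a task."""
    return {k: mean(t[k] for t in trials) for k in keys}

summary = aggregate(trials, ["n_turns", "n_toolcalls", "n_total_tokens"])
```

Tracking these alongside pass/fail scores helps catch regressions in cost and latency even when correctness holds steady.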

Attribution

Based on *Demystifying evals for AI agents* by Anthropic (January 2026).