Claude-skill-registry agent-evaluator
Deterministic custom subagent selection helper. Use when you need a reproducible, auditable decision on which custom subagents to activate for a user query (runs scripts/agent_evaluator.py).
Install
source · Clone the upstream repo
git clone https://github.com/majiayu000/claude-skill-registry
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/majiayu000/claude-skill-registry "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/data/agent-evaluator" ~/.claude/skills/majiayu000-claude-skill-registry-agent-evaluator && rm -rf "$T"
Manifest:
skills/data/agent-evaluator/SKILL.md · source content
Agent Evaluator
Evaluate a user query against the workspace's available subagents and return a JSON decision payload (activated/required/suggested agents and scoring).
Mechanism
Run the evaluator script (located in the scripts folder relative to this skill file) with the user query as an argument:
python scripts/agent_evaluator.py "YOUR_QUERY_HERE"
Optional: include a contextual file path as the second argument:
python scripts/agent_evaluator.py "YOUR_QUERY_HERE" "path/to/file.ext"
Output
- Writes a JSON object to stdout.
- Key fields include:
- activated_agents
- required_agents
- suggested_agents
- evaluations (per-agent score + reasoning)
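The payload's shape beyond those field names isn't specified here. As a rough sketch of consuming it, reusing the decision object from the wrapper above (the inner keys agent, score, and reasoning on each evaluations entry are assumptions, not documented by the skill):

# decision: the parsed JSON object from stdout, e.g. via evaluate_query above.
for name in decision.get("activated_agents", []):
    print("activated:", name)

# evaluations carries a per-agent score plus reasoning; the inner key
# names used here (agent, score, reasoning) are guesses for illustration.
for ev in decision.get("evaluations", []):
    print(ev.get("agent"), ev.get("score"), ev.get("reasoning"))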
Examples
Evaluate a query:
python scripts/agent_evaluator.py "Please help refine our custom instruction file"
Evaluate a query with file context:
python scripts/agent_evaluator.py "Update this instruction" "instructions/agent-forced-eval.instructions.md"