Claude-Code-Agent-Monitor session-compare

install
source · Clone the upstream repo
git clone https://github.com/hoangsonww/Claude-Code-Agent-Monitor
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/hoangsonww/Claude-Code-Agent-Monitor "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/ccam-insights/skills/session-compare" ~/.claude/skills/hoangsonww-claude-code-agent-monitor-session-compare && rm -rf "$T"
manifest: plugins/ccam-insights/skills/session-compare/SKILL.md
source content

Session Compare

Compare two Claude Code sessions side-by-side using Agent Monitor data.

Input

The user provides: $ARGUMENTS

This may be:

  • Two session IDs: "abc123 def456"
  • "best vs worst" — compare highest and lowest productivity sessions
  • "latest 2" — compare the two most recent sessions
  • A session ID + "vs average" — compare one session against the baseline
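
The four argument forms above can be normalized before any API calls are made. A minimal sketch — the `parse_compare_args` helper and its mode names are illustrative, not part of the Agent Monitor API:

```python
import re

def parse_compare_args(args: str):
    """Classify $ARGUMENTS into a comparison mode plus any session IDs.

    Returns (mode, ids), where mode is one of:
    "ids", "best_vs_worst", "latest_2", "vs_average".
    """
    text = args.strip()
    low = text.lower()
    if low == "best vs worst":
        return ("best_vs_worst", [])
    if low == "latest 2":
        return ("latest_2", [])
    # "<session-id> vs average" — keep the ID's original casing
    m = re.match(r"^(\S+)\s+vs\s+average$", text, re.IGNORECASE)
    if m:
        return ("vs_average", [m.group(1)])
    ids = text.split()
    if len(ids) == 2:
        return ("ids", ids)
    raise ValueError(f"unrecognized comparison request: {args!r}")
```

Unrecognized input raises rather than guessing, so the skill can fall back to asking the user for clarification.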

Procedure

  1. Identify sessions to compare:

    • If two IDs given: fetch both from
      http://localhost:4820/api/sessions/{id}
    • If "best vs worst": fetch sessions, score by completion + cost efficiency, pick extremes
    • If "latest 2":
      GET /api/sessions?limit=2
      (default sort: most recently updated first)
    • If "vs average": fetch session + compute averages from last 50 sessions
  2. Gather detailed data for each session:

    • Session metadata:
      GET /api/sessions/{id}
    • Events:
      GET /api/events?session_id={id}
    • Agents:
      GET /api/agents?session_id={id}
    • Cost:
      GET /api/pricing/cost/{id}
  3. Build comparison:

    Overview Comparison

    | Metric      | Session A | Session B | Difference |
    |-------------|-----------|-----------|------------|
    | Status      | completed | error     |            |
    | Model       | sonnet-4  | sonnet-4  | same       |
    | Duration    | 12m 34s   | 45m 12s   | +32m 38s   |
    | Total Cost  | $0.0234   | $0.1456   | +522%      |
    | Events      | 45        | 187       | +315%      |
    | Tools Used  | 8         | 12        | +4         |
    | Error Count | 0         | 7         | +7         |
    | Agents      | 2         | 5         | +3         |

    Token Comparison

    | Token Type  | Session A | Session B | Difference |
    |-------------|-----------|-----------|------------|
    | Input       | N         | N         | ±N%        |
    | Output      | N         | N         | ±N%        |
    | Cache Read  | N         | N         | ±N%        |
    | Cache Write | N         | N         | ±N%        |
    | Efficiency  | N%        | N%        | ±N%        |

    Tool Usage Comparison

    • Tools unique to Session A
    • Tools unique to Session B
    • Shared tools with usage count comparison
    • Error rate per tool in each session

    Timeline Comparison

    • Side-by-side event timeline
    • Where sessions diverged in approach
    • Key decision points that led to different outcomes

    Agent Activity Comparison

    • Agent counts and types
    • Subagent strategy differences
    • Agent success rates
  4. Analysis:

    • Why one session was more efficient/successful than the other
    • Key decisions that made the difference
    • Lessons to apply to future sessions
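
Steps 2–3 above can be sketched as follows. The endpoint paths come straight from the procedure; `gather` assumes a monitor is actually running on port 4820, and `pct_diff` is one possible way to fill the Difference column (round-to-nearest percent is an assumption, not a documented convention):

```python
import json
import urllib.request

BASE = "http://localhost:4820"  # Agent Monitor's local API, per step 1

def fetch(path: str):
    """GET a JSON payload from the Agent Monitor API (requires a running monitor)."""
    with urllib.request.urlopen(f"{BASE}{path}") as resp:
        return json.load(resp)

def gather(session_id: str) -> dict:
    """Pull the four per-session datasets listed in step 2."""
    return {
        "session": fetch(f"/api/sessions/{session_id}"),
        "events": fetch(f"/api/events?session_id={session_id}"),
        "agents": fetch(f"/api/agents?session_id={session_id}"),
        "cost": fetch(f"/api/pricing/cost/{session_id}"),
    }

def pct_diff(a: float, b: float) -> str:
    """Render a Difference cell, e.g. '+522%' (Session B relative to Session A)."""
    if a == 0:
        return f"+{b:g}" if b else "same"
    change = round((b - a) / a * 100)
    return "same" if change == 0 else f"{change:+d}%"
```

When the baseline is zero (e.g. Error Count 0 → 7), a percentage is undefined, so the sketch falls back to an absolute delta like "+7", matching the Overview Comparison table.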

Output Format

Present as a side-by-side comparison report with:

  • Executive comparison summary (which session was "better" and why)
  • Structured comparison tables with color-coded differences (green = better, red = worse)
  • A "Lessons Learned" section with actionable takeaways
  • Overall winner declaration with justification
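
The green/red color-coding depends on the direction of each metric; which metrics count as "lower is better" is an assumption for illustration, not something the API reports:

```python
# Metrics where a lower value is better (assumed set, for illustration only)
LOWER_IS_BETTER = {"Total Cost", "Error Count", "Duration", "Events"}

def color_code(metric: str, a: float, b: float) -> str:
    """Tag a Difference cell green/red from Session B's perspective."""
    if a == b:
        return "neutral"
    b_improved = (b < a) if metric in LOWER_IS_BETTER else (b > a)
    return "green" if b_improved else "red"
```

For example, Session B's higher Total Cost tags red, while its larger agent count tags green under this directional assumption.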