Claude-code-plugins-plus lindy-observability

install
source · Clone the upstream repo
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/saas-packs/lindy-pack/skills/lindy-observability" ~/.claude/skills/jeremylongshore-claude-code-plugins-plus-lindy-observability && rm -rf "$T"
manifest: plugins/saas-packs/lindy-pack/skills/lindy-observability/SKILL.md
source content

Lindy Observability

Overview

Monitor Lindy AI agent execution health, task completion rates, step-level failures, trigger frequency, and credit consumption. Lindy provides built-in task history in the dashboard. External observability requires webhook callbacks, the Task Completed trigger, and application-side metrics collection.

Prerequisites

  • Lindy workspace with active agents
  • For external monitoring: webhook receiver + metrics stack (Prometheus/Grafana, Datadog)
  • For alerts: Slack or email integration configured

Key Observability Signals

| Signal | Source | Why It Matters |
|---|---|---|
| Task completion rate | Tasks tab / callback | Measures agent reliability |
| Task duration | Task detail view | Tracks performance over time |
| Step failure rate | Task detail (red steps) | Identifies broken actions |
| Credit consumption | Billing dashboard | Budget tracking |
| Trigger frequency | Task count over time | Detects trigger storms |
| Agent error rate | Failed tasks / total tasks | Overall health indicator |
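The derived signals in this table are simple ratios over task counts. A minimal TypeScript sketch of the arithmetic (field names here are illustrative, not a Lindy API):

```typescript
// Derived health signals computed from raw task counts.
// The stats shape is an assumption for illustration, not Lindy's schema.
interface AgentTaskStats {
  completed: number;
  failed: number;
  totalDurationSec: number; // sum of all task durations
  creditsUsed: number;
}

function errorRate(s: AgentTaskStats): number {
  const total = s.completed + s.failed;
  return total === 0 ? 0 : s.failed / total;
}

function avgDurationSec(s: AgentTaskStats): number {
  const total = s.completed + s.failed;
  return total === 0 ? 0 : s.totalDurationSec / total;
}

// Example: 90 completed, 10 failed → 10% error rate, 4.5s average duration
const stats: AgentTaskStats = {
  completed: 90,
  failed: 10,
  totalDurationSec: 450,
  creditsUsed: 1200,
};
console.log(errorRate(stats));      // 0.1
console.log(avgDurationSec(stats)); // 4.5
```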

Instructions

Step 1: Dashboard Monitoring (Built-In)

Lindy's Tasks tab provides per-agent monitoring:

  1. Open agent > Tasks tab
  2. Filter by status: Completed, Failed, In Progress
  3. For failed tasks: click to see which step failed and why
  4. Track patterns: same step failing? same time of day? same trigger type?

Step 2: Task Completed Trigger (Agent-to-Agent Monitoring)

Use Lindy's built-in Task Completed trigger to build an observability agent:

Monitoring Agent:
  Trigger: Task Completed (from Production Support Agent)
  Condition: "Go down this path if the task failed"
    → Action: Slack Send Channel Message to #ops-alerts
      Message: "Support Agent task failed: {{task.error}}"
  Condition: "Go down this path if task duration > 30 seconds"
    → Action: Slack Send Channel Message to #ops-alerts
      Message: "Support Agent slow: {{task.duration}}s"
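The two condition branches above boil down to a simple routing rule. A sketch of the same logic in TypeScript (the payload shape and field names are assumptions for illustration, not Lindy's actual task schema):

```typescript
// Mirrors the monitoring agent's condition branches:
// failed → failure alert; slow (>30s) → latency alert; otherwise no alert.
interface TaskResult {
  status: 'completed' | 'failed';
  durationSec: number;
  error?: string;
}

function alertMessage(task: TaskResult): string | null {
  if (task.status === 'failed') {
    return `Support Agent task failed: ${task.error ?? 'unknown error'}`;
  }
  if (task.durationSec > 30) {
    return `Support Agent slow: ${task.durationSec}s`;
  }
  return null; // healthy task: nothing to send
}
```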

Step 3: Webhook-Based Metrics Collection

Configure agents to call your metrics endpoint on task completion:

// metrics-collector.ts — Receive agent metrics via HTTP Request action
import express from 'express';
import { Counter, Histogram, register } from 'prom-client';

const app = express();
app.use(express.json());

// Prometheus metrics
const taskCounter = new Counter({
  name: 'lindy_tasks_total',
  help: 'Total Lindy agent tasks',
  labelNames: ['agent', 'status'],
});

const taskDuration = new Histogram({
  name: 'lindy_task_duration_seconds',
  help: 'Lindy task execution duration',
  labelNames: ['agent'],
  buckets: [1, 2, 5, 10, 30, 60, 120],
});

// Counter (not Gauge): accumulating credits lets rate() compute burn rate
const creditCounter = new Counter({
  name: 'lindy_credits_consumed',
  help: 'Total credits consumed by agent tasks',
  labelNames: ['agent'],
});

// Receive metrics from Lindy HTTP Request action
app.post('/lindy/metrics', (req, res) => {
  const auth = req.headers.authorization;
  if (auth !== `Bearer ${process.env.LINDY_WEBHOOK_SECRET}`) {
    return res.status(401).json({ error: 'Unauthorized' });
  }

  const { agent, status } = req.body;
  // Lindy template variables interpolate as strings — coerce to numbers
  const duration = Number(req.body.duration);
  const credits = Number(req.body.credits);

  taskCounter.inc({ agent, status });
  taskDuration.observe({ agent }, duration);
  creditCounter.inc({ agent }, credits);

  res.json({ recorded: true });
});

// Prometheus scrape endpoint
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', register.contentType);
  res.send(await register.metrics());
});

Lindy agent configuration: Add an HTTP Request action as the last step in each monitored agent:

  • URL:
    https://monitoring.yourapp.com/lindy/metrics
  • Method: POST
  • Body (Set Manually):
    {
      "agent": "support-bot",
      "status": "{{task.status}}",
      "duration": "{{task.duration}}",
      "credits": "{{task.credits}}"
    }
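Because Lindy's template variables interpolate into this body as strings, it may help to validate and coerce the payload before recording metrics, rejecting malformed requests outright. A hedged helper sketch (the `parsePayload` name and shape are assumptions, not part of Lindy):

```typescript
// Validate and coerce an incoming metrics payload. Returns null for
// malformed input so the caller can respond with 400 instead of recording
// garbage. Field names match the HTTP Request body shown above.
interface LindyMetricsPayload {
  agent: string;
  status: string;
  duration: number;
  credits: number;
}

function parsePayload(body: Record<string, unknown>): LindyMetricsPayload | null {
  const agent = body.agent;
  const status = body.status;
  const duration = Number(body.duration);
  const credits = Number(body.credits);
  if (typeof agent !== 'string' || typeof status !== 'string') return null;
  if (Number.isNaN(duration) || Number.isNaN(credits)) return null;
  return { agent, status, duration, credits };
}
```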
    

Step 4: Grafana Dashboard Panels

Key panels for a Lindy monitoring dashboard:

| Panel | Metric | Type |
|---|---|---|
| Task Success Rate | sum(rate(lindy_tasks_total{status="completed"}[1h])) / sum(rate(lindy_tasks_total[1h])) | Percentage gauge |
| Task Failures | rate(lindy_tasks_total{status="failed"}[1h]) | Counter |
| Duration p50/p95 | histogram_quantile(0.95, rate(lindy_task_duration_seconds_bucket[5m])) | Time series |
| Credit Burn Rate | rate(lindy_credits_consumed[1h]) | Counter |
| Active Agents | Count of agents with tasks in last 24h | Stat panel |
| Trigger Frequency | Tasks per hour by agent | Bar chart |

Step 5: Alert Rules

# Prometheus alert rules
groups:
  - name: lindy
    rules:
      - alert: LindyAgentHighFailureRate
        expr: sum(rate(lindy_tasks_total{status="failed"}[30m])) by (agent) / sum(rate(lindy_tasks_total[30m])) by (agent) > 0.1
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Lindy agent {{ $labels.agent }} failure rate > 10%"

      - alert: LindyAgentDown
        expr: absent_over_time(lindy_tasks_total{agent="support-bot"}[1h])
        for: 30m
        labels:
          severity: critical
        annotations:
          summary: "No tasks from support-bot in 1 hour"

      - alert: LindyCreditsBurnRate
        # per-second rate × 3600 s/h × 720 h/month ≈ projected monthly consumption
        expr: rate(lindy_credits_consumed[1h]) * 3600 * 720 > 5000
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Credit burn rate will exhaust monthly budget"

Step 6: Evals (Built-In Quality Monitoring)

Use Lindy Evals to catch quality regressions:

  1. Click the test tube icon below any agent step
  2. Define scoring criteria (LLM-as-judge):
    Score 1 (pass) if the response is professional, accurate, and under 200 words.
    Score 0 (fail) if the response contains hallucinations or exceeds 200 words.
    
  3. Run evals against historical task data
  4. Track scores over time to detect quality drift

Note: Eval runs consume credits but do NOT execute real actions (safe simulation).

Observability Maturity Levels

| Level | What You Monitor | How |
|---|---|---|
| L0 | Nothing | Manual dashboard checks |
| L1 | Task failures | Task Completed trigger + Slack alerts |
| L2 | Success rate + duration | HTTP Request action + Prometheus |
| L3 | Credit burn + quality | Evals + Grafana dashboards |
| L4 | Automated remediation | Monitoring agent auto-restarts failed agents |

Error Handling

| Issue | Cause | Solution |
|---|---|---|
| Metrics endpoint down | Monitoring server crashed | Alert on scrape failures |
| Task Completed not firing | Monitoring agent paused | Check monitoring agent is active |
| Credit burn alert false positive | Legitimate traffic spike | Tune alert threshold |
| Eval scores dropping | Prompt drift or model change | Review recent prompt/model changes |
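The "Alert on scrape failures" fix can use Prometheus's built-in up metric, which is 0 whenever a scrape target is unreachable. A sketch assuming the collector is scraped under a hypothetical job name lindy-metrics (match it to your actual scrape config):

```yaml
# Fires when Prometheus cannot scrape the metrics collector.
# The job label "lindy-metrics" is an assumption, not from this guide.
- alert: LindyMetricsCollectorDown
  expr: up{job="lindy-metrics"} == 0
  for: 5m
  labels:
    severity: critical
  annotations:
    summary: "Lindy metrics collector is unreachable"
```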

Next Steps

Proceed to lindy-incident-runbook for incident response procedures.