Claude-code-plugins-plus-skills · lindy-performance-tuning

Install

Source · Clone the upstream repo:
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills

Claude Code · Install into ~/.claude/skills/:
T=$(mktemp -d) && git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/saas-packs/lindy-pack/skills/lindy-performance-tuning" ~/.claude/skills/jeremylongshore-claude-code-plugins-plus-skills-lindy-performance-tuning && rm -rf "$T"

Manifest: plugins/saas-packs/lindy-pack/skills/lindy-performance-tuning/SKILL.md

Source content

Lindy Performance Tuning

Overview

Lindy agents execute as multi-step workflows where each step (LLM call, action execution, API call, condition evaluation) adds latency and credit cost. Optimization targets: fewer steps, smaller models, faster actions, tighter prompts.

Prerequisites

  • Lindy workspace with active agents
  • Access to agent Tasks tab (view step-by-step execution history)
  • Understanding of agent workflow structure

Instructions

Step 1: Profile Agent Execution

In the Tasks tab, open a completed task and review:

  • Total task duration: Baseline for improvement
  • Per-step timing: Identify the slowest steps
  • Credit consumption: Which steps cost the most
  • Step count: Total actions executed per task

Common bottlenecks:

| Bottleneck | Symptom | Fix |
| --- | --- | --- |
| Large model on simple task | High credit cost, slow | Switch to Gemini Flash |
| Too many LLM steps | Long total duration | Consolidate into fewer steps |
| Agent Step with many skills | Unpredictable path | Reduce to 2-4 focused skills |
| Knowledge Base over-querying | Multiple KB searches | Increase Max Results per query |
| Sequential when parallel possible | Unnecessary waiting | Use loop with Max Concurrent > 1 |
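The timings above come from reading the Tasks tab; no code is involved. As a back-of-envelope aid, a sketch like the following (step names, durations, and credits transcribed by hand from one run; all values are illustrative) ranks steps by their share of total latency:

```python
# Hypothetical per-step readings transcribed from the Tasks tab of one
# completed run. Names and numbers are illustrative, not Lindy output.
steps = [
    {"name": "Email Received (trigger)", "seconds": 0.2, "credits": 0},
    {"name": "Classify email (LLM)",     "seconds": 3.1, "credits": 3},
    {"name": "Search Knowledge Base",    "seconds": 1.4, "credits": 1},
    {"name": "Draft response (LLM)",     "seconds": 4.8, "credits": 3},
    {"name": "Send email",               "seconds": 0.6, "credits": 1},
]

total_s = sum(s["seconds"] for s in steps)
total_c = sum(s["credits"] for s in steps)
print(f"Total: {total_s:.1f}s, {total_c} credits")

# Rank steps by latency share: the top line is the bottleneck to fix first.
for s in sorted(steps, key=lambda s: s["seconds"], reverse=True):
    print(f'{s["name"]:<28} {s["seconds"]:>5.1f}s  ({s["seconds"] / total_s:.0%})')
```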

Step 2: Right-Size Model Selection

Model choice is the single biggest performance lever. Match the model to task complexity:

| Task | Recommended Model | Speed | Credits |
| --- | --- | --- | --- |
| Route email to category | Gemini Flash | Fast | ~1 |
| Extract fields from text | GPT-4o-mini | Fast | ~2 |
| Draft short response | Claude Sonnet | Medium | ~3 |
| Complex multi-step analysis | GPT-4 / Claude Opus | Slow | ~10 |
| Simple phone call | Gemini Flash | Fast | ~20/min |
| Complex phone conversation | Claude Sonnet | Medium | ~20/min |

Rule of thumb: Start with the smallest model. Only upgrade if output quality is insufficient. Most classification and routing tasks work fine with Gemini Flash.
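To make the stakes concrete, here is a rough monthly comparison using the approximate per-call credit figures from the table above; the run volume is an assumed example, not a Lindy default:

```python
# Rough monthly cost of one classification step at two model tiers,
# using the approximate credits-per-call figures from the table above.
runs_per_day = 200      # assumed trigger volume for illustration
days = 30

credits_large = 10 * runs_per_day * days   # large model on a simple task
credits_flash = 1 * runs_per_day * days    # Gemini Flash on the same task

print(f"Large model:  {credits_large:,} credits/month")
print(f"Gemini Flash: {credits_flash:,} credits/month")
print(f"Savings: {1 - credits_flash / credits_large:.0%}")   # 90%
```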

Step 3: Consolidate LLM Steps

Before (3 LLM calls, ~9 credits):

Step 1: Classify email (LLM)
Step 2: Extract key entities (LLM)
Step 3: Generate response (LLM)

After (1 LLM call, ~3 credits):

Step 1: Classify, extract entities, and generate response (single LLM prompt)

Consolidated prompt:

Analyze this email and return JSON with:
1. "classification": one of [billing, technical, general]
2. "entities": {customer_name, product, issue_type}
3. "draft_response": professional reply under 150 words

Email: {{email_received.body}}
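Merging three steps into one means a single response must carry all three fields, so it is worth sanity-checking the output shape downstream. A minimal validation sketch, assuming the JSON keys named in the prompt above (the raw string stands in for the model's reply):

```python
import json

# Stand-in for the consolidated prompt's reply; keys match the prompt above.
raw = """{"classification": "billing",
          "entities": {"customer_name": "Ada", "product": "Pro plan",
                       "issue_type": "duplicate charge"},
          "draft_response": "Hi Ada, ..."}"""

result = json.loads(raw)
assert result["classification"] in {"billing", "technical", "general"}
assert {"customer_name", "product", "issue_type"} <= result["entities"].keys()
assert len(result["draft_response"].split()) <= 150   # "under 150 words"
```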

Step 4: Use Deterministic Actions Where Possible

Replace AI-powered fields with Set Manually mode when values are predictable:

| Field | Instead of AI Prompt | Use Set Manually |
| --- | --- | --- |
| Slack channel | "Post to the support channel" | #support-triage |
| Email subject | "Create an appropriate subject" | [Ticket] {{email_received.subject}} |
| Sheet column | "Determine the right column" | Column A |

Each Set Manually field saves one LLM inference (~1 credit).
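The saving comes from the fact that a Set Manually field is plain template substitution, not a model call. Modeled in Python purely for illustration (Lindy evaluates these templates itself; this is not its API):

```python
# The email-subject row from the table above, as template substitution.
email_received = {"subject": "Can't log in to dashboard"}

# Deterministic string interpolation: zero LLM inferences, zero extra credits.
subject = f"[Ticket] {email_received['subject']}"
print(subject)   # [Ticket] Can't log in to dashboard
```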

Step 5: Optimize Knowledge Base Queries

  • Max Results: Set to the minimum needed (default 4, max 10)
  • Search Fuzziness: Keep at 100 (semantic) unless exact matching is needed
  • Query mode: Use an AI Prompt with specific instructions:

    Search for the customer's specific product issue.
    Focus on: {{extracted_entities.product}} {{extracted_entities.issue_type}}

    Not: "Search for relevant information" (too vague, wastes results)

Step 6: Optimize Trigger Filters

Prevent wasted runs with precise trigger filters:

Before: Email Received (all emails) → 200 runs/day → 600 credits
After:  Email Received (label: "support" AND NOT from: "noreply@")
        → 30 runs/day → 90 credits (85% savings)
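Conceptually, a trigger filter is a predicate evaluated before the agent runs at all. A sketch of the filter above as a Python predicate (Lindy expresses this in the trigger configuration, not in code):

```python
# Model of the filter: label "support" AND NOT from "noreply@".
def should_run(email: dict) -> bool:
    return ("support" in email.get("labels", [])
            and not email.get("from", "").startswith("noreply@"))

print(should_run({"labels": ["support"], "from": "alice@example.com"}))   # True
print(should_run({"labels": ["support"], "from": "noreply@vendor.com"}))  # False
print(should_run({"labels": ["sales"],   "from": "alice@example.com"}))   # False
```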

Step 7: Use Agent Steps Judiciously

Agent Steps (autonomous mode) are powerful but expensive — the agent may take unpredictable paths and use more actions than a deterministic workflow.

Use Agent Steps when: Next steps are genuinely uncertain (complex research, multi-source investigation, adaptive problem-solving)

Use deterministic actions when: Steps are predictable (classify → route → respond)

When using Agent Steps (see the exit-condition sketch after this list):

  • Limit available skills to 2-4
  • Set clear, measurable exit conditions
  • Include a fallback exit condition to prevent infinite loops
  • Monitor credit consumption of first 10 runs to establish baseline
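A sketch of the exit-condition pattern from the list above: a measurable goal check as the primary exit plus a hard action cap as the fallback. Every function here is a placeholder, not a Lindy API:

```python
MAX_ACTIONS = 15   # fallback exit: stop even if the goal is never met

def run_next_skill(step: int) -> str:
    return f"observation-{step}"           # stand-in for one skill invocation

def goal_satisfied(observation: str) -> bool:
    return observation == "observation-3"  # stand-in for the measurable exit

for step in range(MAX_ACTIONS):
    if goal_satisfied(run_next_skill(step)):
        print(f"Goal met after {step + 1} actions")
        break
else:
    print("Fallback exit: action cap reached, escalate to a human")
```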

Step 8: Loop Optimization

For batch processing, configure loops for efficiency (a concurrency sketch follows this list):

  • Max Concurrent: Increase for independent items (parallel execution)
  • Max Cycles: Always set a cap to prevent runaway processing
  • Only pass essential data as loop output (not full context)
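To see what Max Concurrent and Max Cycles control, here is a sketch using a semaphore-capped async loop; Lindy sets these limits in the loop configuration rather than in code:

```python
import asyncio

MAX_CONCURRENT = 4   # independent items processed in parallel
MAX_CYCLES = 20      # hard cap to prevent runaway processing

async def process(item: int, sem: asyncio.Semaphore) -> int:
    async with sem:                  # at most MAX_CONCURRENT run at once
        await asyncio.sleep(0.1)     # stand-in for one loop iteration
        return item * 2              # return only the essential output

async def main() -> None:
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    items = list(range(100))[:MAX_CYCLES]   # enforce the cycle cap
    print(await asyncio.gather(*(process(i, sem) for i in items)))

asyncio.run(main())
```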

Performance Baseline Reference

| Agent Type | Expected Duration | Expected Credits |
| --- | --- | --- |
| Simple router (1 LLM + 1 action) | 2-5 seconds | 1-2 |
| Email triage (classify + respond) | 5-15 seconds | 3-5 |
| Research agent (search + analyze) | 15-60 seconds | 5-15 |
| Multi-agent pipeline | 30-120 seconds | 10-30 |
| Phone call | Real-time | ~20/min |

Error Handling

| Issue | Cause | Solution |
| --- | --- | --- |
| Agent timeout | Too many sequential steps | Consolidate steps, reduce skill count |
| High credit burn | Large model + many steps | Downgrade model, merge LLM calls |
| Inconsistent output | Agent Step choosing different paths | Switch to deterministic workflow |
| KB search slow | Large knowledge base | Reduce fuzziness, increase specificity |
| Loop runs too long | High max cycles, low concurrency | Increase Max Concurrent, lower Max Cycles |

Next Steps

Proceed to lindy-cost-tuning for budget optimization.