Awesome-omni-skills trigger-dev

Trigger.dev Integration workflow skill. Use this skill when the user needs a Trigger.dev expert for background jobs, AI workflows, and reliable async execution, and the operator should preserve the upstream workflow, copied support files, and provenance before merging or handing off.

install
source · Clone the upstream repo
git clone https://github.com/diegosouzapw/awesome-omni-skills
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/diegosouzapw/awesome-omni-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/trigger-dev" ~/.claude/skills/diegosouzapw-awesome-omni-skills-trigger-dev && rm -rf "$T"
manifest: skills/trigger-dev/SKILL.md
source content

Trigger.dev Integration

Overview

This public intake copy packages plugins/antigravity-awesome-skills-claude/skills/trigger-dev from https://github.com/sickn33/antigravity-awesome-skills into the native Omni Skills editorial shape without hiding its origin.

Use it when the operator needs the upstream workflow, support files, and repository context to stay intact while the public validator and private enhancer continue their normal downstream flow.

This intake keeps the copied upstream files intact and uses metadata.json plus ORIGIN.md as the provenance anchor for review.

Trigger.dev expert for background jobs, AI workflows, and reliable async execution with excellent developer experience and TypeScript-first design.

Imported source sections that did not map cleanly to the public headings are still preserved below or in the support files. Notable imported sections: Capabilities, Scope, Tooling, Patterns, Sharp Edges, Validation Checks.

When to Use This Skill

Use this section as the trigger filter. It should make the activation boundary explicit before the operator loads files, runs commands, or opens a pull request.

  • User mentions or implies: trigger.dev
  • User mentions or implies: trigger dev
  • User mentions or implies: background task
  • User mentions or implies: ai background job
  • User mentions or implies: long running task
  • User mentions or implies: integration task

Operating Table

| Situation | Start here | Why it matters |
| --- | --- | --- |
| First-time use | metadata.json | Confirms repository, branch, commit, and imported path before touching the copied workflow |
| Provenance review | ORIGIN.md | Gives reviewers a plain-language audit trail for the imported source |
| Workflow execution | SKILL.md | Starts with the smallest copied file that materially changes execution |
| Supporting context | SKILL.md | Adds the next most relevant copied source file without loading the entire package |
| Handoff decision | Related Skills | Helps the operator switch to a stronger native skill when the task drifts |

Workflow

This workflow is intentionally editorial and operational at the same time. It keeps the imported source useful to the operator while still satisfying the public intake standards that feed the downstream enhancer flow.

  1. Confirm the user goal, the scope of the imported workflow, and whether this skill is still the right router for the task.
  2. Read the overview and provenance files before loading any copied upstream support files.
  3. Load only the references, examples, prompts, or scripts that materially change the outcome for the current request.
  4. Execute the upstream workflow while keeping provenance and source boundaries explicit in the working notes.
  5. Validate the result against the upstream expectations and the evidence you can point to in the copied files.
  6. Escalate or hand off to a related skill when the work moves out of this imported workflow's center of gravity.
  7. Before merge or closure, record what was used, what changed, and what the reviewer still needs to verify.

Imported Workflow Notes

Imported: Capabilities

  • trigger-dev-tasks
  • ai-background-jobs
  • integration-tasks
  • scheduled-triggers
  • webhook-handlers
  • long-running-tasks
  • task-queues
  • batch-processing

Examples

Example 1: Ask for the upstream workflow directly

Use @trigger-dev to handle <task>. Start from the copied upstream workflow, load only the files that change the outcome, and keep provenance visible in the answer.

Explanation: This is the safest starting point when the operator needs the imported workflow, but not the entire repository.

Example 2: Ask for a provenance-grounded review

Review @trigger-dev against metadata.json and ORIGIN.md, then explain which copied upstream files you would load first and why.

Explanation: Use this before review or troubleshooting when you need a precise, auditable explanation of origin and file selection.

Example 3: Narrow the copied support files before execution

Use @trigger-dev for <task>. Load only the copied references, examples, or scripts that change the outcome, and name the files explicitly before proceeding.

Explanation: This keeps the skill aligned with progressive disclosure instead of loading the whole copied package by default.

Example 4: Build a reviewer packet

Review @trigger-dev using the copied upstream files plus provenance, then summarize any gaps before merge.

Explanation: This is useful when the PR is waiting for human review and you want a repeatable audit packet.

Best Practices

Treat the generated public skill as a reviewable packaging layer around the upstream repository. The goal is to keep provenance explicit and load only the copied source material that materially improves execution. The upstream operating principles themselves are preserved verbatim under Imported: Principles below.

Imported Operating Notes

Imported: Principles

  • Tasks are the building blocks - each task is independently retryable
  • Runs are durable - state survives crashes and restarts
  • Integrations are first-class - use built-in API wrappers for reliability
  • Logs are your debugging lifeline - log liberally in tasks
  • Concurrency protects your resources - always set limits
  • Delays and schedules are built-in - no external cron needed
  • AI-ready by design - long-running AI tasks just work
  • Local development matches production - use the CLI

Troubleshooting

Problem: The operator skipped the imported context and answered too generically

Symptoms: The result ignores the upstream workflow in plugins/antigravity-awesome-skills-claude/skills/trigger-dev, fails to mention provenance, or does not use any copied source files at all.

Solution: Re-open metadata.json, ORIGIN.md, and the most relevant copied upstream files. Load only the files that materially change the answer, then restate the provenance before continuing.

Problem: The imported workflow feels incomplete during review

Symptoms: Reviewers can see the generated SKILL.md, but they cannot quickly tell which references, examples, or scripts matter for the current task.

Solution: Point at the exact copied references, examples, scripts, or assets that justify the path you took. If the gap is still real, record it in the PR instead of hiding it.

Problem: The task drifted into a different specialization

Symptoms: The imported skill starts in the right place, but the work turns into debugging, architecture, design, security, or release orchestration that a native skill handles better.

Solution: Use the related skills section to hand off deliberately. Keep the imported provenance visible so the next skill inherits the right context instead of starting blind.

Related Skills

  • @supply-chain-risk-auditor
    - Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @sveltekit
    - Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @swift-concurrency-expert
    - Use when the work is better handled by that native specialization after this imported skill establishes context.
  • @swiftui-expert-skill
    - Use when the work is better handled by that native specialization after this imported skill establishes context.

Additional Resources

Use this support matrix and the linked files below as the operator packet for this imported skill. They should reflect real copied source material, not generic scaffolding.

| Resource family | What it gives the reviewer | Example path |
| --- | --- | --- |
| references | copied reference notes, guides, or background material from upstream | references/n/a |
| examples | worked examples or reusable prompts copied from upstream | examples/n/a |
| scripts | upstream helper scripts that change execution or validation | scripts/n/a |
| agents | routing or delegation notes that are genuinely part of the imported package | agents/n/a |
| assets | supporting assets or schemas copied from the source package | assets/n/a |

Imported Reference Notes

Imported: Scope

  • redis-queues -> bullmq-specialist
  • pure-event-driven -> inngest
  • workflow-orchestration -> temporal-craftsman
  • infrastructure -> infra-architect

Imported: Tooling

Core

  • trigger-dev-sdk
  • trigger-cli

Frameworks

  • nextjs
  • remix
  • express
  • hono

Integrations

  • openai
  • anthropic
  • resend
  • stripe
  • slack
  • supabase

Deployment

  • trigger-cloud
  • self-hosted
  • docker

Imported: Patterns

Basic Task Setup

Setting up Trigger.dev in a Next.js project

When to use: Starting with Trigger.dev in any project

// trigger.config.ts
import { defineConfig } from '@trigger.dev/sdk/v3';

export default defineConfig({
  project: 'my-project',
  runtime: 'node',
  logLevel: 'log',
  retries: {
    enabledInDev: true,
    default: {
      maxAttempts: 3,
      minTimeoutInMs: 1000,
      maxTimeoutInMs: 10000,
      factor: 2,
    },
  },
});

// src/trigger/tasks.ts
import { task, logger } from '@trigger.dev/sdk/v3';

export const helloWorld = task({
  id: 'hello-world',
  run: async (payload: { name: string }) => {
    logger.log('Processing hello world', { payload });

    // Simulate work
    await new Promise(resolve => setTimeout(resolve, 1000));

    return { message: `Hello, ${payload.name}!` };
  },
});

// Triggering from your app
import { helloWorld } from '@/trigger/tasks';

// Fire and forget
await helloWorld.trigger({ name: 'World' });

// Wait for result
const handle = await helloWorld.trigger({ name: 'World' });
const result = await handle.wait();

AI Task with OpenAI Integration

Using built-in OpenAI integration with automatic retries

When to use: Building AI-powered background tasks

import { task, logger } from '@trigger.dev/sdk/v3';
import { openai } from '@trigger.dev/openai';

// Configure OpenAI with Trigger.dev
const openaiClient = openai.configure({
  id: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
});

export const generateContent = task({
  id: 'generate-content',
  retry: {
    maxAttempts: 3,
  },
  run: async (payload: { topic: string; style: string }) => {
    logger.log('Generating content', { topic: payload.topic });

    // Uses Trigger.dev's OpenAI integration - handles retries automatically
    const completion = await openaiClient.chat.completions.create({
      model: 'gpt-4-turbo-preview',
      messages: [
        {
          role: 'system',
          content: `You are a ${payload.style} writer.`,
        },
        {
          role: 'user',
          content: `Write about: ${payload.topic}`,
        },
      ],
    });

    const content = completion.choices[0].message.content;
    logger.log('Generated content', { length: content?.length });

    return { content, tokens: completion.usage?.total_tokens };
  },
});

Scheduled Task with Cron

Tasks that run on a schedule

When to use: Periodic jobs like reports, cleanup, or syncs

import { schedules, task, logger } from '@trigger.dev/sdk/v3';

export const dailyCleanup = schedules.task({
  id: 'daily-cleanup',
  cron: '0 2 * * *', // 2 AM daily
  run: async () => {
    logger.log('Starting daily cleanup');

    // Clean up old records
    const deleted = await db.logs.deleteMany({
      where: {
        createdAt: { lt: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000) },
      },
    });

    logger.log('Cleanup complete', { deletedCount: deleted.count });

    return { deleted: deleted.count };
  },
});

// Weekly report
export const weeklyReport = schedules.task({
  id: 'weekly-report',
  cron: '0 9 * * 1', // Monday 9 AM
  run: async () => {
    const stats = await generateWeeklyStats();
    await sendReportEmail(stats);
    return stats;
  },
});

Batch Processing

Processing large datasets in batches

When to use: Need to process many items with rate limiting

import { task, logger, wait } from '@trigger.dev/sdk/v3';

export const processBatch = task({
  id: 'process-batch',
  queue: {
    concurrencyLimit: 5, // Only 5 running at once
  },
  run: async (payload: { items: string[] }) => {
    const results = [];

    for (const item of payload.items) {
      logger.log('Processing item', { item });

      const result = await processItem(item);
      results.push(result);

      // Respect rate limits
      await wait.for({ seconds: 1 });
    }

    return { processed: results.length, results };
  },
});

// Trigger batch processing
export const startBatchJob = task({
  id: 'start-batch',
  run: async (payload: { datasetId: string }) => {
    const items = await fetchDataset(payload.datasetId);

    // Split into chunks of 100
    const chunks = chunkArray(items, 100);

    // Trigger parallel batch tasks
    const handles = await Promise.all(
      chunks.map(chunk => processBatch.trigger({ items: chunk }))
    );

    logger.log('Started batch processing', {
      totalItems: items.length,
      batches: chunks.length,
    });

    return { batches: handles.length };
  },
});
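
The batch examples above call chunkArray and fetchDataset without defining them. fetchDataset is domain-specific, but chunkArray is a plain utility; a minimal sketch of what the upstream code appears to assume:

// Hypothetical helper assumed by the batch examples above
function chunkArray<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}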

Webhook Handler

Processing webhooks reliably with deduplication

When to use: Handling webhooks from Stripe, GitHub, etc.

import { task, logger, idempotencyKeys } from '@trigger.dev/sdk/v3';

export const handleStripeEvent = task({
  id: 'handle-stripe-event',
  run: async (payload: { eventId: string; type: string; data: any }) => {
    // Idempotency based on Stripe event ID
    const idempotencyKey = await idempotencyKeys.create(payload.eventId);

    if (idempotencyKey.isNew === false) {
      logger.log('Duplicate event, skipping', { eventId: payload.eventId });
      return { skipped: true };
    }

    logger.log('Processing Stripe event', {
      type: payload.type,
      eventId: payload.eventId,
    });

    switch (payload.type) {
      case 'checkout.session.completed':
        await handleCheckoutComplete(payload.data);
        break;
      case 'customer.subscription.updated':
        await handleSubscriptionUpdate(payload.data);
        break;
    }

    return { processed: true, type: payload.type };
  },
});
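
For context, here is a minimal sketch of the endpoint that hands the event to this task. The Next.js route path is an assumption, and Stripe signature verification is deliberately elided:

// app/api/stripe/webhook/route.ts - hypothetical Next.js route handler
import { NextResponse } from 'next/server';
import { handleStripeEvent } from '@/trigger/tasks';

export async function POST(req: Request) {
  // Assumes the Stripe signature was already verified upstream
  const event = JSON.parse(await req.text());

  // Hand off immediately; retries and idempotency live in the task
  await handleStripeEvent.trigger({
    eventId: event.id,
    type: event.type,
    data: event.data,
  });

  return NextResponse.json({ received: true });
}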

Imported: Sharp Edges

Task timeout kills execution without clear error

Severity: CRITICAL

Situation: Long-running AI task or batch process suddenly stops. No error in logs. Task shows as failed in dashboard but no stack trace. Data partially processed.

Symptoms:

  • Task fails with no error message
  • Partial data processing
  • Works locally, fails in production
  • "Task timed out" in dashboard

Why this breaks: Trigger.dev has execution timeouts (defaults vary by plan). When exceeded, the task is killed mid-execution. If you're not logging progress, you won't know where it stopped. This is especially common with AI tasks that can take minutes.

Recommended fix:

Configure explicit timeouts:

export const processDocument = task({
  id: 'process-document',
  machine: {
    preset: 'large-2x',  // More resources = longer allowed time
  },
  run: async (payload) => {
    logger.log('Starting document processing', { docId: payload.id });

    // Log progress at each step
    logger.log('Step 1: Extracting text');
    const text = await extractText(payload.fileUrl);

    logger.log('Step 2: Generating embeddings', { textLength: text.length });
    const embeddings = await generateEmbeddings(text);

    logger.log('Step 3: Storing vectors', { count: embeddings.length });
    await storeVectors(embeddings);

    logger.log('Completed successfully');
    return { processed: true };
  },
});

For very long tasks, break into subtasks, as sketched after this list:

  • Use triggerAndWait for sequential steps
  • Each subtask has its own timeout
  • Progress is visible in dashboard
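
A minimal sketch of that split, reusing the hypothetical extractText helper from above and assuming the v3 triggerAndWait result shape ({ ok, output }):

export const extractStep = task({
  id: 'extract-step',
  run: async (payload: { fileUrl: string }) => {
    // Child task gets its own timeout and retry budget
    return { text: await extractText(payload.fileUrl) };
  },
});

export const processDocumentPipeline = task({
  id: 'process-document-pipeline',
  run: async (payload: { fileUrl: string }) => {
    // triggerAndWait pauses this run until the child task settles
    const extracted = await extractStep.triggerAndWait({ fileUrl: payload.fileUrl });
    if (!extracted.ok) {
      throw new Error('Extraction step failed');
    }

    logger.log('Extraction done', { length: extracted.output.text.length });
    // ...continue with embedding and storage subtasks the same way
    return { processed: true };
  },
});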

Non-serializable payload causes silent task failure

Severity: CRITICAL

Situation: Passing Date objects, class instances, or circular references in payload. Task queued but never runs. Or runs with undefined/null values.

Symptoms:

  • Payload values are undefined in task
  • Date objects become strings
  • Class methods not available
  • "Converting circular structure to JSON"

Why this breaks: Trigger.dev serializes payloads to JSON. Dates become strings, class instances lose methods, functions disappear, circular refs throw. Your task sees different data than you sent.

Recommended fix:

Always use plain objects:

// WRONG - Date becomes string
await myTask.trigger({ createdAt: new Date() });

// RIGHT - ISO string
await myTask.trigger({ createdAt: new Date().toISOString() });

// WRONG - Class instance
await myTask.trigger({ user: new User(data) });

// RIGHT - Plain object
await myTask.trigger({ user: { id: data.id, email: data.email } });

// WRONG - Circular reference
const obj = { parent: null };
obj.parent = obj;
await myTask.trigger(obj);  // Throws!

In task, reconstitute as needed:

run: async (payload: { createdAt: string }) => {
  const date = new Date(payload.createdAt);
  // ...
}

Environment variables not synced to Trigger.dev cloud

Severity: CRITICAL

Situation: Task works locally but fails in production. Env var that exists in Vercel is undefined in Trigger.dev. API calls fail, database connections fail.

Symptoms:

  • "Environment variable not found"
  • API calls return 401 in production tasks
  • Works in dev, fails in production
  • Database connection errors in tasks

Why this breaks: Trigger.dev runs tasks in its own cloud, separate from your Vercel/Railway deployment. Environment variables must be configured in BOTH places. They don't automatically sync.

Recommended fix:

Sync env vars to Trigger.dev:

  1. Go to Trigger.dev dashboard
  2. Project Settings > Environment Variables
  3. Add ALL required env vars

Or use CLI:

# Create .env.trigger file
DATABASE_URL=postgres://...
OPENAI_API_KEY=sk-...
STRIPE_SECRET_KEY=sk_live_...

# Push to Trigger.dev
npx trigger.dev@latest env push

Common missing vars:

  • DATABASE_URL
  • OPENAI_API_KEY / ANTHROPIC_API_KEY
  • STRIPE_SECRET_KEY
  • Service API keys
  • Internal service URLs

Test in staging:

Trigger.dev has separate envs - configure staging too

SDK version mismatch between CLI and package

Severity: HIGH

Situation: Updated @trigger.dev/sdk but forgot to update CLI. Or vice versa. Tasks fail to register. Weird type errors. Dev server crashes.

Symptoms:

  • Tasks not appearing in dashboard
  • Type errors in trigger.config.ts
  • "Failed to register task"
  • Dev server crashes on start

Why this breaks: The Trigger.dev SDK and CLI must be on compatible versions. Breaking changes between versions cause registration failures. The CLI generates types that must match the SDK.

Recommended fix:

Always update together:

# Update both SDK and CLI
npm install @trigger.dev/sdk@latest
npx trigger.dev@latest dev

# Or pin to same version
npm install @trigger.dev/sdk@3.3.0
npx trigger.dev@3.3.0 dev

Check versions:

npx trigger.dev@latest --version
npm list @trigger.dev/sdk

In CI/CD:

- run: npm install @trigger.dev/sdk@${{ env.TRIGGER_VERSION }}
- run: npx trigger.dev@${{ env.TRIGGER_VERSION }} deploy

Task retries cause duplicate side effects

Severity: HIGH

Situation: Task sends email, then fails on next step. Retry sends email again. Customer gets 3 identical emails. Or 3 Stripe charges. Or 3 Slack messages.

Symptoms:

  • Duplicate emails on retry
  • Multiple charges for same order
  • Duplicate webhook deliveries
  • Data inserted multiple times

Why this breaks: Trigger.dev retries failed tasks from the beginning. If your task has side effects before the failure point, those execute again. Without idempotency, you create duplicates.

Recommended fix:

Use idempotency keys:

import { task, idempotencyKeys } from '@trigger.dev/sdk/v3';

export const sendOrderEmail = task({
  id: 'send-order-email',
  run: async (payload: { orderId: string }) => {
    // Check if already sent
    const key = await idempotencyKeys.create(`email-${payload.orderId}`);

    if (!key.isNew) {
      logger.log('Email already sent, skipping');
      return { skipped: true };
    }

    await sendEmail(payload.orderId);
    return { sent: true };
  },
});

Alternative: Track in database

const existing = await db.emailLogs.findUnique({
  where: { orderId_type: { orderId, type: 'order_confirmation' } }
});

if (existing) {
  logger.log('Already sent');
  return;
}

await sendEmail(orderId);
await db.emailLogs.create({ data: { orderId, type: 'order_confirmation' } });

High concurrency overwhelms downstream services

Severity: HIGH

Situation: Burst of 1000 tasks triggered. All hit OpenAI API simultaneously. Rate limited. All fail. Retry. Rate limited again. Vicious cycle.

Symptoms:

  • Rate limit errors (429)
  • Database connection pool exhausted
  • API returns "too many requests"
  • Mass task failures

Why this breaks: Trigger.dev scales to handle many concurrent tasks. But your downstream APIs (OpenAI, databases, external services) have rate limits. Without concurrency control, you overwhelm them.

Recommended fix:

Set queue concurrency limits:

export const callOpenAI = task({
  id: 'call-openai',
  queue: {
    concurrencyLimit: 10,  // Only 10 running at once
  },
  run: async (payload) => {
    // Protected by concurrency limit
    return await openai.chat.completions.create(payload);
  },
});

For rate-limited APIs:

export const callRateLimitedAPI = task({
  id: 'call-api',
  queue: {
    concurrencyLimit: 5,
  },
  retry: {
    maxAttempts: 5,
    minTimeoutInMs: 5000,  // Wait before retry
    factor: 2,  // Exponential backoff
  },
  run: async (payload) => {
    // Add delay between calls
    await wait.for({ milliseconds: 200 });
    return await externalAPI.call(payload);
  },
});

Start conservative:

  • 5-10 for external APIs
  • 20-50 for databases
  • Increase based on monitoring

trigger.config.ts not at project root

Severity: HIGH

Situation: Running npx trigger.dev dev but CLI can't find config. Or config exists but in wrong location (monorepo issue).

Symptoms:

  • "Could not find trigger.config.ts"
  • Tasks not discovered
  • Empty task list in dashboard
  • Works for one package, not another

Why this breaks: The CLI looks for trigger.config.ts at the current working directory. In monorepos, you must run from the package directory, not the root. Wrong location = tasks not discovered.

Recommended fix:

Config must be at package root:

my-app/
├── trigger.config.ts  <- Here
├── package.json
├── src/
│   └── trigger/
│       └── tasks.ts

In monorepos:

monorepo/
├── apps/
│   └── web/
│       ├── trigger.config.ts  <- Here, not at monorepo root
│       ├── package.json
│       └── src/trigger/

# Run from package directory
cd apps/web && npx trigger.dev dev

Specify config location:

npx trigger.dev dev --config ./apps/web/trigger.config.ts

wait.for in loops causes memory issues

Severity: MEDIUM

Situation: Processing thousands of items with wait.for between each. Task memory grows. Eventually killed for memory.

Symptoms:

  • Task killed for memory
  • Slow task execution
  • State blob too large error
  • Works for small batches, fails for large

Why this breaks: Each wait.for creates checkpoint state. In a loop with thousands of iterations, this accumulates. The task's state blob grows until it hits memory limits.

Recommended fix:

Batch instead of individual waits:

// WRONG - Wait per item
for (const item of items) {
  await processItem(item);
  await wait.for({ milliseconds: 100 });  // 1000 waits = bloated state
}

// RIGHT - Batch processing
const chunks = chunkArray(items, 50);
for (const chunk of chunks) {
  await Promise.all(chunk.map(processItem));
  await wait.for({ milliseconds: 500 });  // Only 20 waits
}

For very large datasets, use subtasks:

export const processAll = task({
  id: 'process-all',
  run: async (payload: { items: string[] }) => {
    const chunks = chunkArray(payload.items, 100);

    // Each chunk is a separate task
    await Promise.all(
      chunks.map(chunk =>
        processChunk.triggerAndWait({ items: chunk })
      )
    );
  },
});

Using raw SDK instead of Trigger.dev integrations

Severity: MEDIUM

Situation: Using OpenAI SDK directly. API call fails. No automatic retry. Rate limits not handled. Have to implement all resilience manually.

Symptoms:

  • Manual retry logic in tasks
  • Rate limit errors not handled
  • No automatic logging of API calls
  • Inconsistent error handling

Why this breaks: Trigger.dev integrations wrap SDKs with automatic retries, rate limit handling, and proper logging. Using raw SDKs means you lose these features and have to implement them yourself.

Recommended fix:

Use integrations when available:

// WRONG - Raw SDK
import OpenAI from 'openai';
const openai = new OpenAI();

// RIGHT - Trigger.dev integration
import { openai } from '@trigger.dev/openai';

const openaiClient = openai.configure({
  id: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
});

// Now has automatic retries and rate limiting
export const generateContent = task({
  id: 'generate-content',
  run: async (payload) => {
    const response = await openaiClient.chat.completions.create({
      model: 'gpt-4-turbo-preview',
      messages: [{ role: 'user', content: payload.prompt }],
    });
    return response;
  },
});

Available integrations:

  • @trigger.dev/openai
  • @trigger.dev/anthropic
  • @trigger.dev/resend
  • @trigger.dev/slack
  • @trigger.dev/stripe

Triggering tasks without dev server running

Severity: MEDIUM

Situation: Called task.trigger() but nothing happens. No errors either. The task just disappears into the void. The dev server wasn't running.

Symptoms:

  • Triggers don't run
  • No task in dashboard
  • No errors, just silence
  • Works in production, not dev

Why this breaks: In development, tasks run through the local dev server (npx trigger.dev dev). If it's not running, triggers queue up or fail silently depending on configuration. Production works differently.

Recommended fix:

Always run dev server during development:

# Terminal 1: Your app
npm run dev

# Terminal 2: Trigger.dev dev server
npx trigger.dev dev

Check dev server is connected:

  • Should show "Connected to Trigger.dev"
  • Tasks should appear in console
  • Dashboard shows task registrations

In package.json:

{
  "scripts": {
    "dev": "next dev",
    "trigger:dev": "trigger.dev dev",
    "dev:all": "concurrently \"npm run dev\" \"npm run trigger:dev\""
  }
}

Imported: Validation Checks

Task without logging

Severity: WARNING

Message: Task has no logging. Add logger.log() calls for debugging in production.

Fix action: Import { logger } from '@trigger.dev/sdk/v3' and add log statements

Task without error handling

Severity: ERROR

Message: Task lacks explicit error handling. Unhandled errors may cause unclear failures.

Fix action: Wrap task logic in try/catch and log errors with context
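
A minimal sketch of that fix, with doWork standing in for the task's real logic:

import { task, logger } from '@trigger.dev/sdk/v3';

export const resilientTask = task({
  id: 'resilient-task',
  run: async (payload: { orderId: string }) => {
    try {
      return await doWork(payload.orderId); // doWork is a hypothetical helper
    } catch (error) {
      // Log with enough context to debug from the dashboard
      logger.error('Task failed', { orderId: payload.orderId, error });
      throw error; // rethrow so the run is marked failed and can retry
    }
  },
});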

Task without concurrency limit

Severity: WARNING

Message: Task has no concurrency limit. High load may overwhelm downstream services.

Fix action: Add queue: { concurrencyLimit: 10 } to protect APIs and databases

Date object in trigger payload

Severity: ERROR

Message: Date objects are serialized to strings. Use ISO string format instead.

Fix action: Use date.toISOString() instead of new Date()

Class instance in trigger payload

Severity: ERROR

Message: Class instances lose methods when serialized. Use plain objects.

Fix action: Convert class instance to plain object before triggering

Task without explicit ID

Severity: ERROR

Message: Task must have an explicit id property for registration.

Fix action: Add id: 'my-task-name' to task definition

Trigger.dev API key hardcoded

Severity: CRITICAL

Message: Trigger.dev API key should not be hardcoded - use TRIGGER_SECRET_KEY env var

Fix action: Remove hardcoded key and use process.env.TRIGGER_SECRET_KEY

Using raw OpenAI SDK instead of integration

Severity: WARNING

Message: Consider using @trigger.dev/openai for automatic retries and rate limiting

Fix action: Replace with: import { openai } from '@trigger.dev/openai'

Using raw Anthropic SDK instead of integration

Severity: WARNING

Message: Consider using @trigger.dev/anthropic for automatic retries and rate limiting

Fix action: Replace with: import { anthropic } from '@trigger.dev/anthropic'

wait.for inside loop

Severity: WARNING

Message: wait.for in loops creates many checkpoints. Consider batching instead.

Fix action: Batch items and use fewer waits, or split into subtasks

Imported: Collaboration

Delegation Triggers

  • redis|bullmq|traditional queue -> bullmq-specialist (Need Redis-backed queues instead of managed service)
  • vercel|deployment|serverless -> vercel-deployment (Trigger.dev needs deployment config)
  • database|postgres|supabase -> supabase-backend (Tasks need database access)
  • openai|anthropic|ai model|llm -> llm-architect (Tasks need AI model integration)
  • event-driven|event sourcing|fan out -> inngest (Need pure event-driven model)

AI Background Processing

Skills: trigger-dev, llm-architect, nextjs-app-router, supabase-backend

Workflow:

1. User triggers via UI (nextjs-app-router)
2. Task queued (trigger-dev)
3. AI processing (llm-architect)
4. Results stored (supabase-backend)

Webhook Processing Pipeline

Skills: trigger-dev, stripe-integration, email-systems, supabase-backend

Workflow:

1. Webhook received (stripe-integration)
2. Task triggered (trigger-dev)
3. Database updated (supabase-backend)
4. Notification sent (email-systems)

Batch Data Processing

Skills: trigger-dev, supabase-backend, backend

Workflow:

1. Batch job triggered (backend)
2. Data chunked and processed (trigger-dev)
3. Results aggregated (supabase-backend)

Scheduled Reports

Skills: trigger-dev, supabase-backend, email-systems

Workflow:

1. Cron triggers task (trigger-dev)
2. Data aggregated (supabase-backend)
3. Report generated and sent (email-systems)

Imported: Limitations

  • Use this skill only when the task clearly matches the scope described above.
  • Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
  • Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.