Claude-code-plugins-plus-skills sentry-reliability-patterns
git clone https://github.com/jeremylongshore/claude-code-plugins-plus-skills
T=$(mktemp -d) && git clone --depth=1 https://github.com/jeremylongshore/claude-code-plugins-plus-skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/plugins/saas-packs/sentry-pack/skills/sentry-reliability-patterns" ~/.claude/skills/jeremylongshore-claude-code-plugins-plus-skills-sentry-reliability-patterns && rm -rf "$T"
plugins/saas-packs/sentry-pack/skills/sentry-reliability-patterns/SKILL.md

Sentry Reliability Patterns
Overview
Build Sentry integrations that never take your application down via three pillars: safe initialization with graceful degradation, a circuit breaker that stops hammering Sentry when unreachable, and an offline event queue that buffers errors during outages. Every pattern prioritizes application uptime over telemetry completeness.
Prerequisites
- @sentry/node v8+ (TypeScript) or sentry-sdk v2+ (Python)
- A valid Sentry DSN from your project settings at sentry.io
- A fallback logging destination (console, file, or external logger)
- Understanding of your application's shutdown lifecycle (signal handlers, container orchestration)
Instructions
Step 1 — Safe Initialization with Graceful Degradation
Wrap Sentry.init() in try/catch so an invalid DSN, network error, or SDK bug never crashes the app. Track initialization state with a boolean flag. Protect beforeSend callbacks with their own error boundary.
Create lib/sentry-safe.ts with initSentrySafe() and captureError(). See graceful-degradation.md for the full implementation.
Key rules:
- Never let Sentry.init() crash the process — wrap it in try/catch and set sentryAvailable = false on failure
- Verify client creation with Sentry.getClient() — invalid DSNs silently produce no client
- Always log errors locally as a baseline before attempting Sentry capture
- Wrap user-supplied beforeSend hooks in a nested try/catch — return the raw event on hook failure
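The rules above can be sketched as follows. This is a minimal sketch, not the skill's full graceful-degradation.md implementation: the SDK sits behind a small injected interface so the wrapper can be unit-tested without @sentry/node, and the SentryLike shape is an assumption modeled on the SDK's init/getClient/captureException API.

```typescript
// Sketch of lib/sentry-safe.ts. In production, pass the real @sentry/node
// module as the SDK; the interface below captures only the calls we use.
export interface SentryLike {
  init(options: { dsn: string; beforeSend?: (e: unknown) => unknown }): void;
  getClient(): unknown | undefined;
  captureException(err: unknown): void;
}

export class SafeSentry {
  private available = false;

  constructor(private sdk: SentryLike) {}

  initSentrySafe(dsn: string, userBeforeSend?: (e: unknown) => unknown): boolean {
    try {
      this.sdk.init({
        dsn,
        // Nested try/catch: a throwing user hook must not kill the event pipeline.
        beforeSend: (event) => {
          try {
            return userBeforeSend ? userBeforeSend(event) : event;
          } catch {
            return event; // fall back to the raw event on hook failure
          }
        },
      });
      // Invalid DSNs can fail silently, so confirm a client actually exists.
      this.available = this.sdk.getClient() !== undefined;
    } catch (err) {
      console.error("Sentry init failed; continuing in degraded mode:", err);
      this.available = false;
    }
    return this.available;
  }

  captureError(err: unknown): void {
    console.error(err); // local logging is the baseline, never dependent on Sentry
    if (!this.available) return;
    try {
      this.sdk.captureException(err);
    } catch {
      this.available = false; // SDK misbehaved; degrade for subsequent calls
    }
  }
}
```

Injecting the SDK is a testability choice for this sketch; the real module can import @sentry/node directly and keep the same try/catch structure.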
Step 2 — Circuit Breaker for Sentry Outages
When Sentry is unreachable, continued attempts waste resources and add latency. Track consecutive failures and trip open after a threshold. After cooldown, enter half-open state and send a single probe.
Implement a SentryCircuitBreaker class with closed/open/half-open states. See circuit-breaker-pattern.md for the full implementation. Expose breaker state via the endpoint described in health-checks.md.
Key rules:
- Default: 5 failures to trip open, 60-second cooldown before half-open probe
- In open state, skip Sentry calls entirely and log to fallback
- On half-open success, reset to closed with zero failure count
- Expose getStatus() for health check endpoints and monitoring dashboards
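A minimal sketch of the breaker's state machine, using the defaults above. getStatus() follows this skill; the other method names (allowRequest, recordSuccess, recordFailure) and the injectable clock are assumptions for illustration and testing.

```typescript
// Sketch of SentryCircuitBreaker: closed -> open after N failures,
// open -> half-open after cooldown, half-open -> closed on probe success.
type State = "closed" | "open" | "half-open";

export class SentryCircuitBreaker {
  private state: State = "closed";
  private failures = 0;
  private openedAt = 0;

  constructor(
    private failureThreshold = 5,      // trips open after 5 consecutive failures
    private cooldownMs = 60_000,       // 60s before a half-open probe is allowed
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  // Should we attempt a Sentry call right now?
  allowRequest(): boolean {
    if (this.state === "open" && this.now() - this.openedAt >= this.cooldownMs) {
      this.state = "half-open"; // cooldown elapsed: permit a probe
    }
    return this.state !== "open"; // open state skips Sentry entirely
  }

  recordSuccess(): void {
    this.state = "closed"; // probe (or normal send) succeeded: full reset
    this.failures = 0;
  }

  recordFailure(): void {
    if (this.state === "half-open") {
      this.trip(); // probe failed: reopen immediately
      return;
    }
    if (++this.failures >= this.failureThreshold) this.trip();
  }

  private trip(): void {
    this.state = "open";
    this.openedAt = this.now();
  }

  // For health check endpoints and monitoring dashboards.
  getStatus(): { state: State; failures: number } {
    return { state: this.state, failures: this.failures };
  }
}
```

Callers check allowRequest() before every Sentry call; when it returns false, they log to the fallback destination instead.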
Step 3 — Offline Queue, Custom Transport, and Graceful Shutdown
Buffer events when the network is unavailable and replay them on reconnect. Use a bounded file-based queue to survive restarts. Pair with signal handlers that flush via Sentry.close() before process exit.
Implement three modules:
- lib/sentry-offline-queue.ts — enqueueEvent() and drainQueue(). See network-failure-handling.md
- lib/sentry-transport.ts — custom transport with exponential backoff retry. See timeout-handling.md
- lib/sentry-shutdown.ts — SIGTERM/SIGINT handlers calling Sentry.close(2000). See timeout-handling.md
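The retry half of the transport can be sketched as a standalone helper that the real transport wraps around its HTTP send. The sendWithRetry name and its parameters are illustrative, not the skill's actual timeout-handling.md API.

```typescript
// Exponential backoff with jitter: 500ms, 1s, 2s (each scaled by 0.5-1.0),
// then surface the error so the caller can hand the event to the offline queue.
export async function sendWithRetry(
  send: () => Promise<void>,
  maxAttempts = 4,
  baseDelayMs = 500,
): Promise<void> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await send();
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err; // out of retries
      const delay = baseDelayMs * 2 ** attempt * (0.5 + Math.random() / 2);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```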
Key rules:
- Cap offline queue at 1000 events, evict oldest when full
- Drain queue on startup and when connectivity restores
- Call Sentry.close(timeout) before process.exit() — without it, in-flight events are silently dropped
- For critical errors, use dual-write-pattern.md to send to multiple destinations via Promise.allSettled
Output
- Safe init wrapper catching SDK failures, starting the app in degraded mode
- captureError() with automatic fallback to local logging
- Circuit breaker stopping sends after repeated failures, self-healing after cooldown
- Health check endpoint exposing SDK status and circuit breaker state
- File-based offline queue buffering events during outages, draining on reconnect
- Signal handlers flushing in-flight events before process exit
- Custom transport with exponential-backoff retry logic
Error Handling
| Error | Cause | Solution |
|---|---|---|
| App crashes on Sentry.init() | Invalid DSN or SDK bug | Wrap in try/catch via initSentrySafe() |
| Events lost on shutdown | No Sentry.close() before exit | Register signal handlers with Sentry.close(2000) |
| Sentry outage cascades latency | Every error path hits Sentry HTTP | Circuit breaker trips after 5 failures |
| Events lost during network blip | SDK drops events silently | Retry transport + offline queue |
| Silent event loss | SDK fails without throwing | Health check probes with Sentry.getClient() + getStatus() |
| Queue grows unbounded | Never drained, Sentry permanently down | Cap at 1000 events, drain on startup |
| beforeSend crashes pipeline | User hook throws | Nested try/catch, return raw event |
See errors.md for extended troubleshooting.
Examples
See examples.md for complete TypeScript and Python integration examples including full-stack wiring of all three patterns.
Resources
- Sentry JS Configuration — beforeSend, sampleRate, init options
- Custom Transports — retry and offline transports
- Shutdown & Draining — Sentry.close() and Sentry.flush()
- Sentry Python SDK — sentry_sdk.init(), flush(), scope management
- Sentry Status Page — monitor platform outages
Next Steps
- Emit circuit breaker state changes to an observability platform (Datadog, Prometheus) for outage alerting
- Set up periodic drainQueue() via setInterval (Node) or cron (Python) instead of startup-only draining
- Apply the retry transport pattern to Python via the sentry_sdk.init(transport=...) parameter
- Test failure modes in staging — simulate Sentry failures with beforeSend to verify circuit breaker behavior
- Add dual-write for P0/fatal errors to secondary destinations (CloudWatch, PagerDuty)