skilllibrary · cloudflare-worker-patterns
Write and optimize Cloudflare Workers — implement fetch handlers, Durable Objects for stateful logic, service bindings, KV/R2 data access, scheduled/cron triggers, and middleware patterns with wrangler dev/deploy. Use when writing Worker code, debugging Worker runtime behavior, or designing Durable Object state machines. Do not use for Cloudflare DNS/WAF/Pages configuration (prefer cloudflare skill) or non-Worker serverless platforms.
install
source · Clone the upstream repo
git clone https://github.com/merceralex397-collab/skilllibrary
Claude Code · Install into ~/.claude/skills/
T=$(mktemp -d) && git clone --depth=1 https://github.com/merceralex397-collab/skilllibrary "$T" && mkdir -p ~/.claude/skills && cp -r "$T/14-cloud-platform-devops/cloudflare-worker-patterns" ~/.claude/skills/merceralex397-collab-skilllibrary-cloudflare-worker-patterns && rm -rf "$T"
manifest:
14-cloud-platform-devops/cloudflare-worker-patterns/SKILL.md
Purpose
Write, structure, and optimize Cloudflare Worker application code — fetch handlers, Durable Objects for stateful coordination, service bindings for Worker-to-Worker calls, KV/R2/D1 data access patterns, scheduled/cron handlers, and middleware/router patterns using the Workers runtime.
When to use this skill
- Implementing a `fetch` handler that routes requests, transforms responses, or proxies origins.
- Creating Durable Objects for stateful logic (counters, rate limiters, WebSocket rooms, coordination).
- Setting up service bindings to call one Worker from another without network hops.
- Reading/writing KV, R2, or D1 from within Worker code using environment bindings.
- Implementing `scheduled` event handlers for cron-triggered background tasks.
- Building middleware chains (auth, logging, CORS) in a Worker router framework (Hono, itty-router).
- Debugging Worker runtime errors, CPU time limits, or memory issues with `wrangler dev` and `wrangler tail`.
- Writing Worker unit tests using Miniflare or `wrangler test` (Vitest integration).
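The handlers listed above all hang off a single module export. A minimal sketch, assuming a hypothetical KV binding named `MY_KV` and illustrative routes (the KV type is reduced to the one method used so the sketch stands alone):

```typescript
// Minimal module-syntax Worker: a fetch handler plus a scheduled handler.
// `MY_KV` and the routes are hypothetical, not from a real project.
export interface Env {
  MY_KV: { get(key: string): Promise<string | null> };
}

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/health") {
      return new Response("ok", { status: 200 });
    }
    // Fall through to a KV lookup; handle the null (key missing) case.
    const value = await env.MY_KV.get("greeting");
    return new Response(value ?? "not found", { status: value ? 200 : 404 });
  },

  async scheduled(event: { cron: string }, _env: Env): Promise<void> {
    // Cron-triggered background work goes here.
    console.log(`cron fired: ${event.cron}`);
  },
};

export default worker;
```

In a real project the types would come from `@cloudflare/workers-types` rather than being declared inline.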
Do not use this skill when
- The task is about Cloudflare DNS records, WAF rules, cache page rules, or Pages build config — prefer `cloudflare`.
- The target is AWS Lambda, GCP Cloud Functions, or another non-Cloudflare serverless platform — prefer `serverless-patterns`.
- The task involves only `wrangler.toml` bindings without any Worker code changes — prefer `cloudflare`.
- The focus is generic serverless architecture design without Cloudflare-specific constraints.
Operating procedure
- Identify the Worker entry point. Locate the `src/index.ts` (or `.js`) file exported as the Worker module. Confirm it exports a `fetch` handler and optionally `scheduled`, `queue`, or `email` handlers.
- Set up the local dev environment. Run `wrangler dev` to start the local development server. Confirm bindings (KV, R2, D1, Durable Objects) are available via the `--local` or `--remote` flags.
- Implement the fetch handler. Parse the incoming `Request` URL and method. Route to handler functions using a router (Hono: `app.get('/path', handler)`, itty-router: `router.get('/path', handler)`). Return a `new Response()` with appropriate status, headers, and body.
- Add middleware. Insert middleware functions for cross-cutting concerns: CORS headers (`Access-Control-Allow-Origin`), authentication (verify a JWT or API key from the `Authorization` header), request logging (timestamp, method, path, status), and error wrapping (try/catch returning 500 with an error ID).
- Implement Durable Object classes. Export a class extending `DurableObject`. Implement `fetch()` for HTTP-based state access. Use `this.ctx.storage.get/put/delete` for persistent state. Use `this.ctx.storage.transaction()` for atomic multi-key operations. Bind the DO in `wrangler.toml` under `[durable_objects]`.
- Wire data access patterns. For KV: use `env.MY_KV.get(key)` / `.put(key, value, {expirationTtl})`. For R2: use `env.MY_BUCKET.get(key)` / `.put(key, body)`. For D1: use `env.MY_DB.prepare('SELECT ...').bind(params).all()`. Handle null returns (key not found) explicitly.
- Implement scheduled handlers. Export a `scheduled(event, env, ctx)` function. Use `event.cron` to distinguish between multiple cron triggers. Use `ctx.waitUntil()` for async work that must complete after the handler returns.
- Set up service bindings. In `wrangler.toml`, add `[[services]]` with `binding`, `service`, and `environment`. Call the bound service from Worker code via `env.MY_SERVICE.fetch(request)`.
- Write tests. Use Miniflare for integration tests that exercise bindings. Use Vitest with `wrangler test` (unstable_dev) for unit tests. Mock external fetches with `fetchMock`. Test Durable Objects by creating stubs via `env.MY_DO.get(id)`.
- Debug runtime issues. Check CPU time limits (10 ms per invocation on the Free plan; 30 s on the Paid plan). Use `wrangler tail` to stream live logs. Check for unhandled promise rejections that silently fail. Verify `ctx.waitUntil()` is used for background work.
- Deploy and verify. Run `wrangler deploy`. Hit the production URL and verify responses. Check `wrangler tail --format=json` for errors in production traffic.
Decision rules
- Use Durable Objects when you need strongly consistent state or coordination between requests — KV is eventually consistent.
- Use KV for read-heavy, write-infrequent data (config, feature flags, cached API responses).
- Use R2 for binary data >25MB or when S3-compatible API access is needed.
- Use D1 for relational queries — but be aware of row limits and SQLite constraints.
- Use `ctx.waitUntil()` for fire-and-forget work (analytics, logging) — do not `await` it in the response path.
- Use service bindings over `fetch('https://other-worker.example.com')` to avoid network overhead and enforce internal-only access.
- Keep Worker code under the 1MB compressed size limit. Use dynamic imports or split into multiple Workers if approaching the limit.
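The `ctx.waitUntil()` rule above can be sketched as follows; `ExecutionCtx` is a minimal stand-in for the runtime's `ExecutionContext`, and `logAnalytics` is a hypothetical background task (a stand-in for a `fetch()` to an analytics endpoint):

```typescript
// Fire-and-forget analytics: the response returns immediately while the
// logging promise completes in the background via ctx.waitUntil().
interface ExecutionCtx {
  waitUntil(promise: Promise<unknown>): void;
}

// Hypothetical background task; `sink` substitutes for a network call.
async function logAnalytics(path: string, sink: string[]): Promise<void> {
  await Promise.resolve();
  sink.push(path);
}

export function handleWithAnalytics(
  request: Request,
  ctx: ExecutionCtx,
  sink: string[],
): Response {
  const url = new URL(request.url);
  // Do NOT await: the response path stays fast; the runtime keeps the
  // invocation alive until the registered promise settles.
  ctx.waitUntil(logAnalytics(url.pathname, sink));
  return new Response("ok");
}
```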
Output requirements
- Worker code — the `fetch`/`scheduled` handler implementation with proper typing.
- Durable Object class — if stateful logic is needed, the class with storage operations.
- wrangler.toml changes — any new bindings, DO declarations, or service binding configs.
- Test file — at least one test covering the primary handler path.
- Deployment verification — confirmed the Worker responds correctly at its production route.
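The Durable Object deliverable above might look like the following counter sketch. A real class would extend `DurableObject` and use `this.ctx.storage`; here the storage API is modeled structurally (only `get`/`put`) so the logic is testable outside the Workers runtime, and the route is illustrative:

```typescript
// Durable Object-style counter with HTTP-based state access.
interface Storage {
  get<T>(key: string): Promise<T | undefined>;
  put<T>(key: string, value: T): Promise<void>;
}

export class Counter {
  constructor(private storage: Storage) {}

  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    // Read current state; missing key means the counter starts at zero.
    let count = (await this.storage.get<number>("count")) ?? 0;
    if (url.pathname === "/increment") {
      count += 1;
      await this.storage.put("count", count);
    }
    return new Response(String(count));
  }
}
```

Because each Durable Object instance processes requests serially, the read-modify-write above is safe without a transaction; multi-key updates would use `storage.transaction()`.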
References
- Workers runtime API: https://developers.cloudflare.com/workers/runtime-apis/
- Durable Objects: https://developers.cloudflare.com/durable-objects/
- Hono framework for Workers: https://hono.dev/docs/getting-started/cloudflare-workers
- Miniflare testing: https://miniflare.dev/
- Workers size and CPU limits: https://developers.cloudflare.com/workers/platform/limits/
references/preflight-checklist.md
Related skills
- `cloudflare` — DNS, WAF, Pages, R2/KV/D1 provisioning and `wrangler.toml` configuration.
- `serverless-patterns` — generic serverless architecture patterns.
- `vercel` — alternative edge runtime platform.
Anti-patterns
- Blocking the fetch handler with long-running synchronous work — use `ctx.waitUntil()` for background tasks.
- Using `fetch()` to call another Worker in the same account instead of service bindings.
- Storing large objects (>25MB) in KV — use R2 instead.
- Relying on global variables for request-scoped state — Workers may share isolates across requests.
- Not handling the `null` return from `KV.get()` or `R2.get()` — always check for missing keys.
- Writing Durable Object state without transactions when multiple keys must be atomically consistent.
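The null-handling anti-pattern above can be sketched as follows; the binding is modeled as the one-method interface actually used, and the 404 mapping is one reasonable policy, not the only one:

```typescript
// Explicitly handling the null return from KV.get(): a missing key
// becomes a 404 instead of the string "null" reaching the client.
interface KVGet {
  get(key: string): Promise<string | null>;
}

export async function readConfig(kv: KVGet, key: string): Promise<Response> {
  const value = await kv.get(key);
  if (value === null) {
    // Key not found: surface it as a proper HTTP status.
    return new Response("config not found", { status: 404 });
  }
  return new Response(value, {
    headers: { "Content-Type": "application/json" },
  });
}
```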
Failure handling
- If `wrangler dev` fails to start, check that `wrangler.toml` bindings have valid IDs and that `node_modules` is installed.
- If a Worker exceeds CPU time limits, profile the handler to find expensive operations. Move heavy computation to a queued Worker or a Durable Object alarm.
- If Durable Object storage operations throw, wrap them in try/catch and return a 503 with a Retry-After header.
- If `wrangler deploy` succeeds but the Worker returns errors, check `wrangler tail` for unhandled exceptions and verify all environment bindings are provisioned in the target environment.
- If the task is about platform configuration (DNS, WAF, caching) rather than Worker code, redirect to the `cloudflare` skill.
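The 503-with-Retry-After pattern for failing storage operations can be sketched like this; the storage interface is modeled structurally and the 5-second retry hint is an arbitrary illustrative value:

```typescript
// Degrade gracefully when Durable Object storage throws: return a 503
// with a Retry-After header so well-behaved clients back off and retry.
export async function readState(
  storage: { get(key: string): Promise<unknown> },
  key: string,
): Promise<Response> {
  try {
    const value = await storage.get(key);
    return new Response(JSON.stringify(value ?? null));
  } catch {
    // Storage is temporarily unavailable; tell clients when to retry.
    return new Response("storage unavailable", {
      status: 503,
      headers: { "Retry-After": "5" },
    });
  }
}
```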