# Skills · cloudflare-workers

## Install

Source · Clone the upstream repo:

```shell
git clone https://github.com/TerminalSkills/skills
```

Claude Code · Install into `~/.claude/skills/`:

```shell
T=$(mktemp -d) && git clone --depth=1 https://github.com/TerminalSkills/skills "$T" && mkdir -p ~/.claude/skills && cp -r "$T/skills/cloudflare-workers" ~/.claude/skills/terminalskills-skills-cloudflare-workers && rm -rf "$T"
```

Manifest: `skills/cloudflare-workers/SKILL.md`
# Cloudflare Workers

## Overview

Cloudflare Workers enables building and deploying applications at the edge with sub-millisecond cold starts. The platform combines the Workers runtime with storage services such as KV, D1, R2, Durable Objects, and Queues to build globally distributed, low-latency applications.
## Instructions

- When asked to create a Worker, scaffold with `wrangler init` using ES Module syntax (`export default { fetch }`) and set `compatibility_date` in `wrangler.toml`.
- When configuring storage, recommend KV for read-heavy key-value caching, D1 for relational data with SQL, R2 for S3-compatible object storage with zero egress fees, and Durable Objects for strongly consistent state coordination.
- When setting up local development, use `wrangler dev` with hot reload and local KV/D1/R2 simulation.
- When deploying, use `wrangler deploy` and configure routes, bindings, and build settings in `wrangler.toml`.
- When managing secrets, use `wrangler secret put KEY_NAME` and type bindings with an `Env` interface.
- When optimizing performance, leverage the Cache API (`caches.default`), Smart Placement, streaming responses with `TransformStream`, and HTMLRewriter for HTML transformation.
- When handling background work, use `ctx.waitUntil()` for fire-and-forget async tasks like analytics or logging.
- When building AI features, use Workers AI for edge inference, AI Gateway for multi-provider management, and Vectorize for RAG pipelines.
## Examples

### Example 1: Create an edge API with KV caching

User request: "Set up a Cloudflare Worker that serves cached API responses from KV"

Actions:

- Scaffold a new Worker project with `wrangler init`
- Configure the KV namespace binding in `wrangler.toml`
- Implement a fetch handler with KV read/write and cache-control headers
- Test locally with `wrangler dev`

Output: A Worker that checks KV for cached data, falls back to the origin, and stores results in KV with a TTL.
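The fetch handler from Example 1 might look like the following cache-aside sketch. The `API_CACHE` binding name, the origin URL, and the 5-minute TTL are assumptions for illustration; the `KVNamespace` interface is narrowed to just the methods used here.

```typescript
// Cache-aside sketch for Example 1. API_CACHE, the origin URL, and the
// TTL are assumptions for illustration.
interface KVNamespace {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, options?: { expirationTtl?: number }): Promise<void>;
}

interface Env {
  API_CACHE: KVNamespace;
}

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname;

    // 1. Check KV for a cached response body.
    const cached = await env.API_CACHE.get(key);
    if (cached !== null) {
      return new Response(cached, {
        headers: { "Content-Type": "application/json", "X-Cache": "HIT" },
      });
    }

    // 2. Fall back to the origin (placeholder URL, an assumption).
    const origin = await fetch(`https://api.example.com${key}`);
    const body = await origin.text();

    // 3. Store the result in KV with a 5-minute TTL.
    await env.API_CACHE.put(key, body, { expirationTtl: 300 });

    return new Response(body, {
      headers: { "Content-Type": "application/json", "X-Cache": "MISS" },
    });
  },
};

export default worker;
```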
### Example 2: Deploy a scheduled data sync Worker

User request: "Build a Worker that runs on a schedule to sync data from an external API into D1"

Actions:

- Configure a Cron Trigger in `wrangler.toml`
- Create the D1 database and a migration with the schema
- Implement a `scheduled()` handler that fetches external data and inserts it into D1
- Use `ctx.waitUntil()` for non-blocking cleanup tasks

Output: A Worker with cron-triggered data synchronization and D1 storage.
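Example 2's `scheduled()` handler might be sketched as below. The `DB` binding name, the external API URL, and the `items` table schema are all assumptions; the D1 interfaces are narrowed to the `prepare`/`bind`/`run` calls used here.

```typescript
// Scheduled-sync sketch for Example 2. DB, the source URL, and the
// items table schema are assumptions for illustration.
interface D1Result { success: boolean }
interface D1PreparedStatement {
  bind(...values: unknown[]): D1PreparedStatement;
  run(): Promise<D1Result>;
}
interface D1Database {
  prepare(query: string): D1PreparedStatement;
}
interface Env { DB: D1Database }

const worker = {
  async scheduled(
    event: { cron: string },
    env: Env,
    ctx: { waitUntil(p: Promise<unknown>): void },
  ): Promise<void> {
    // Fetch the external data (placeholder URL, an assumption).
    const res = await fetch("https://api.example.com/items");
    const items = (await res.json()) as { id: string; name: string }[];

    // Upsert each row into D1.
    for (const item of items) {
      await env.DB
        .prepare("INSERT OR REPLACE INTO items (id, name) VALUES (?, ?)")
        .bind(item.id, item.name)
        .run();
    }

    // Fire-and-forget cleanup that should not delay handler completion.
    ctx.waitUntil(env.DB.prepare("DELETE FROM items WHERE name IS NULL").run());
  },
};

export default worker;
```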
## Guidelines

- Always set `compatibility_date` in `wrangler.toml` to pin runtime behavior.
- Use ES Module syntax (`export default`) over Service Worker syntax.
- Type all environment bindings with an `Env` interface for type safety.
- Handle errors gracefully with proper HTTP status codes instead of unhandled exceptions.
- Use `ctx.waitUntil()` for fire-and-forget async work that should not block the response.
- Prefer D1 over KV for relational data; use KV for simple key-value caching.
- Set appropriate `Cache-Control` headers and leverage Cloudflare's edge cache.
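The error-handling and `Cache-Control` guidelines above can be combined in one small sketch. The `?id=` route shape and the one-minute max-age are assumptions chosen for illustration.

```typescript
// Sketch of the error-handling and Cache-Control guidelines. The ?id=
// route and the max-age value are assumptions for illustration.
const worker = {
  async fetch(request: Request): Promise<Response> {
    try {
      const id = new URL(request.url).searchParams.get("id");
      if (!id) {
        // Validation failure: a deliberate 400, not an unhandled exception.
        return new Response("Missing ?id parameter", { status: 400 });
      }
      return new Response(JSON.stringify({ id }), {
        headers: {
          "Content-Type": "application/json",
          // Lets Cloudflare's edge cache serve the response for one minute.
          "Cache-Control": "public, max-age=60",
        },
      });
    } catch {
      // Anything unexpected becomes a clean 500 instead of an escaping error.
      return new Response("Internal error", { status: 500 });
    }
  },
};

export default worker;
```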