cloudflare-workers
Assists with building and deploying applications on Cloudflare Workers edge computing platform. Use when working with Workers runtime, Wrangler CLI, KV, D1, R2, Durable Objects, Queues, or Hyperdrive. Trigger words: cloudflare, workers, edge functions, wrangler, KV, D1, R2, durable objects, edge computing.
Usage
Getting Started
- Install the skill for your agent
- Open your AI coding agent (Claude Code, Codex, Gemini CLI, or Cursor)
- Reference the skill in your prompt
- The AI will use the skill's capabilities automatically
Example Prompts
- "Set up a Cloudflare Worker that serves cached API responses from KV"
- "Build a Worker that runs on a schedule to sync data from an external API into D1"
Documentation
Overview
Cloudflare Workers enables building and deploying applications at the edge with sub-millisecond cold starts. The platform leverages the Workers runtime alongside storage services like KV, D1, R2, Durable Objects, and Queues to build globally distributed, low-latency applications.
Instructions
- When asked to create a Worker, scaffold with `wrangler init` using ES Module syntax (`export default { fetch }`) and set `compatibility_date` in `wrangler.toml`.
- When configuring storage, recommend KV for read-heavy key-value caching, D1 for relational data with SQL, R2 for S3-compatible object storage with zero egress fees, and Durable Objects for strongly consistent state coordination.
- When setting up local development, use `wrangler dev` with hot reload and local KV/D1/R2 simulation.
- When deploying, use `wrangler deploy` and configure routes, bindings, and build settings in `wrangler.toml`.
- When managing secrets, use `wrangler secret put KEY_NAME` and type bindings with an `Env` interface.
- When optimizing performance, leverage the Cache API (`caches.default`), Smart Placement, streaming responses with `TransformStream`, and HTMLRewriter for HTML transformation.
- When handling background work, use `ctx.waitUntil()` for fire-and-forget async tasks like analytics or logging.
- When building AI features, use Workers AI for edge inference, AI Gateway for multi-provider management, and Vectorize for RAG pipelines.
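The first few instructions above (ES Module syntax, a typed `Env` interface, KV caching, and `ctx.waitUntil()`) can be sketched as a single handler. This is a minimal sketch, not a complete implementation: `CACHE` is a hypothetical KV binding name, and the KV and context types are narrowed to just the methods used here rather than the full Workers type definitions.

```typescript
// Sketch only: `CACHE` is a hypothetical KV binding; types are narrowed
// to the methods used below, not the full Workers runtime types.
export interface Env {
  CACHE: {
    get(key: string): Promise<string | null>;
    put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
  };
}

interface Ctx {
  waitUntil(promise: Promise<unknown>): void;
}

const worker = {
  async fetch(request: Request, env: Env, ctx: Ctx): Promise<Response> {
    const url = new URL(request.url);

    // Serve from KV when a cached body exists.
    const cached = await env.CACHE.get(url.pathname);
    if (cached !== null) {
      return new Response(cached, {
        headers: { "Cache-Control": "public, max-age=60", "X-Cache": "HIT" },
      });
    }

    // Miss: build the response (a real Worker would fetch an origin here),
    // then write KV in the background so the response is not blocked.
    const body = JSON.stringify({ path: url.pathname });
    ctx.waitUntil(env.CACHE.put(url.pathname, body, { expirationTtl: 60 }));
    return new Response(body, {
      headers: { "Cache-Control": "public, max-age=60", "X-Cache": "MISS" },
    });
  },
};

export default worker;
```

Keeping the handler in a named `worker` object makes it easy to unit test with a mocked `Env`; `export default worker` is what the Workers runtime invokes.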
Examples
Example 1: Create an edge API with KV caching
User request: "Set up a Cloudflare Worker that serves cached API responses from KV"
Actions:
- Scaffold a new Worker project with `wrangler init`
- Configure KV namespace binding in `wrangler.toml`
- Implement fetch handler with KV read/write and cache-control headers
- Test locally with `wrangler dev`
Output: A Worker that checks KV for cached data, falls back to origin, and stores results in KV with TTL.
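The configuration side of this example can be sketched as a `wrangler.toml` fragment. The project name, entry point, date, and namespace id below are all placeholders, not values the skill prescribes:

```toml
# Sketch of the wrangler.toml Example 1 assumes; all values are placeholders.
name = "edge-api-cache"
main = "src/index.ts"
compatibility_date = "2024-01-01"

[[kv_namespaces]]
binding = "CACHE"
id = "<your-kv-namespace-id>"
```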
Example 2: Deploy a scheduled data sync Worker
User request: "Build a Worker that runs on a schedule to sync data from an external API into D1"
Actions:
- Configure Cron Trigger in `wrangler.toml`
- Create D1 database and migration with schema
- Implement `scheduled()` handler that fetches external data and inserts into D1
- Use `ctx.waitUntil()` for non-blocking cleanup tasks
Output: A Worker with cron-triggered data synchronization and D1 storage.
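The cron-triggered flow above can be sketched as follows. This is a hedged sketch: `DB` is a hypothetical D1 binding name, the `items` table and `syncRows` helper are illustrative, and the D1 type is narrowed to just the `prepare`/`bind`/`run` chain used here.

```typescript
// Sketch only: `DB` is a hypothetical D1 binding; the interface covers just
// the prepare/bind/run chain used below, not the full D1 API.
interface D1Like {
  prepare(sql: string): { bind(...values: unknown[]): { run(): Promise<unknown> } };
}

export interface Env {
  DB: D1Like;
}

// Upsert a batch of rows into a hypothetical `items` table;
// returns how many statements ran.
export async function syncRows(
  rows: { id: number; name: string }[],
  env: Env,
): Promise<number> {
  for (const row of rows) {
    await env.DB
      .prepare("INSERT OR REPLACE INTO items (id, name) VALUES (?1, ?2)")
      .bind(row.id, row.name)
      .run();
  }
  return rows.length;
}

const worker = {
  // Invoked by the Cron Trigger configured in wrangler.toml.
  async scheduled(
    _event: unknown,
    env: Env,
    ctx: { waitUntil(p: Promise<unknown>): void },
  ): Promise<void> {
    // A real Worker would fetch the external API here; static rows stand in.
    const rows = [{ id: 1, name: "example" }];
    await syncRows(rows, env);
    // Non-blocking cleanup, e.g. pruning stale rows.
    ctx.waitUntil(Promise.resolve());
  },
};

export default worker;
```

Splitting the insert logic into `syncRows` keeps the `scheduled()` handler thin and lets the D1 interaction be tested with a mocked binding.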
Guidelines
- Always set `compatibility_date` in `wrangler.toml` to pin runtime behavior.
- Use ES Module syntax (`export default`) over Service Worker syntax.
- Type all environment bindings with an `Env` interface for type safety.
- Handle errors gracefully with proper HTTP status codes instead of unhandled exceptions.
- Use `ctx.waitUntil()` for fire-and-forget async work that should not block the response.
- Prefer D1 over KV for relational data; use KV for simple key-value caching.
- Set appropriate `Cache-Control` headers and leverage Cloudflare's edge cache.
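The error-handling guideline above can be sketched as a minimal handler; the `/health` route and response bodies are illustrative choices, not part of the skill:

```typescript
// Sketch: map failures to explicit HTTP status codes instead of letting
// exceptions escape the handler as opaque errors.
const worker = {
  async fetch(request: Request): Promise<Response> {
    try {
      const url = new URL(request.url);
      if (url.pathname !== "/health") {
        return new Response("Not Found", { status: 404 });
      }
      return new Response("ok", {
        status: 200,
        headers: { "Cache-Control": "no-store" },
      });
    } catch {
      // Unexpected failure: return a controlled 500 rather than crashing.
      return new Response("Internal Error", { status: 500 });
    }
  },
};

export default worker;
```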
Information
- Version
- 1.0.0
- Author
- terminal-skills
- Category
- Development
- License
- Apache-2.0