Cloudflare

Automate Cloudflare edge services and integrate security and performance optimizations into your stack

Cloudflare is a community skill for building and deploying applications on the Cloudflare platform, covering Workers, Pages, KV storage, D1 database, R2 object storage, and edge network configuration patterns.

What Is This?

Overview

Cloudflare provides patterns for building serverless applications on Cloudflare's edge network. It covers Workers for running JavaScript and TypeScript functions at the edge with low latency, Pages for deploying static sites and full-stack applications with automatic builds, KV storage for globally distributed key-value data access, D1 for SQLite-compatible databases running at the edge, and R2 for S3-compatible object storage without egress fees. The skill enables developers to build globally distributed applications that run close to users with minimal infrastructure management.

Who Should Use This

This skill serves developers deploying serverless applications on Cloudflare Workers, teams building full-stack sites with Cloudflare Pages and edge functions, and engineers using KV, D1, or R2 for edge-native data storage.

Why Use It?

Problems It Solves

Traditional server deployments require managing infrastructure and scaling capacity. Centralized servers add latency for globally distributed users. Object storage egress fees accumulate significantly with high bandwidth usage. Database queries from edge functions to centralized databases introduce round-trip delays.

Core Highlights

Workers execute code at over 300 edge locations worldwide. KV provides eventually consistent global key-value storage. D1 runs SQL queries at the edge with SQLite compatibility. R2 stores objects with zero egress fees and S3 API support.

How to Use It?

Basic Usage

// Cloudflare Worker serving KV-backed config and D1 query results
export default {
  async fetch(
    request: Request,
    env: { KV: KVNamespace; DB: D1Database }
  ): Promise<Response> {
    const url = new URL(request.url);

    if (url.pathname === '/api/config') {
      const value = await env.KV.get('app-config');
      return Response.json(JSON.parse(value ?? '{}'));
    }

    if (url.pathname === '/api/users') {
      const { results } = await env.DB.prepare(
        'SELECT id, name, email FROM users ORDER BY name LIMIT 50'
      ).all();
      return Response.json(results);
    }

    return new Response('Not found', { status: 404 });
  },
};
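The routing logic above can be exercised locally without deploying by stubbing the bindings. This is a minimal sketch: the `KVStub` type and the Map-backed stub are illustrative stand-ins, not the real `KVNamespace` interface.

```typescript
// Minimal stand-in for the KV binding; the real KVNamespace has more methods.
type KVStub = { get(key: string): Promise<string | null> };

// Same config route as the basic Worker, typed against the stub.
const worker = {
  async fetch(request: Request, env: { KV: KVStub }): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === '/api/config') {
      const value = await env.KV.get('app-config');
      return Response.json(JSON.parse(value ?? '{}'));
    }
    return new Response('Not found', { status: 404 });
  },
};

// In-memory stub backed by a Map.
const store = new Map([['app-config', '{"theme":"dark"}']]);
const env = { KV: { get: async (key: string) => store.get(key) ?? null } };

const res = await worker.fetch(
  new Request('http://localhost/api/config'),
  env
);
```

Because the handler takes `env` as a plain parameter, swapping the stub for the real binding requires no code changes.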

Real-World Examples

// Worker with R2 and D1
export default {
  async fetch(
    request: Request,
    env: { BUCKET: R2Bucket; DB: D1Database }
  ): Promise<Response> {
    const url = new URL(request.url);

    // File upload: stream the body into R2, record metadata in D1
    if (request.method === 'PUT' && url.pathname.startsWith('/files/')) {
      const key = url.pathname.slice('/files/'.length);
      await env.BUCKET.put(key, request.body, {
        httpMetadata: {
          contentType:
            request.headers.get('content-type') ?? 'application/octet-stream',
        },
      });

      await env.DB.prepare(
        'INSERT INTO files (key, size, uploaded) VALUES (?, ?, ?)'
      )
        .bind(
          key,
          Number(request.headers.get('content-length') ?? 0),
          new Date().toISOString()
        )
        .run();

      return Response.json({ key, status: 'uploaded' });
    }

    // File download: serve the object with its stored content type
    if (url.pathname.startsWith('/files/')) {
      const key = url.pathname.slice('/files/'.length);
      const obj = await env.BUCKET.get(key);
      if (!obj) {
        return new Response('Not found', { status: 404 });
      }
      return new Response(obj.body, {
        headers: {
          'content-type':
            obj.httpMetadata?.contentType ?? 'application/octet-stream',
        },
      });
    }

    return new Response('Not found', { status: 404 });
  },
};
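The R2 upload and download paths can be tested the same way against an in-memory stand-in. `memoryBucket` and the `StoredObject` shape below are illustrative simplifications of the real `R2Bucket` API, which streams bodies rather than holding strings.

```typescript
// Simplified object record; real R2 objects expose streams and more metadata.
type StoredObject = { body: string; httpMetadata?: { contentType?: string } };

// Illustrative in-memory substitute for an R2 bucket binding.
function memoryBucket() {
  const store = new Map<string, StoredObject>();
  return {
    async put(
      key: string,
      body: string,
      opts?: { httpMetadata?: { contentType?: string } }
    ): Promise<void> {
      store.set(key, { body, httpMetadata: opts?.httpMetadata });
    },
    async get(key: string): Promise<StoredObject | null> {
      return store.get(key) ?? null;
    },
  };
}
```

String bodies keep the sketch simple; the important property is that put-then-get round-trips both the body and the stored content type, which is exactly what the download path relies on.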

Advanced Tips

Use Durable Objects for strongly consistent state when KV's eventual consistency is insufficient. Bind multiple services in wrangler.toml to compose Workers with KV, D1, and R2 in a single handler. Use the Cache API within Workers to cache expensive responses at the edge.
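The multi-binding tip above might look like the following wrangler.toml sketch. The binding names (KV, DB, BUCKET) match the handlers shown earlier; the project name, database name, bucket name, and IDs are illustrative placeholders.

```toml
name = "example-worker"            # placeholder project name
main = "src/index.ts"
compatibility_date = "2024-01-01"

kv_namespaces = [
  { binding = "KV", id = "your-kv-namespace-id" }
]

[[d1_databases]]
binding = "DB"
database_name = "example-db"
database_id = "your-d1-database-id"

[[r2_buckets]]
binding = "BUCKET"
bucket_name = "example-bucket"
```

Each `binding` value becomes a property on the `env` object passed to the Worker's fetch handler.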

When to Use It?

Use Cases

Build a globally distributed API with edge-native data access using Workers and D1. Deploy a static site with server-side functions on Cloudflare Pages. Create a file hosting service using R2 for storage and Workers for access control.

Related Topics

Edge computing, serverless functions, Cloudflare Workers, distributed storage, and static site deployment.

Important Notes

Requirements

Cloudflare account with Workers plan for serverless functions. Wrangler CLI for local development and deployment. Node.js runtime for the Wrangler development server.

Usage Recommendations

Do: use KV for read-heavy configuration data that tolerates eventual consistency. Choose D1 for relational queries that need SQL support at the edge. Use R2 for large files to avoid egress costs on frequently accessed assets.

Don't: use KV for data requiring strong consistency across concurrent writes, store large binary objects in D1 when R2 is designed for object storage, or deploy Workers with blocking computations that exceed the CPU time limit.

Limitations

Workers have CPU time limits per request that restrict computation-heavy tasks. KV is eventually consistent with propagation delays across regions. D1 is optimized for read-heavy workloads and may not suit write-intensive applications.