[GH-ISSUE #537] Alternating request failures (success, fail, success...) when using Drizzle + Postgres lazily evaluated in vinext / Cloudflare Workers #114

Open
opened 2026-05-06 12:37:19 +02:00 by BreizhHardware · 5 comments

Originally created by @sindhukhrisna on GitHub (Mar 14, 2026).
Original GitHub issue: https://github.com/cloudflare/vinext/issues/537

Description

I am deploying a Next.js application to Cloudflare Workers (using vinext). I am using Drizzle ORM with the postgres driver connecting to a Supabase database.

Because Cloudflare Workers do not allow asynchronous I/O (like establishing a database connection) in the global scope, the standard Drizzle initialization fails during the build step. To bypass this, I implemented a lazy initialization pattern using a JavaScript Proxy.

While this successfully builds on both Vercel and Cloudflare Workers, runtime behavior on Cloudflare is inconsistent. Specifically, requests fail exactly every other time:

  • 1st Request: Success
  • 2nd Request: Fail
  • 3rd Request: Success
  • 4th Request: Fail

(This happens regardless of whether I use Hyperdrive or a DB pooler URL).


Steps to Reproduce

Attempt 1: Standard Drizzle Setup (Fails during build)

If I use the standard initialization, the worker fails to build/deploy due to global scope I/O restrictions.

import * as schema from "./schemas/schema-main";
import * as schema2 from "./schemas/schema-secondary";
import { drizzle } from 'drizzle-orm/postgres-js'
import { env } from "cloudflare:workers";
import postgres from 'postgres'

const poolDb = postgres(
  env.HYPERDRIVE.connectionString ?? "", {
  prepare: false,
});

export const db = drizzle({
  client: poolDb,
  schema: schema,
})

Build Error (Attempt 1)

16:48:27.291 ✘ [ERROR] A request to the Cloudflare API (/accounts/<ACCOUNT_ID>/workers/scripts/<PROJECT_NAME>/versions) failed.
16:48:27.291  Uncaught Error: Disallowed operation called within global scope. Asynchronous I/O (ex: fetch() or connect()), setting a timeout, and generating random values are not allowed within global scope. To fix this error, perform this operation within a handler. https://developers.cloudflare.com/workers/runtime-apis/handlers/
16:48:27.291    at null.<anonymous> (index.js:36463:41)
16:48:27.291   [code: 10021]

Attempt 2: Lazy Evaluation using Proxy (Builds successfully, but runtime alternates failing)

To fix the build error, I used a Proxy to lazily instantiate the database connection only when a query is actually executed.

import * as schema from "./schemas/schema-main";
import { drizzle } from 'drizzle-orm/postgres-js'
import postgres from 'postgres'

function isCloudflareWorkersRuntime(): boolean {
  return (
    typeof navigator !== "undefined" &&
    navigator.userAgent === "Cloudflare-Workers"
  );
}

// PRIMARY DB LAZY INIT
let _db: ReturnType<typeof drizzle> | null = null;
export const db = new Proxy({} as any, {
  get: (target, prop) => {
    if (!_db) {
      function connectionString() {
        if (isCloudflareWorkersRuntime()) {
          const { env } = require("cloudflare:workers");
          return env.HYPERDRIVE.connectionString ?? "";
        }
        return process.env.MY_PRIMARY_DB_URL ?? "";
      };
      
      const poolDb = postgres(
        connectionString(), {
        prepare: false,
      });
      
      _db = drizzle({
        client: poolDb,
        schema: schema,
      });
    }

    return (_db as any)[prop];
  }
}) as ReturnType<typeof drizzle>;

Expected Behavior

The lazily evaluated database connection should persist and resolve requests consistently on Cloudflare Workers, just as it does on Vercel.


Environment

  • Framework: Vinext (0.0.30) and Next.js (16.1.6)
  • Database: Supabase (Postgres)
  • ORM: Drizzle ORM (drizzle-orm/postgres-js)
  • Driver: postgres
  • Cloudflare Bindings: Hyperdrive

(Also tested without Hyperdrive using standard Pooler URLs via process.env, but the alternating failure behavior remains exactly the same).

Author
Owner

@NathanDrake2406 commented on GitHub (Mar 14, 2026):

Create database clients inside your handlers

You should always create database clients inside your request handlers (fetch, queue, and similar), not in the global scope. Workers do not allow I/O across requests (https://developers.cloudflare.com/workers/runtime-apis/bindings/#making-changes-to-bindings), and Hyperdrive's distributed connection pooling already solves for connection startup latency. Using a driver-level pool (such as new Pool() or createPool()) in the global script scope will leave you with stale connections that result in failed queries and hard errors.

https://developers.cloudflare.com/hyperdrive/concepts/connection-lifecycle/
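For illustration, the handler-scoped pattern described above can be sketched as follows. This is a hedged sketch, not code from the thread: `createDbClient` is a hypothetical stand-in for `drizzle({ client: postgres(connectionString) })`, and the `env` shape assumes the Hyperdrive binding from the issue.

```ts
// Sketch only: createDbClient is a hypothetical stand-in for the real
// drizzle(postgres(...)) setup; what matters is where it is called.
type DbClient = { query: (sql: string) => Promise<string> };

function createDbClient(connectionString: string): DbClient {
  return { query: async (sql) => `ran "${sql}" via ${connectionString}` };
}

const worker = {
  // The client is constructed inside the fetch handler, so its I/O context
  // belongs to this request and is never reused across requests.
  async fetch(_request: Request, env: { HYPERDRIVE: { connectionString: string } }) {
    const db = createDbClient(env.HYPERDRIVE.connectionString);
    return new Response(await db.query("select 1"));
  },
};

export default worker;
```

With Hyperdrive pooling connections close to the database, per-request construction adds little latency while avoiding the stale-connection failures described in this issue.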


@JamesbbBriz commented on GitHub (Mar 20, 2026):

We ran into the exact same alternating-failure pattern in production with Prisma v7 + `@prisma/adapter-pg` + Hyperdrive on Cloudflare Workers. After a lot of debugging, we found a reliable solution that's been stable in production for weeks.

Root Cause

The Proxy lazy-init pattern caches the DB client in module-global scope. In Workers, each isolate serves multiple requests, but the I/O context is per-request. A connection opened in Request A becomes invalid in Request B — hence the alternating success/fail pattern.

Solution: Per-Request Client with TTL Heuristic

Instead of caching the client globally forever, we use a 50ms TTL cache. The insight: within a single Workers request, all DB calls happen within ~50ms of the first one. After the TTL expires, the next request gets a fresh client.

```ts
// lib/db.ts — works with both Drizzle and Prisma
import { env } from "cloudflare:workers";
import { Pool } from "pg";
import { PrismaPg } from "@prisma/adapter-pg";
import { PrismaClient } from "@prisma/client";

type DbClient = PrismaClient; // or ReturnType<typeof drizzle>

let cachedClient: DbClient | null = null;
let cachedAt = 0;
const TTL_MS = 50; // Workers requests finish ~50ms after the first DB hit

function getClient() {
  const now = Date.now();
  if (!cachedClient || now - cachedAt > TTL_MS) {
    // Fresh client for each request boundary.
    // createNewClient is a placeholder for e.g. drizzle(postgres(...)).
    cachedClient = createNewClient(env.HYPERDRIVE.connectionString);
    cachedAt = now;
  }
  return cachedClient;
}

// For Prisma specifically:
function getPrisma() {
  const now = Date.now();
  if (!cachedClient || now - cachedAt > TTL_MS) {
    const pool = new Pool({ connectionString: env.HYPERDRIVE.connectionString });
    const adapter = new PrismaPg(pool);
    cachedClient = new PrismaClient({ adapter });
    cachedAt = now;
  }
  return cachedClient;
}
```

Why This Works

  • Same request: Multiple `getClient()` calls within 50ms → reuse same connection (fast, no overhead)
  • Next request: TTL expired → fresh client with new I/O context (no stale connection)
  • No alternating failures: Every request gets its own connection lifecycle

Production Context

We're running this pattern in OptiTalent — an HR platform with AI-powered candidate matching, deployed on Cloudflare Workers (Free plan, 10ms CPU limit). Stack: Vinext + Prisma v7 + Hyperdrive + R2 storage + NextAuth. The per-request pattern has been rock-solid with zero alternating failures across thousands of requests.

We also discovered that async connection pools (like `psycopg_pool.AsyncConnectionPool` on the Python backend side) have the same issue in Celery workers — the connection state leaks across task invocations. The fix is the same principle: per-unit-of-work client creation.

Happy to contribute a Prisma + Hyperdrive example to the vinext examples if that would be useful for the project.


@JamesbbBriz commented on GitHub (Mar 20, 2026):

Update: We've submitted #607 to add getRequestStore() — a framework-level per-request store backed by vinext's existing AsyncLocalStorage. This eliminates the need for the TTL heuristic entirely:

import { getRequestStore } from "vinext/request-store";

export function getDb(connectionString: string) {
  const store = getRequestStore();
  let db = store.get("db");
  if (!db) {
    db = drizzle(connectionString);  // or new PrismaClient(...)
    store.set("db", db);
  }
  return db;
  // Automatically cleaned up when request ends
}

Would love feedback from the vinext team on the API design.
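For readers unfamiliar with the mechanism behind such a store, it can be approximated with Node's `AsyncLocalStorage`. This is a hedged sketch of the underlying idea, not vinext's actual implementation; `handleRequest` is a hypothetical stand-in for the framework's request wrapper.

```ts
import { AsyncLocalStorage } from "node:async_hooks";

// Each request runs inside als.run(new Map(), ...), so store lookups are
// isolated per request and the entries are dropped when the request ends.
const als = new AsyncLocalStorage<Map<string, unknown>>();

function getRequestStore(): Map<string, unknown> {
  const store = als.getStore();
  if (!store) throw new Error("getRequestStore() called outside a request");
  return store;
}

// Simulated per-request scope: each call gets a fresh store, so the "db"
// entry never leaks from one request into the next.
async function handleRequest(id: number): Promise<string> {
  return als.run(new Map<string, unknown>(), async () => {
    const store = getRequestStore();
    if (!store.has("db")) store.set("db", `client-for-request-${id}`);
    return store.get("db") as string;
  });
}
```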


@JamesbbBriz commented on GitHub (Mar 28, 2026):

Update 2: The getRequestStore() approach evolved into cacheForRequest() in #646 — a more focused API that caches factory results per-request using function identity as the key. This directly solves the lazy-init pattern issue described in this thread without needing manual store management.
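A function-identity-keyed cache of factory results can be sketched like this. The signature is an assumption based on the description above; the actual API in the vinext PR may differ, and the module-level Map here merely stands in for the per-request storage a framework would provide.

```ts
// Hypothetical sketch of a cacheForRequest-style helper. The cache is keyed
// by the factory function itself, so each distinct factory produces one
// cached result per request. A framework would keep this Map in per-request
// storage; a module-level Map here models a single request's lifetime.
const perRequestCache = new Map<() => unknown, unknown>();

function cacheForRequest<T>(factory: () => T): T {
  if (!perRequestCache.has(factory)) {
    perRequestCache.set(factory, factory());
  }
  return perRequestCache.get(factory) as T;
}

// Within one request, repeated calls with the same factory reuse one client.
const makeDb = () => ({ createdAt: Date.now() });
const first = cacheForRequest(makeDb);
const second = cacheForRequest(makeDb);
// first === second: the factory ran exactly once
```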


@aiddroid commented on GitHub (Mar 28, 2026):

Is there a working example of using Hyperdrive with Postgres in Vinext?
