[GH-ISSUE #537] Alternating request failures (success, fail, success...) when using Drizzle + Postgres lazily evaluated in vinext / Cloudflare Workers #114
Originally created by @sindhukhrisna on GitHub (Mar 14, 2026).
Original GitHub issue: https://github.com/cloudflare/vinext/issues/537
Description
I am deploying a Next.js application to Cloudflare Workers (using vinext). I am using Drizzle ORM with the postgres driver connecting to a Supabase database.
Because Cloudflare Workers do not allow asynchronous I/O (like establishing a database connection) in the global scope, the standard Drizzle initialization fails during the build step. To bypass this, I implemented a lazy initialization pattern using a JavaScript Proxy.
While this builds successfully on both Vercel and Cloudflare Workers, runtime behavior on Cloudflare is inconsistent: requests fail exactly every other time (success, fail, success, fail, ...). This happens regardless of whether I use Hyperdrive or a DB pooler URL.
Steps to Reproduce
Attempt 1: Standard Drizzle Setup (Fails during build)
If I use the standard initialization, the worker fails to build/deploy due to global scope I/O restrictions.
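The original snippet was not preserved here; below is a minimal sketch of the standard module-scope setup being described (assuming `drizzle-orm/postgres-js`):

```ts
// lib/db.ts: standard setup (sketch; the issue's original snippet was not preserved)
import { drizzle } from "drizzle-orm/postgres-js";
import postgres from "postgres";

// The client is created at module scope, i.e. during Worker startup,
// which is exactly where Workers disallow I/O.
const client = postgres(process.env.DATABASE_URL!);
export const db = drizzle(client);
```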
Attempt 2: Lazy Evaluation using Proxy (Builds successfully, but fails every other request at runtime)
To fix the build error, I used a Proxy to lazily instantiate the database connection only when a query is actually executed.
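The exact snippet was likewise not preserved; here is a sketch of the Proxy pattern described (identifiers are illustrative):

```ts
// lib/db.ts: lazy init via Proxy (sketch; identifiers are illustrative)
import { drizzle, type PostgresJsDatabase } from "drizzle-orm/postgres-js";
import postgres from "postgres";

let _db: PostgresJsDatabase | null = null;

// Nothing performs I/O at module-evaluation time; the real client is
// created on first property access, i.e. when a query actually runs.
export const db = new Proxy({} as PostgresJsDatabase, {
  get(_target, prop) {
    _db ??= drizzle(postgres(process.env.DATABASE_URL!));
    const value = Reflect.get(_db, prop);
    return typeof value === "function" ? value.bind(_db) : value;
  },
});
```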
Expected Behavior
The lazily evaluated database connection should persist and resolve requests consistently on Cloudflare Workers, just as it does on Vercel.
Environment
Drizzle ORM (`drizzle-orm/postgres-js`) with the `postgres` driver. (Also tested without Hyperdrive using standard Pooler URLs via `process.env`, but the alternating failure behavior remains exactly the same.)

@NathanDrake2406 commented on GitHub (Mar 14, 2026):
You should always create database clients inside your request handlers (fetch, queue, and similar), not in the global scope. Workers do not allow I/O across requests, and Hyperdrive's distributed connection pooling already solves for connection startup latency. Using a driver-level pool (such as new Pool() or createPool()) in the global script scope will leave you with stale connections that result in failed queries and hard errors.
https://developers.cloudflare.com/hyperdrive/concepts/connection-lifecycle/
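For illustration, a minimal sketch of that per-request pattern with the `postgres` driver (the `HYPERDRIVE` binding name matches the snippets later in this thread; `Hyperdrive` and `ExecutionContext` types come from `@cloudflare/workers-types`):

```ts
// Sketch: create the client inside the handler so its I/O belongs to this request.
import postgres from "postgres";

interface Env {
  HYPERDRIVE: Hyperdrive; // Hyperdrive binding configured in wrangler config
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const sql = postgres(env.HYPERDRIVE.connectionString);
    const rows = await sql`select 1 as ok`;
    // Close the driver connection after the response is sent; Hyperdrive
    // keeps the underlying Postgres connections pooled close to the database.
    ctx.waitUntil(sql.end());
    return Response.json(rows);
  },
};
```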
@JamesbbBriz commented on GitHub (Mar 20, 2026):
We ran into the exact same alternating-failure pattern in production with Prisma v7 + `@prisma/adapter-pg` + Hyperdrive on Cloudflare Workers. After a lot of debugging, we found a reliable solution that's been stable in production for weeks.
Root Cause
The Proxy lazy-init pattern caches the DB client in module-global scope. In Workers, each isolate serves multiple requests, but the I/O context is per-request. A connection opened in Request A becomes invalid in Request B — hence the alternating success/fail pattern.
Solution: Per-Request Client with TTL Heuristic
Instead of caching the client globally forever, we use a 50ms TTL cache. The insight: within a single Workers request, all DB calls happen within ~50ms of the first one. After the TTL expires, the next request gets a fresh client.
```ts
// lib/db.ts — works with both Drizzle and Prisma
import { Pool } from "pg";
import { PrismaPg } from "@prisma/adapter-pg";
import { PrismaClient } from "@prisma/client";

type DbClient = PrismaClient; // substitute your Drizzle database type as needed

// App-specific factory for the generic variant (e.g. a Drizzle setup).
declare function createNewClient(connectionString: string): DbClient;

let cachedClient: DbClient | null = null;
let cachedAt = 0;
const TTL_MS = 50; // Workers requests finish ~50ms after the first DB hit

// `env` must be threaded in from the request handler; it is not a global in Workers.
export function getClient(env: { HYPERDRIVE: Hyperdrive }): DbClient {
  const now = Date.now();
  if (!cachedClient || now - cachedAt > TTL_MS) {
    // TTL expired: assume we crossed a request boundary; build a fresh client.
    cachedClient = createNewClient(env.HYPERDRIVE.connectionString);
    cachedAt = now;
  }
  return cachedClient;
}

// For Prisma specifically:
export function getPrisma(env: { HYPERDRIVE: Hyperdrive }): PrismaClient {
  const now = Date.now();
  if (!cachedClient || now - cachedAt > TTL_MS) {
    const pool = new Pool({ connectionString: env.HYPERDRIVE.connectionString });
    const adapter = new PrismaPg(pool);
    cachedClient = new PrismaClient({ adapter });
    cachedAt = now;
  }
  return cachedClient;
}
```
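A hypothetical call site, assuming the `env`-threading shown in the sketch above (the `user` model is illustrative):

```ts
// Inside a request handler, where `env` is available:
export async function handleRequest(request: Request, env: { HYPERDRIVE: Hyperdrive }) {
  const prisma = getPrisma(env);
  const users = await prisma.user.findMany(); // illustrative model
  return Response.json(users);
}
```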
Why This Works
The cached client is only ever reused by calls that land within 50ms of one another, which in practice means calls inside a single request. By the time the next request arrives the TTL has expired, so it gets a freshly created client instead of reusing a connection opened in another request's I/O context.
Production Context
We're running this pattern in OptiTalent — an HR platform with AI-powered candidate matching, deployed on Cloudflare Workers (Free plan, 10ms CPU limit). Stack: Vinext + Prisma v7 + Hyperdrive + R2 storage + NextAuth. The per-request pattern has been rock-solid with zero alternating failures across thousands of requests.
We also discovered that async connection pools (like `psycopg_pool.AsyncConnectionPool` on the Python backend side) have the same issue in Celery workers — the connection state leaks across task invocations. The fix is the same principle: per-unit-of-work client creation.
Happy to contribute a Prisma + Hyperdrive example to the vinext examples if that would be useful for the project.
@JamesbbBriz commented on GitHub (Mar 20, 2026):
Update: We've submitted #607 to add `getRequestStore()`, a framework-level per-request store backed by vinext's existing `AsyncLocalStorage`. This eliminates the need for the TTL heuristic entirely. Would love feedback from the vinext team on the API design.
@JamesbbBriz commented on GitHub (Mar 28, 2026):
Update 2: The `getRequestStore()` approach evolved into `cacheForRequest()` in #646, a more focused API that caches factory results per-request using function identity as the key. This directly solves the lazy-init pattern issue described in this thread without needing manual store management.
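For illustration only, a sketch of what usage might look like; `cacheForRequest`'s exact signature and import path are assumptions based on the PR description, not a confirmed API:

```ts
// Hypothetical sketch: API shape assumed from the PR description above.
import { cacheForRequest } from "vinext"; // assumed import path
import { drizzle, type PostgresJsDatabase } from "drizzle-orm/postgres-js";
import postgres from "postgres";

// A stable factory: function identity serves as the per-request cache key,
// so it must be defined once at module scope.
const makeDb = (): PostgresJsDatabase =>
  drizzle(postgres(process.env.DATABASE_URL!));

// Called anywhere inside a request (route handler, server action, ...):
export function getDb(): PostgresJsDatabase {
  // First call in a request runs the factory; later calls reuse its result.
  // The next request starts with an empty store, so clients never cross requests.
  return cacheForRequest(makeDb);
}
```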
@aiddroid commented on GitHub (Mar 28, 2026):
Is there a working example of using Hyperdrive with Postgres in vinext?