[PR #332] [MERGED] Fix fetch cache key collisions for Request and FormData bodies #484

Closed
opened 2026-05-06 13:08:20 +02:00 by BreizhHardware · 0 comments

📋 Pull Request Information

Original PR: https://github.com/cloudflare/vinext/pull/332
Author: @JaredStowell
Created: 3/7/2026
Status: Merged
Merged: 3/8/2026
Merged by: @james-elicx

Base: main ← Head: jstowell/fix-cache-key-collisions


📝 Commits (7)

  • 0ac97ab Fix fetch cache key collisions for Request and FormData bodies
  • 1373156 Add additional tests
  • 7887a36 Address codex feedback by reading incrementally with early exit on content-length check + regression test
  • 44210a8 Add regression test for already-consumed body
  • d659ed1 Harden fetch cache keying for Request form bodies + tests
  • e2241aa Preserve FormData insertion order + add file name/type into cache key payload + tests
  • 0951604 Fix PR feedback + additional regression tests

📊 Changes

2 files changed (+532 additions, -32 deletions)


📝 packages/vinext/src/shims/fetch-cache.ts (+161 -29)
📝 tests/fetch-cache.test.ts (+371 -3)

📄 Description

Summary

Fix fetch cache key generation so cached requests are keyed by the actual effective request body, including when the body is provided on a Request object.

This also fixes ambiguous FormData serialization that could cause distinct payloads to collapse into the same cache entry.

Problem

Vinext’s fetch cache key generation did not fully account for the request body in all supported call shapes.

Request bodies were ignored

The cache key logic correctly merged headers from both input and init, but it only serialized init.body. If the body lived on a Request object, vinext effectively treated the request as body-less for cache-key purposes.

Example:

await fetch(
  new Request("https://api.example.com/search", {
    method: "POST",
    body: JSON.stringify({ query: "alpha" }),
    headers: { "content-type": "application/json" },
  }),
  { next: { revalidate: 60 } },
)

await fetch(
  new Request("https://api.example.com/search", {
    method: "POST",
    body: JSON.stringify({ query: "bravo" }),
    headers: { "content-type": "application/json" },
  }),
  { next: { revalidate: 60 } },
)

Before this change, both requests could produce the same cache key even though the payloads were different.

That makes cached POST-style fetches unsafe: a response generated for one payload can be reused for another payload.

Concrete failure modes include:

  • a search endpoint returning results for the wrong query
  • a filtered API response being reused for the wrong filter set
  • application data keyed by request body being served from the wrong cached entry

Multi-value FormData could collide

FormData values were serialized by joining them with commas, which is ambiguous.

Example:

const formA = new FormData()
formA.append("name", "a,b")
formA.append("name", "c")
// serialized as: "name=a,b,c"

const formB = new FormData()
formB.append("name", "a")
formB.append("name", "b,c")
// also serialized as: "name=a,b,c"

These are different payloads, but they produced the same cache-key fragment.

Root Cause

The issue was in the fetch cache key builder, not in cache storage itself.

  • collectHeaders() already handled Request inputs correctly
  • buildFetchCacheKey() used serializeBody(init)
  • serializeBody() only looked at init.body
  • Request bodies were therefore omitted unless duplicated in init
  • FormData entries were flattened with comma-joining, which is not injective

In other words, the cache key did not always represent the true effective request.
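The comma-joining flaw can be reproduced in isolation. Below is a hedged sketch of the pre-fix behavior; the function name and data shapes are illustrative, not the actual vinext source:

```typescript
// Illustrative sketch of the ambiguous pre-fix serialization:
// values for a key are flattened with join(","), which is not injective.
function serializeFormDataLegacy(entries: Array<[string, string]>): string {
  const byKey = new Map<string, string[]>();
  for (const [key, value] of entries) {
    const list = byKey.get(key) ?? [];
    list.push(value);
    byKey.set(key, list);
  }
  return [...byKey].map(([key, values]) => `${key}=${values.join(",")}`).join("&");
}

// Distinct payloads collapse to the same fragment:
// [["name", "a,b"], ["name", "c"]] and [["name", "a"], ["name", "b,c"]]
// both serialize to "name=a,b,c".
```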

What Changed

Fetch cache keying

  • Include Request object bodies in cache key generation
  • Support body extraction from Request inputs without mutating the original fetch behavior
  • Preserve the original request body for the underlying network fetch
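One way to achieve this, assuming standard Fetch API semantics, is to read from a clone of the `Request` so the original body stream stays unconsumed. A sketch, not the actual vinext implementation:

```typescript
// Sketch: extract the effective body text for cache keying.
// A string init.body wins over the Request body, mirroring fetch
// semantics where init overrides the input Request.
async function bodyForCacheKey(
  input: Request | string,
  init?: RequestInit,
): Promise<string | undefined> {
  if (typeof init?.body === "string") return init.body;
  if (input instanceof Request && input.body !== null) {
    // clone() tees the body stream, leaving `input` intact for the
    // real network fetch that follows.
    return await input.clone().text();
  }
  return undefined;
}
```

Because `clone()` tees the underlying stream, the original request can still be forwarded to the network untouched after the key is built.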

FormData serialization

  • Replace ambiguous comma-joined serialization with structured serialization
  • Serialize per-key value lists in a format that preserves boundaries and ordering semantics
  • Keep existing oversized-body protections in place
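A minimal injective encoding, shown here for illustration (the actual vinext format may differ), length-prefixes each field so value boundaries survive and insertion order is preserved:

```typescript
// Sketch: boundary-preserving FormData entry serialization.
// Length prefixes make the encoding injective: no choice of keys or
// values (commas included) can produce the same output twice, and
// entries are emitted in insertion order.
function serializeEntries(entries: Array<[string, string]>): string {
  return entries
    .map(([key, value]) => `${key.length}:${key}|${value.length}:${value}`)
    .join(";");
}
```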

Tests

Add regression coverage for:

  • different Request bodies producing distinct cache entries
  • identical Request bodies reusing the same cache entry
  • ambiguous comma-containing multi-value FormData payloads not colliding
  • Request bodies still being forwarded intact after cache-key generation
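The shape of the first two regressions, sketched with plain assertions against a hypothetical key builder (names are illustrative, not the vinext API):

```typescript
// Hypothetical stand-in for vinext's internal cache key builder.
function buildKey(method: string, url: string, body?: string): string {
  return JSON.stringify([method, url, body ?? null]);
}

const alpha = buildKey("POST", "https://api.example.com/search", JSON.stringify({ query: "alpha" }));
const bravo = buildKey("POST", "https://api.example.com/search", JSON.stringify({ query: "bravo" }));
const alpha2 = buildKey("POST", "https://api.example.com/search", JSON.stringify({ query: "alpha" }));

// Distinct bodies must not share a key; identical bodies must.
console.assert(alpha !== bravo, "distinct bodies collided");
console.assert(alpha === alpha2, "identical bodies diverged");
```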

Examples

Example 1: POST search requests

Before:

await fetch(
  new Request("https://api.example.com/search", {
    method: "POST",
    body: JSON.stringify({ query: "alpha" }),
    headers: { "content-type": "application/json" },
  }),
  { next: { revalidate: 60 } },
)

await fetch(
  new Request("https://api.example.com/search", {
    method: "POST",
    body: JSON.stringify({ query: "bravo" }),
    headers: { "content-type": "application/json" },
  }),
  { next: { revalidate: 60 } },
)

These could collide and reuse the wrong cached response.

After:

  • the request body is included in the cache key
  • each distinct payload gets its own cache entry

Example 2: Multi-value form submissions

Before:

const formA = new FormData()
formA.append("name", "a,b")
formA.append("name", "c")

const formB = new FormData()
formB.append("name", "a")
formB.append("name", "b,c")

These serialized to the same cache-key fragment.

After:

  • FormData values are serialized in a structured format
  • these payloads no longer collide

Why This Approach

This change fixes the root cause while keeping the existing caching model intact.

The principle is simple:

  • semantically different requests must not share a cache entry
  • semantically identical requests should still reuse the same cache entry

The patch stays within the current fetch cache architecture and only changes key generation and regression coverage.

Files Changed

  • packages/vinext/src/shims/fetch-cache.ts
  • tests/fetch-cache.test.ts

Test Plan

Ran targeted regression coverage for the affected surface:

pnpm dlx vitest run tests/fetch-cache.test.ts

🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.
