Benchmarking Cloudflare R2 HEAD Requests from Cloudflare Workers
Motivation
Why would I do this? For my latest project, mapvoyage, I need to serve text files (wikitext, to be specific) directly to the user where possible. In a separate function, I also need to select one wikitext file to display from a list of Wikidata IDs, each of which may or may not have an associated wikitext file. Latency is important and I'd like the data to be edge-available if possible. R2 with Workers seemed ideal here: the client can download the files from R2 directly, and I can look up the URLs from a separate worker, which hopefully shouldn't have a long read latency.
The Benchmark
Luckily this was simple enough that ChatGPT could generate a basic script. I could upload it to Cloudflare Workers directly via the dashboard and only needed to connect the worker to the R2 bucket. The test files each contain a single character, 1 through 5; 4.txt is excluded as it didn't upload for some reason.
export default {
  async fetch(request, env) {
    // Define the list of object keys to test.
    const keys = ["1.txt", "2.txt", "3.txt", "5.txt"];

    // Start all HEAD requests in parallel and measure each one's latency.
    const promises = keys.map(async (key) => {
      const start = performance.now();
      const headResponse = await env.MY_R2_BUCKET.head(key);
      const end = performance.now();
      const latency = end - start;
      return { key, latency, found: !!headResponse };
    });
    const results = await Promise.all(promises);

    // Filter only successful responses.
    const successful = results.filter((result) => result.found);
    const maxLatency = successful.length > 0
      ? Math.max(...successful.map((r) => r.latency))
      : 0;

    // Build a report with each key's latency and the maximum.
    const output = results.map((r) =>
      `Key: ${r.key} - ${r.found ? r.latency.toFixed(2) + " ms" : "Not Found"}`
    ).join("\n") +
      `\nMaximum latency: ${maxLatency.toFixed(2)} ms`;
    return new Response(output, { status: 200 });
  }
};
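Connecting the worker to the bucket boils down to an R2 binding. A minimal sketch of the equivalent wrangler.toml entry is below; the binding name matches the `env.MY_R2_BUCKET` used in the script, while the bucket name is a placeholder, since the post doesn't state it:

```toml
# Sketch of the R2 binding the dashboard sets up; bucket_name is a placeholder.
[[r2_buckets]]
binding = "MY_R2_BUCKET"            # exposed as env.MY_R2_BUCKET in the worker
bucket_name = "mapvoyage-wikitext"  # hypothetical bucket name
```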
The Results
The bucket's region is Eastern Europe. Calling the worker from Europe, I got the following responses:
Uncached/First Request
Key: 1.txt - 379.00 ms
Key: 2.txt - 383.00 ms
Key: 3.txt - 375.00 ms
Key: 5.txt - 663.00 ms
Maximum latency: 663.00 ms
Cached/Subsequent Requests
Key: 1.txt - 52.00 ms
Key: 2.txt - 47.00 ms
Key: 3.txt - 48.00 ms
Key: 5.txt - 54.00 ms
Maximum latency: 54.00 ms
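With cached HEAD latencies around 50 ms, the lookup function from the motivation looks feasible. As a minimal sketch (a hypothetical helper, not from the post), here is how the worker could pick the first Wikidata ID that has an associated wikitext file, using the same parallel-HEAD pattern as the benchmark. `bucket` stands in for the R2 binding, whose `head(key)` resolves to `null` when the object is missing, and the `${id}.txt` naming is an assumption:

```javascript
// Hypothetical helper: return the first ID whose wikitext object exists in R2.
// `bucket` is an R2-binding-like object whose head(key) resolves to null
// when the key is absent.
async function firstExistingKey(bucket, ids) {
  // Issue all HEAD requests in parallel, as in the benchmark above.
  const heads = await Promise.all(ids.map((id) => bucket.head(`${id}.txt`)));
  const index = heads.findIndex((h) => h !== null);
  return index === -1 ? null : ids[index];
}
```

Running the HEADs in parallel keeps the worst case at one round trip instead of one per candidate ID, which is what the benchmark's numbers measure.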