Hosting a Site on Cloudflare R2
I get nerdsniped easily. So when someone on Hacker News mentioned trying to view a 417-megapixel Andromeda galaxy panorama, I sprang into action.
More specifically, I:
- Found a JavaScript library for rendering zoomable images on the web (OpenSeadragon) that looked promising
- Converted the 417-megapixel image into a Deep Zoom image, which is a `.dzi` file and some nested directories of images
- Wrote a bare-minimum `index.html` to serve as the entrypoint
- Put it on a VM on GCE, behind Caddy for TLS
I confirmed it worked, replied to the thread on HN, made a note to turn off the VM at some point, and promptly forgot all about it.
Well, until my GCP bill reminded me about it yesterday.
Making it Cheaper
While I could have just shut off the VM and called it a day, I also didn't want to contribute to link rot on the web. So I opted to just move it, in the off-chance others stumble across the comment.
But where to move it?
Everyone and their cousin has a static site platform, so there are plenty of options. Normally, I wouldn't think twice about tossing a static site on my little basement server, but interacting with the zoomable image can generate tens of megabytes of network traffic. I've got a symmetric 1 Gbps home link, so it'd probably be fine, but I don't want to make it too easy for strangers to accidentally DoS my home while exploring the cosmos.
So I went with Cloudflare R2. Honestly, I don't remember exactly why. Probably the lack of egress fees and the general simplicity of the service. Cloudflare Pages probably makes more sense for most use cases, but I didn't care about Git integration or previews; I really just wanted the dumbest possible thing.
The Migration
To actually get it up and running, I:
- Created an R2 bucket (docs)
- Created an API token (docs)
- Installed the `aws` CLI on the VM
  - Why the AWS CLI? Because R2 exposes an S3-compatible interface
- Authenticated the CLI with `aws configure --profile cloudflare`
  - When prompted, entered all the authentication information from step 2
- Copied all the static files (`index.html` and the zoomable image files) from the VM to the R2 bucket with `aws s3 sync`
  - See the appendix for the full command.
- Added a custom domain on the R2 bucket.
  - This only works if DNS for the domain is managed by Cloudflare
  - If there's already a DNS record for the domain in question, Cloudflare will helpfully remove the old record when adding the new one.
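The authentication step can also be done non-interactively. A minimal sketch, assuming the key ID and secret from the R2 API token are in environment variables (the variable names here are my own placeholders, not anything Cloudflare defines):

```shell
# Non-interactive equivalent of `aws configure --profile cloudflare`.
# R2_ACCESS_KEY_ID / R2_SECRET_ACCESS_KEY are placeholder names; the actual
# values come from the R2 API token created earlier.
aws configure set aws_access_key_id "$R2_ACCESS_KEY_ID" --profile cloudflare
aws configure set aws_secret_access_key "$R2_SECRET_ACCESS_KEY" --profile cloudflare
# R2 doesn't use AWS regions; "auto" is the value Cloudflare documents.
aws configure set region auto --profile cloudflare
```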
…and we're done! Wait a sec for the TLS certs to be generated, and let's have a look at our freshly migrated site:

That's...not right.
The issue here is that I asked for the "dumbest possible thing", and I got it. R2 literally just serves the file at the given path, and there's no file at `/`[1], so we get a `404 Not Found`.
The solution is to add a URL rewrite rule. In the dashboard, that looks something like this:

This is basically saying, "Hey, when you receive a request at `/`, load the R2 object stored at `/index.html`."
And now we're actually done! Loading the site presents you with an unstyled, poorly margin-ed, zoomable image of the Andromeda galaxy:

It's a bit rough, sure, but the whole raison d'être was to allow folks to explore an otherwise less-accessible image. And at that, it succeeds.
Appendix
Some brief notes about how I actually created the zoomable image + integrated with OpenSeadragon.
Downloading and converting the image
```shell
# Download the full-resolution image from NASA
wget https://assets.science.nasa.gov/content/dam/science/missions/hubble/galaxies/andromeda/Hubble_M31Mosaic_2025_42208x9870_STScI-01JGY8MZB6RAYKZ1V4CHGN37Q6.jpg

# Install libvips: https://github.com/libvips/libvips
sudo apt-get install libvips-dev

# Use libvips to create the Deep Zoom image: https://www.libvips.org/API/current/Making-image-pyramids.html
vips dzsave Hubble_M31Mosaic_2025_42208x9870_STScI-01JGY8MZB6RAYKZ1V4CHGN37Q6.jpg hubble
```
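With libvips' defaults (254-pixel tiles, 1-pixel overlap, JPEG output), `dzsave` produces a layout roughly like this:

```
hubble.dzi          # small XML manifest: tile size, overlap, image dimensions
hubble_files/
  0/0_0.jpeg        # level 0: the whole image as one tiny tile
  ...
  N/x_y.jpeg        # deepest level: full-resolution tiles, one per (x, y)
```

OpenSeadragon reads the `.dzi` manifest and then fetches only the tiles needed for the current viewport and zoom level, which is what keeps a 417-megapixel image browsable.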
A Simple `index.html` to wrap OpenSeadragon
```html
<!DOCTYPE html>
<html lang="en" style="width: 100%; height: 100%;">
  <head>
    <meta charset="utf-8" />
    <meta http-equiv="x-ua-compatible" content="ie=edge" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <title>Andromeda 417 Megapixel Image</title>
  </head>
  <body style="width: 100%; height: 100%;">
    <div id="openseadragon1" style="width: 100%; height: 100%;"></div>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/openseadragon/5.0.1/openseadragon.min.js"></script>
    <script type="text/javascript">
      var viewer = OpenSeadragon({
        id: "openseadragon1",
        prefixUrl: "/",
        tileSources: "/hubble.dzi"
      });
    </script>
  </body>
</html>
```
I usually don't trust CDNs for serving JS[2], but this is a low-effort one-off project and the internet is generally full of far worse.
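For what it's worth, an SRI value is just a base64-encoded SHA-384 digest of the script. A sketch of computing one, using a stand-in local file for illustration (in practice you'd download `openseadragon.min.js` first):

```shell
# Stand-in file for illustration; substitute the real downloaded script.
printf 'console.log("demo")' > demo.js

# SRI format: "sha384-" followed by base64(sha384(file contents))
hash=$(openssl dgst -sha384 -binary demo.js | openssl base64 -A)
echo "integrity=\"sha384-$hash\""
```

The resulting string goes in the `integrity` attribute of the `<script>` tag, typically alongside `crossorigin="anonymous"`.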
Copying static files from GCE VM -> R2 Bucket
```shell
aws s3 sync \
  /usr/share/caddy \
  s3://$R2_BUCKET_NAME/ \
  --endpoint-url https://$CF_ACCOUNT_ID.r2.cloudflarestorage.com \
  --profile cloudflare
```
1. I'm not even sure that there could be a file that maps to the root path. ↩
2. At the very least, I should include a Subresource Integrity attribute to thwart supply chain attacks. ↩