Adding TLS to your S3 static site with Cloudfront

Step n+1 in the process of moving josh.sg to a static site (steps 0 through n are right here) was to enable TLS encryption. Gone are the days when you had to fork out squillions of dollars for an SSL certificate if you wanted that fancy padlock in the address bar; these days, AWS Certificate Manager hands them out for free. If you’re not wedded to the AWS ecosystem, the good folks at Let’s Encrypt hand out free certs as well, with their Clarke’s-Third-Law-level magic configuration software. And starting from July, Chrome will flag your site as “Not secure” if it’s not TLS-encrypted.

The tricky bit is that for whatever reason (and if I have any readers at AWS, can you pop this in as a feature request?), S3 static hosting doesn’t natively support TLS encryption. At all. If you want the padlock, you have to add two extra services to your repertoire: Cloudfront for the TLS, and AWS Certificate Manager for the certificate handling. It works beautifully once it’s up and running, but the process is finicky, filled with gotchas, and fails in non-obvious ways if you make even a single mistake. If you’re going through the same steps, hopefully you can learn from my many fails.

Step 0: have a working S3 static site

For starters, you’ll want a working statically-served site out of an Amazon S3 bucket. You’ll obtain your certificate, configure Cloudfront, and then at the last minute, flick the switch using Route 53 or your DNS provider of choice (though really, if you’re already so committed to AWS that you’re using S3, EC2, and Cloudfront, you’re probably using Route 53 as well). If something goes wrong, you can flick back to your “naked” S3 site in a few seconds and try again.

Step 1: obtain a TLS certificate

From AWS Certificate Manager, select “Request a Certificate”, and choose a public certificate. You’ll want to add at least two names to this certificate:

  • Your root domain (example.com)
  • A wildcard for subdomains (*.example.com)

The wildcard ensures that anyone who hits www.example.com sees TLS encryption as well.

If you’re using Route 53, you can use DNS validation; there’s a magic button later in the process that automatically adds the validation fields to your Route 53 config.
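If you’d rather script the request than click through the console, here’s a minimal boto3 sketch of the same thing. The domain names are placeholders for your own; one gotcha worth baking into the script is that a cert destined for Cloudfront has to be issued in us-east-1, regardless of where your bucket lives.

```python
import boto3

# Certs used by Cloudfront must live in us-east-1, no matter where your bucket is.
acm = boto3.client("acm", region_name="us-east-1")

# example.com is a placeholder; swap in your own root domain.
response = acm.request_certificate(
    DomainName="example.com",
    SubjectAlternativeNames=["*.example.com"],  # the wildcard covers www. and friends
    ValidationMethod="DNS",
)
print(response["CertificateArn"])  # hang on to this ARN for the Cloudfront step
```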

Step 1a: push a new version of your site to S3 with an https:// base URL

If you’re using some sort of CMS or static-site generator like Hugo, you’ll need to regenerate your site with an https:// base URL instead of http://. Do that now, and push the site to S3 before you go to the next step. We’re about to mirror copies of your static site across the world; if you mirror a broken site from S3 to Cloudfront, it’s a (small) pain in the arse to fix. Better to update the site now and save time.
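If you push with a script rather than `aws s3 sync`, a rough boto3 sketch looks like this. The `public/` directory and the `josh.sg` bucket name are assumptions based on Hugo’s defaults and my own setup; adjust to taste.

```python
import mimetypes
from pathlib import Path

import boto3

s3 = boto3.client("s3")
site_root = Path("public")  # Hugo's default output directory (assumed)
bucket = "josh.sg"          # your bucket name here

for path in site_root.rglob("*"):
    if path.is_file():
        key = path.relative_to(site_root).as_posix()
        content_type = mimetypes.guess_type(str(path))[0] or "binary/octet-stream"
        # Setting ContentType matters: without it, browsers may refuse to treat
        # your CSS as CSS once it's served through Cloudfront.
        s3.upload_file(
            str(path), bucket, key, ExtraArgs={"ContentType": content_type}
        )
```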

Step 2: set up a Cloudfront distribution

Using Cloudfront as an “S3 TLS gateway” is kind of an off-label use of the product. We’re just trying to whack TLS in front of an S3 bucket, not mirror the content in eleventy-three edge locations across the world, and frankly I’m a bit surprised AWS doesn’t offer TLS natively on S3 buckets with web hosting enabled. But here we are.

A Cloudfront distribution is the configuration for a set of files mirrored to Cloudfront: where to get the originals from, where to mirror them to, and, most importantly for us, how they should be served in response to requests.

Here’s how you’ll want to configure your Cloudfront distro:

  • Origin domain name should be your S3 bucket’s static-website endpoint, not the URL of your site! For example: http://josh.sg.s3-website-us-east-1.amazonaws.com is right; http://josh.sg is wrong.
  • Viewer Protocol Policy: Pick redirect HTTP to HTTPS.

Under Distribution Settings:

  • Price Class: Pick Use only US, Canada, and Europe. You’re not doing this to optimise your delivery to Asia, so pick the cheapest option.
  • Alternate domain names: Add your root domain (example.com), the www. prefix (www.example.com), and any other domain names that you’ll be serving off this site. This makes the DNS configuration a lot easier further down the line.
  • SSL certificate: Pick Custom SSL certificate, and then choose the certificate you created in step 1.
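If you’d rather script the distribution than click through the console, the settings above map onto boto3’s create_distribution call roughly like this. Treat it as a sketch: the website endpoint, domain names, and certificate ARN are placeholders, and I’ve only filled in the fields this walkthrough cares about.

```python
import time

import boto3

cloudfront = boto3.client("cloudfront")

origin_id = "s3-website-origin"
cert_arn = "arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE"  # from step 1

distribution = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # any unique string
        "Comment": "TLS front-end for the S3 static site",
        "Enabled": True,
        "PriceClass": "PriceClass_100",  # "Use only US, Canada, and Europe"
        "Aliases": {"Quantity": 2, "Items": ["example.com", "www.example.com"]},
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": origin_id,
                    # The S3 *website* endpoint, not your site's own domain.
                    "DomainName": "example.com.s3-website-us-east-1.amazonaws.com",
                    # Website endpoints only speak HTTP, so Cloudfront fetches over
                    # HTTP and serves visitors over HTTPS.
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "http-only",
                    },
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": origin_id,
            "ViewerProtocolPolicy": "redirect-to-https",
            "ForwardedValues": {
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
            "TrustedSigners": {"Enabled": False, "Quantity": 0},  # no signed URLs
            "MinTTL": 0,
        },
        "ViewerCertificate": {
            "ACMCertificateArn": cert_arn,
            "SSLSupportMethod": "sni-only",
            "MinimumProtocolVersion": "TLSv1.1_2016",
        },
    }
)
print(distribution["Distribution"]["DomainName"])  # the *.cloudfront.net name
```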

Step 3: Coffee

Once you hit “go” on the Cloudfront distribution, AWS mirrors copies of your site to the Cloudfront edge locations covered by the price class you picked in step 2. This’ll take a while, so go and brew yourself a pot of coffee.
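If you’re scripting the whole thing, boto3 ships a waiter that polls until the distribution reports Deployed, so you don’t have to keep refreshing the console. The distribution ID below is a placeholder for whatever create_distribution handed back.

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Blocks until the distribution's status flips to "Deployed".
# Plenty of time to make that coffee.
cloudfront.get_waiter("distribution_deployed").wait(Id="E2EXAMPLE123")
```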

Step 4: Re-point DNS from S3 to Cloudfront

Once the Cloudfront distribution’s status is Deployed, hop over to Route 53 and re-point the A records (just the A records) for your site from the S3 bucket to your new Cloudfront distribution. In Route 53, these are alias records; the distribution’s cloudfront.net domain is the alias target.
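Scripted, that re-point is an UPSERT of an alias A record. The hosted zone ID, domain, and cloudfront.net name below are placeholders, but Z2FDTNDATAQYW2 is the fixed hosted zone ID Amazon uses for every Cloudfront alias target.

```python
import boto3

route53 = boto3.client("route53")

# Repeat the change for www.example.com and any other alternate domain names.
route53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLE12345",  # your site's hosted zone
    ChangeBatch={
        "Comment": "Point the apex at Cloudfront instead of the S3 bucket",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.com",
                    "Type": "A",
                    "AliasTarget": {
                        # Fixed zone ID for all Cloudfront alias targets.
                        "HostedZoneId": "Z2FDTNDATAQYW2",
                        "DNSName": "d111111abcdef8.cloudfront.net",
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ],
    },
)
```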

If the above steps have gone smoothly, once the DNS changes have propagated, you should be able to hit your site and have it served over TLS.

Step n: Troubleshooting

I have run into literally all of these in the process of getting josh.sg secured. Learn from my fail.

  • If yoursite.com works but www.yoursite.com doesn’t, or vice versa: you’ve forgotten either the root domain or the wildcard in your TLS cert (there’s a quick way to check this in the sketch after this list). Apply for a new cert (step 1) with both, then reconfigure the Cloudfront distribution to use the new cert;
  • If your homepage renders, but files stored in subdirectories give access-denied errors: you’ve probably used your site’s regular domain name instead of the S3 URL in the Cloudfront distribution. Delete the Cloudfront distribution and go back to step 2;
  • If your homepage sort-of renders, but your CSS doesn’t load, and/or if files in subdirectories time out: you’ve probably forgotten to update your site’s base URL to https. Invalidate the cached files in Cloudfront (see below) and go back to step 1a;
  • If you’re trying to repoint the DNS from the S3 bucket to Cloudfront, and your Cloudfront distro isn’t showing up in the list of alias targets, you’ve probably forgotten to add the alternate domain names to the Cloudfront distribution configuration. Add those in (see step 2), then continue from step 4.
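For the first of those failures, here’s a quick way to confirm which names actually ended up on the certificate (the ARN is a placeholder for your own):

```python
import boto3

acm = boto3.client("acm", region_name="us-east-1")

cert = acm.describe_certificate(
    CertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE"
)["Certificate"]

# You want to see both the root domain and the wildcard listed here.
print(cert["DomainName"], cert["SubjectAlternativeNames"])
```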

Step n+1: Fixing a bad push

If you push a site to S3 that still thinks the base URL is http://example.com instead of https://example.com, or if you (ahem) accidentally push all of your working directories to S3 instead of the rendered site and watch in horror as those get mirrored to Cloudfront, it’s easy to fix.

Cloudfront normally expires pages from its cache after 24 hours, less if they’re infrequently accessed. But you can force Cloudfront to prematurely evict all your pages from its cache using the “Invalidate” function.

To invalidate your whole site, browse to your Cloudfront distribution, select the Invalidations tab, hit “Create Invalidation”, and type /* in the “Invalidation Paths” field.
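The same thing from a script, if you’d rather not click through the console (the distribution ID is a placeholder):

```python
import time

import boto3

cloudfront = boto3.client("cloudfront")

# "/*" wipes the whole distribution's cache, and still counts as a single path.
cloudfront.create_invalidation(
    DistributionId="E2EXAMPLE123",
    InvalidationBatch={
        "CallerReference": str(time.time()),  # any unique string
        "Paths": {"Quantity": 1, "Items": ["/*"]},
    },
)
```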

AWS does charge for manual invalidations, but they only charge when you exceed 1,000 invalidation paths in a month. Invalidating * or /*, even if that means thousands of files get evicted, only counts as one invalidation path, so don’t be afraid of screwing up.

In conclusion

Once you’ve done this, you should have a nicely padlocked static site, being served off S3 via Cloudfront. Ping me on Twitter with your feedback.