I still very much enjoy static websites. Building websites with dynamically-created pages, databases, and more potential themes than I will ever have time to browse is all very well, but sometimes I just want something to be rather more bicycle than car. There is something pleasing about creating a design, generating some pages, uploading them to something like AWS S3, and then forgetting all about it whilst you wonder how many distinct sounds can be heard at any one time along the entirety of the banks of some river beside which you happen to be strolling.
It is hard to beat a static website in terms of hosting cost or uptime. Certainly, I can use any of a number of quality hosting companies, perhaps with a higher cost, or I can set things up myself with a reliable load-balancer or two and whatever method of monitoring I am favouring this season. And if I’ve got something requiring such then that is exactly what I should do, and I do it gladly. But, glorious though a finely-crafted fountain pen is, sometimes a pencil is not only sufficient but also beautiful in its simplicity.
These days, security is perhaps becoming ever more important, as the percentage of our lives committed to the swirling pixels grows and our concerns about privacy and its violation increase. But security is also becoming easier to provision, and cheaper. So, without further ado, I present a brief recipe for one particular approach to hosting secure static websites. I am indebted to various manuals, blog posts, and other digitally-distributed missives from which I have benefitted through various stages of originally putting this into practice—but I credit none of them, largely because I can’t remember when and what and who I read. This specific setup is unashamedly AWS-centric, not because it can’t be done with other services, but because this group of services is both cheap and, if personification of digital services be permitted, cheerful.
the content, which in AWS S3 doth dwell
Obviously, you have lovely content. In AWS S3, create two Buckets: one for www., one for non-www.. Out of habit from when I used to host insecure static websites, I name these exactly after the domain name. Let us presume you have acquired example.com from a purveyor of domain names.
Bucket the First: www.example.com
Permissions—bucket policy allowing views from the honest public (consult the Sample Bucket Policies)
Static Website Hosting—Enable website hosting; Index Document: index.html; make a note of Endpoint for later
Bucket the Second: example.com
Static Website Hosting—Redirect all requests to another host name; Redirect all requests to: www.example.com
Add lots of juicy content to Bucket the First, and leave Bucket the Second empty, which will redirect properly to Bucket the First (only the insecure route, but I’m alright with that, at least for now).
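For reference, the sort of bucket policy meant above (adapted from the AWS Sample Bucket Policies; the Resource ARN here names our hypothetical Bucket the First) looks like this:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.example.com/*"
    }
  ]
}
```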
the certificate, for which i used OpenSSL
Now to generate the SSL certificate, which will be uploaded to AWS CloudFront. I’m using OpenSSL to make this, but variations are possible. You’ll need to obtain a signed certificate from a Certificate Authority. The cost can vary between that of a few coffees per year and that of a whole coffee machine, complete with a grinder for the beans. The cheapest form of validation, in which you are the least vetted, is a simple verification that you own the domain. Namecheap is one of many places which sell these, as well as the whole-bean varieties. I think they’re also pretty good for obtaining domains.
openssl genrsa -out www.example.com.key 2048
openssl req -new -key www.example.com.key -out www.example.com.csr
Keep the .key very secret; don’t even tell your dog. Send the .csr when asked by the merchant of SSLs, and verify your domain in whichever manner pleases them. When you’ve done this, you should be the proud owner of an SSL certificate. If you’re not because you’re not proud, it is fine to proceed, but if you’re not because you don’t have a certificate, there’s not much point in proceeding until you have one.
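If you would like to check what you are about to send before sending it, the CSR can be generated non-interactively and then inspected; this is a sketch in which the subject fields are illustrative placeholders, so substitute your own:

```shell
# Generate a private key and a CSR without interactive prompts.
# The -subj fields below are placeholders; use your own details.
openssl genrsa -out www.example.com.key 2048
openssl req -new -key www.example.com.key -out www.example.com.csr \
  -subj "/C=GB/O=Example/CN=www.example.com"

# Inspect the CSR's subject and check its self-signature
# before posting it off to the merchant of SSLs.
openssl req -in www.example.com.csr -noout -subject -verify
```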
You probably received, or were linked to, a number of files containing Root and Intermediate certificates from the Certificate Authority. These need to be combined into a single file, and the order in which the combination is made is important. You probably received a link to specific instructions when you received your certificate. For Comodo PositiveSSL, I had to do something like:
cat COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt > www_example_com.ca-bundle.pem
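A quick way to check that a certificate and its bundle hang together is openssl verify. The real Comodo chain can’t be reproduced here, so this sketch uses a throwaway self-signed CA purely to demonstrate the command; with the real files you would verify www_example_com.crt against www_example_com.ca-bundle.pem instead:

```shell
# Throwaway root CA (illustrative only; stands in for the real chain).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.crt -subj "/CN=Demo Root CA"

# Leaf key and CSR, then sign the leaf with the throwaway CA.
openssl req -newkey rsa:2048 -nodes \
  -keyout leaf.key -out leaf.csr -subj "/CN=www.example.com"
openssl x509 -req -in leaf.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out leaf.crt -days 1

# Walk the chain; reports OK if the chain is sound.
openssl verify -CAfile ca.crt leaf.crt
```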
the distribution, which AWS CloudFront distributes
Next is to distribute your website using the AWS CloudFront CDN. We’ll point its origin at the AWS S3 Bucket, set it to support only secure connections, and accept only clients new enough to support SNI. It is important to understand that preventing people from viewing your site over an unsecured connection is rather opinionated; you might like to allow both, but I prefer to gently shove people towards greater security. It is, perhaps, even more important to select SNI within AWS CloudFront, unless you want to pay an elephant-sized fee and forgo coffee for many years to come.
First, you’ll need to upload your shiny new SSL certificate to AWS IAM. I only seemed to be able to do this via the command-line AWS CLI, which has to be set up separately; please wave at me if you find it can be done within the UI.
aws iam upload-server-certificate --server-certificate-name www.example.com --certificate-body file://www_example_com.crt --private-key file://www.example.com.key --certificate-chain file://www_example_com.ca-bundle.pem --path /cloudfront/www.example.com/
Forget not the file:// prefixes; things will get weird if you do.
Then, in AWS CloudFront, create a new Web Distribution:
Origin Settings—Origin Domain Name: for this, do not use the autocomplete address! instead, use the public-facing Endpoint listed in AWS S3 Static Website Hosting
Default Cache Behaviour Settings—Viewer Protocol Policy: Redirect HTTP to HTTPS
Distribution Settings—Alternate Domain Names (CNAMES): www.example.com; SSL Certificate: Custom SSL Certificate (stored in AWS IAM): www.example.com; Custom SSL Client Support: Only Clients that Support Server Name Indication (SNI) (very important! unless you don’t even like coffee and don’t mind spending the hundreds of dollars); Default Root Object: index.html
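For the curious, the SSL-related corner of the equivalent distribution config, as the CloudFront API sees it, looks roughly like the fragment below. This is an abbreviated sketch (a full DistributionConfig also needs origins, cache behaviour settings, and more), and the IAMCertificateId shown is a made-up placeholder:

```
"ViewerCertificate": {
  "IAMCertificateId": "ASCACKCEVSQ6CEXAMPLE1",
  "SSLSupportMethod": "sni-only",
  "MinimumProtocolVersion": "TLSv1"
},
"Aliases": {
  "Quantity": 1,
  "Items": ["www.example.com"]
},
"DefaultRootObject": "index.html"
```

The "sni-only" value is the API spelling of the very important SNI option above.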
Be not dismayed if the creation of your new Distribution takes a while to complete; it’s copying it all over the world! Make a cup of coffee and contemplate rainbows and other colourful things in the interim.
the roadsigns, which AWS Route 53 erects
Assuming you’ve set up AWS Route 53 for your domain, which is admittedly quite an assumption, all that remains is to point the records in the right directions. Believe it or not, I’m not getting paid for my frequent mentions of AWS. The only reason I’m linking so much is I like consistency and the effect of the pretty, underlined visual distinction.
Record Sets for Hosted Zone: example.com
example.com A Record—Type: A - IPv4 address; Alias: Yes; Alias Target: Bucket the Second Endpoint
www.example.com CNAME Record—Type: CNAME - Canonical name; Alias: No; Value: AWS CloudFront Distribution Domain Name
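The same two records, expressed as a change batch for aws route53 change-resource-record-sets, would look something like the sketch below. Note that the alias HostedZoneId is not your own zone’s ID but the fixed one for S3 website endpoints in the Bucket’s region (the value shown is, I believe, the one for us-east-1, so do check the AWS docs for yours), and dxxxxxxxxxxxxx.cloudfront.net is a placeholder for your real Distribution Domain Name:

```
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "example.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z3AQBSTGFYJSTF",
          "DNSName": "s3-website-us-east-1.amazonaws.com.",
          "EvaluateTargetHealth": false
        }
      }
    },
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com.",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "dxxxxxxxxxxxxx.cloudfront.net" }
        ]
      }
    }
  ]
}
```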
This sets up http://example.com to redirect to http://www.example.com even if you’ve got MX Records or others at the Zone Apex, http://www.example.com/some-path-less-travelled to redirect to https://www.example.com/some-path-less-travelled, and all of those https:// to be delivered in glorious security.
the self-doubt, which is free
The question could well be asked of why to secure content which is both static and publicly-accessible. If you’re not concerned about having URLs encrypted, or tempted by Google’s HTTPS as a ranking signal, this could be considered a matter of taste. But also, it feels pleasant to play even a small part in making the internet a more secure place. And it’s an interesting journey—or at least, I found it so.
ryespy: IMAP, FTP, Amazon S3, Google Cloud Storage, Google Drive, Rackspace Cloud Files listener for Redis (Sidekiq/Resque).
I announce Ruby gem ryespy 1.1.1—a release fixing the Google Drive listener to increase the max files from the default 100 to 1000 (see Changelog).
- [#2] Google Drive listener fix changing max files from 100 to 1000; outstanding [#3] to remove limit entirely (thank you, @Lewis-Clayton)