
Tor Installation OS X Step Three

I recently wrote a letter to Tor, containing some thoughts about Tor Installation OS X ‘Step Three’, regarding the verification of Tor installations on OS X when installed using Homebrew. At their request, I have created a ticket; if you have any thoughts or improvements, please feel free to add your comments there. :)

Dear Tor,

On ‘Step Three’, it says

Unfortunately, Homebrew does not come with integrated verification for downloads, and anyone could submit a modified Tor! Currently, we don’t have good instructions on how to verify the Tor download on Mac OSX. If you think you do, please let us know!

Is this up-to-date? Homebrew has the ability to checksum both bottles and source packages, and these checksums appear to be specified in the build recipe for Tor.
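One way to see those embedded checksums for oneself, rather than taking my word for it (a sketch; brew cat simply prints the formula, and the sha256 lines within it are what Homebrew verifies downloads against):

```shell
# Print the Tor formula and pick out its embedded checksums.
brew cat tor | grep sha256
```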

Modifying my local /usr/local/Library/Formula/tor.rb and purposely corrupting the checksums seemed to yield the desired behaviour (after clearing the caches), with the bottle installation being skipped because of its failed checksum, and the source installation then failing because of its failed checksum.

Admittedly, this does not make it easy for the user to verify the installation themselves, and requires a large amount of trust in Homebrew. However, presuming we trust the package manager itself to install from the locally downloaded package, perhaps it is possible for the concerned user to skip the bottle installation and force a source installation (slower, of course, but not massively so) using something like:

brew install tor --build-from-source

Then, observe the output for the location of the cache (which could also be guessed from the version reported by brew info tor), fetch the signature from the Tor website, and verify:

curl -o tor-sig.asc
gpg --verify tor-sig.asc /Library/Caches/Homebrew/tor-

However, this also requires GPG, of course, which in turn can be installed using Homebrew or GPGTools (binary package), so perhaps this doesn’t leave the user much more at ease. Perhaps the latter consideration doesn’t cause too much worry, however, as it appears to be the approach taken in the instructions for verifying signatures on OS X. Manually verifying the SHA-256 checksum as well (which is what Homebrew appears to do) could give a little more confidence:

shasum -a 256 /Library/Caches/Homebrew/tor-

However, unlike the SHA-256 sums provided for the browser, I cannot seem to find a published list of these. But then, arguably it’s a small download anyway, so if we don’t mind duplicating the download work:

curl | shasum -a 256

This matches the version Homebrew cached, which increases confidence.

By this point, however, we could just as easily warm the source cache for Homebrew ourselves, which would block installation if the checksum does not match that expected by Homebrew:

curl -o /Library/Caches/Homebrew/tor-

This does, of course, require knowledge of which version is about to be installed, but brew info tor suffices for that.

I suppose it comes down to whether I trust Homebrew to perform the installation, and whether I trust its embedded checksums to be accurate. If I don’t trust the former, I probably shouldn’t be using it for installations at all, although admittedly verifying my Homebrew installation itself is a whole other issue (here, too, some confidence could be gained from the knowledge that it is a Git repository, by doing something like cd $(brew --prefix) && git remote -v && git pull, though this presumes the --prefix output is accurate, and so on). If I don’t trust its embedded checksums to be accurate, perhaps an approach balancing concern with usability would be:

brew info tor
# observe stable version
curl "$BREW_TOR_VERSION.tar.gz" -o "/Library/Caches/Homebrew/tor-$BREW_TOR_VERSION.tar.gz"
curl "$BREW_TOR_VERSION.tar.gz.asc" -o tor-sig.asc
gpg --verify tor-sig.asc "/Library/Caches/Homebrew/tor-$BREW_TOR_VERSION.tar.gz"
# observe good signature, leaving checksum checking to Homebrew, as we’ve supplied the source
brew install tor --build-from-source
# observe that cache was used and nothing exploded

That said, it might be more convenient to use brew fetch for the source.

Perhaps there is a better way to accomplish this, particularly the last step. But hopefully it is better than nothing for the concerned user.


secure static websites using AWS S3, SSL, AWS CloudFront, and AWS Route 53

I still very much enjoy static websites. Building websites with dynamically-created pages, databases, and more potential themes than I will ever have time to browse is all very well, but sometimes I just want something to be rather more bicycle than car. There is something pleasing about creating a design, generating some pages, uploading them to something like AWS S3, and then forgetting all about it whilst you wonder how many distinct sounds can be heard at any one time along the entirety of the banks of some river beside which you happen to be strolling.

It is hard to beat a static website in terms of hosting cost or uptime. Certainly, I can use any of a number of quality hosting companies, perhaps with a higher cost, or I can set things up myself with a reliable load-balancer or two and whatever method of monitoring I am favouring this season. And if I’ve got something requiring such then that is exactly what I should do, and I do it gladly. But, glorious though a finely-crafted fountain pen is, sometimes a pencil is not only sufficient but also beautiful in its simplicity.

These days, security is perhaps becoming ever more important, as the percentage of our lives committed to the swirling pixels grows and our concerns about privacy and its violation increase. But security is also becoming easier to provision, and cheaper. So, without further ado, I present a brief recipe for one particular approach to hosting secure static websites. I am indebted to various manuals, blog posts, and other digitally-distributed missives from which I have benefitted through various stages of originally putting this into practice—but I credit none of them, largely because I can’t remember when and what and who I read. This specific setup is unashamedly AWS-centric, not because it can’t be done with other services, but because this group of services is both cheap and, if personification of digital services be permitted, cheerful.

the content, which in AWS S3 doth dwell

Obviously, you have lovely content. In AWS S3, create two Buckets: one for www., one for non-www.. As a habit from when I used to host insecure static websites, I name these exactly according to the domain name. Let us presume you have acquired a domain from a purveyor of domain names.

  • Bucket the First:

  • Permissions: bucket policy allowing views from the honest public (consult the Sample Bucket Policies)

  • Static Website Hosting: Enable website hosting; Index Document: index.html; make a note of the Endpoint for later

  • Bucket the Second:

  • Static Website Hosting: Redirect all requests to another host name; Redirect all requests to:

Add lots of juicy content to Bucket the First, and leave Bucket the Second empty; it will redirect properly to Bucket the First (only via the insecure route, but I’m alright with that, at least for now).
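If the AWS CLI is set up, the bucket policy for Bucket the First can be applied with something like this sketch (www.example.com is a placeholder bucket name; the policy shape follows the public-read example among the Sample Bucket Policies):

```shell
# Allow the honest public to GET objects from Bucket the First.
# "www.example.com" is a placeholder bucket name.
aws s3api put-bucket-policy --bucket www.example.com --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::www.example.com/*"
  }]
}'
```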

the certificate, for which i used OpenSSL

Now to generate the SSL certificate, which will be uploaded for AWS CloudFront to use. I’m using OpenSSL for this, but variations are possible. You’ll need to obtain a signed certificate from a Certificate Authority. The cost can vary between that of a few coffees per year and basically a whole coffee machine, complete with a grinder for the beans. One of the cheapest forms of validation, in which you are the least vetted, is a simple verification that you own the domain. Namecheap is one of many places which sell these, as well as the whole-bean varieties. I think they’re also pretty good for obtaining domains.

openssl genrsa -out 2048
openssl req -new -key -out
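The two commands spelled out in full with placeholder filenames (www_example_com.key and its friends are illustrative; substitute your own domain, and the -subj value merely pre-answers the interactive prompts):

```shell
# Generate a 2048-bit RSA private key (placeholder filename).
openssl genrsa -out www_example_com.key 2048
# Create the certificate signing request; -subj pre-fills the subject
# so the command runs without interactive prompts (placeholder CN).
openssl req -new -key www_example_com.key \
  -subj "/CN=www.example.com" -out www_example_com.csr
```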

Keep the .key very secret; don’t even tell your dog. Send the .csr when asked by the merchant of SSLs, and verify your domain in whichever manner pleases them. When you’ve done this, you should be the proud owner of an SSL certificate. If you’re not because you’re not proud, it is fine to proceed, but if you’re not because you don’t have a certificate, there’s not much point in proceeding until you have one.

You probably received, or were linked to, a number of files containing Root and Intermediate certificates from the Certificate Authority. These need to be combined into a single file, and the order in which the combination is made is important. You probably received a link to specific instructions when you received your certificate. For Comodo PositiveSSL, I had to do something like:

cat COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt >

the distribution, which AWS CloudFront distributes

Next is to distribute your website using the AWS CloudFront CDN. We’ll point its source at the AWS S3 Bucket, and set it to only support secure connections, and of those only those new enough to support SNI. It is important to understand that preventing people from being able to view your site over an unsecured connection is rather opinionated; you might like to allow both, but I prefer to gently shove people towards greater security. It is, perhaps, even more important to select SNI within AWS CloudFront, unless you want to pay an elephant-sized fee and forgo coffee for many years to come.

First, you’ll need to upload your shiny new SSL certificate to AWS IAM. I only seemed to be able to do this via the command-line API, which has to be set up separately; please wave at me if you find it can be done within the UI.

aws iam upload-server-certificate --server-certificate-name --certificate-body file://www_example_com.crt --private-key file:// --certificate-chain file:// --path /cloudfront/

Forget not the file:// prefixes; things will get weird if you do.
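For completeness, a filled-in sketch of that upload (the certificate name, filenames, and path here are all illustrative placeholders):

```shell
# Upload the certificate to AWS IAM for CloudFront's use.
# Every name and filename here is a placeholder.
aws iam upload-server-certificate \
  --server-certificate-name www_example_com \
  --certificate-body file://www_example_com.crt \
  --private-key file://www_example_com.key \
  --certificate-chain file://www_example_com_chain.crt \
  --path /cloudfront/www_example_com/
```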

  • Create a new Web Distribution:

  • Origin Settings: Origin Domain Name: for this, do not use the autocomplete address! instead, use the public-facing Endpoint listed in AWS S3 Static Website Hosting

  • Default Cache Behaviour Settings: Viewer Protocol Policy: Redirect HTTP to HTTPS

  • Distribution Settings: Alternate Domain Names (CNAMEs):; SSL Certificate: Custom SSL Certificate (stored in AWS IAM):; Custom SSL Client Support: Only Clients that Support Server Name Indication (SNI) (very important! unless you don’t even like coffee and don’t mind spending the hundreds of dollars); Default Root Object: index.html

Be not dismayed if the creation of your new Distribution takes a while to complete; it’s copying it all over the world! Make a cup of coffee and contemplate rainbows and other colourful things in the interim.

the roadsigns, which AWS Route 53 erects

Assuming you’ve set up AWS Route 53 for your domain, which is admittedly quite an assumption, all that remains is to point the records in the right directions. Believe it or not, I’m not getting paid for my frequent mentions of AWS. The only reason I’m linking so much is I like consistency and the effect of the pretty, underlined visual distinction.

  • Record Sets for Hosted Zone:

  • A Record: Type: A - IPv4 address; Alias: Yes; Alias Target: Bucket the Second’s Endpoint

  • CNAME Record: Type: CNAME - Canonical name; Alias: No; Value: the AWS CloudFront Distribution Domain Name

This sets up the zone apex to redirect to the www domain, even if you’ve got MX Records or others at the Zone Apex; insecure http:// requests to redirect to https://; and all of those https:// responses to be delivered in glorious security.
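For those who would rather not click, a sketch of the CNAME record via the AWS CLI (the hosted-zone ID, domain, and distribution domain name are all placeholders; the A-record alias to the Bucket endpoint is easier done in the UI):

```shell
# Point www at the CloudFront distribution via a CNAME record.
# Z1EXAMPLE, www.example.com, and d1234abcd.cloudfront.net are placeholders.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "d1234abcd.cloudfront.net"}]
      }
    }]
  }'
```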

the self-doubt, which is free

The question could well be asked of why to secure content which is both static and publicly-accessible. If you’re not concerned about having URLs encrypted, or tempted by Google’s HTTPS as a ranking signal, this could be considered a matter of taste. But also, it feels pleasant to play even a small part in making the internet a more secure place. And it’s an interesting journey—or at least, I found it so.

ryespy 1.1.1 Ruby gem released

ryespy: Redis (Sidekiq/Resque) IMAP, FTP, Amazon S3, Google Cloud Storage, Google Drive, Rackspace Cloud Files listener.

I announce Ruby gem ryespy 1.1.1, a release fixing the Google Drive listener by increasing the max files from the default 100 to 1000 (see the Changelog).


  • [#2] Google Drive listener fix changing max files from 100 to 1000; outstanding [#3] to remove limit entirely (thank you, @Lewis-Clayton)



—some people draw inspiration for comics from real events in their lives.
it helps if you know a lot of stickpeople.


—9 out of 10 customers said they’d recommend us!
—you do realise that figure means nothing without a sample size? how many did you ask?
—um. 10.
the 10th said no.
we became disheartened!