This week Amazon released its new digital preservation platform Glacier. It is similar to the S3 storage service, but optimized for long-term, low-access storage. You pay a penny per GB per month, and you accept that access will be slow (four hours or more) and expensive (12 cents/GB, with free access to 5% of your content each month). I’ve been storing family digital assets on S3 as a remote backup, and Glacier will save me a few bucks each month. I’ll only need to access the content if my local backups fail, so I can accept the barriers to access. And, of course, at work we’re interested in low-cost off-site replication. So, let’s check it out.
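To put numbers on "a few bucks" (a back-of-the-envelope sketch; the rates are the 2012 list prices quoted above, and 50 GB is just an assumed archive size):

```javascript
// Back-of-the-envelope monthly storage cost. Rates are the 2012 list
// prices: S3 standard at roughly $0.125/GB-month (approximate) and
// Glacier at $0.01/GB-month. The 50 GB archive size is an assumption.
const GB_STORED = 50;
const S3_RATE = 0.125;      // $/GB-month, approximate 2012 S3 rate
const GLACIER_RATE = 0.01;  // $/GB-month, Glacier

const s3Monthly = GB_STORED * S3_RATE;
const glacierMonthly = GB_STORED * GLACIER_RATE;

console.log('S3: $' + s3Monthly.toFixed(2) + '/mo, Glacier: $' +
            glacierMonthly.toFixed(2) + '/mo, saving $' +
            (s3Monthly - glacierMonthly).toFixed(2) + '/mo');
```

At that scale the bill drops from a few dollars a month to pocket change, which is exactly the trade Glacier is offering for slow access.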
The initial offering from Amazon includes a web management console, Java and .NET SDKs, and a REST API, but no user-friendly client. Third parties are starting to release clients, though, and there's already enough to work with; within a few days there will be more.
Getting started is easy: just activate Glacier in your AWS account. The data model is simple: “vaults” contain “archives”, which as far as I’m concerned are simply files. You can create vaults through the web console and tie them into Amazon’s SNS notification service, but that’s as far as you can get; to upload a file you need a client.
I started with the glacierFreezer command-line client, which is based on the Java SDK. It uses an Amazon SimpleDB domain to store information about your archives, so you need to create a domain for it first. Then gather your access key and secret key (from the “Security Credentials” tab in the web console) and run it.
Up the file goes, and glacierFreezer records the results in the SimpleDB domain.
That’s a lot of information you’re going to need to keep track of, because Glacier won’t keep track of it for you. If you want to be able to restore an archive to a local file with the same name as it had before you uploaded it, you need to remember the mapping of the archiveID to the fileName.
At this point I was blocked again, since glacierFreezer doesn’t yet have the functionality to do anything with an archive in a vault (give it a few days). The day after the upload, once Glacier had done its daily job of generating inventories, I could at least see in the web console that the vault had been populated.
We’ve got an archive! The file I uploaded was only a few bytes, so the 32 KB size presumably reflects the block size of Glacier’s file system.
This morning I looked again for Glacier clients, and found that the Node.js project node-awssum had added Glacier to its list of supported Amazon APIs. I’ve been meaning to play with Node.js for a while, so I jumped on it. I installed Node.js and its package manager npm according to these instructions, then installed node-awssum (and the required package fmt) with a lovely simple npm one-liner.
The Glacier examples that come with node-awssum cover fetching vault descriptions and such, but not the job-oriented tasks that I need at this point. To fetch an inventory of a vault, or an archive from that vault, you need to initiate a job and wait for Glacier to let you know it’s done (which they say takes four hours). Not to worry, though, node-awssum is easy to work with. I copied one of the examples and created a script, inventory-retrieval.js, that initiates an inventory-retrieval job.
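At the REST level (per Amazon’s API documentation), that script boils down to a single small POST. Here is a sketch of just the request shape; the vault name is an assumption, and “-” in the path means “the account that owns the credentials”:

```javascript
// What an InitiateJob call boils down to at the REST level, per
// Amazon's Glacier API docs: POST /{accountId}/vaults/{vault}/jobs
// with a small JSON body. '-' stands for the authenticated account.
// The vault name 'test-vault' is an assumption for illustration.
function initiateJobRequest(vaultName, jobParams) {
  return {
    method: 'POST',
    path: '/-/vaults/' + vaultName + '/jobs',
    body: JSON.stringify(jobParams)
  };
}

const req = initiateJobRequest('test-vault', { Type: 'inventory-retrieval' });
console.log(req.method + ' ' + req.path + ' ' + req.body);
```

An archive retrieval uses the same endpoint with a body of `{ Type: 'archive-retrieval', ArchiveId: ... }`, which is why the client libraries treat both as variations on one “initiate job” operation.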
I run that and get a successful response.
So, I’ve successfully created the job. And now I wait, savoring the full experience of Glacier’s slow retrieval, which is going to save me so much money compared to S3. I’ll update this post when I get Glacier’s notification that it’s complete. Meanwhile, I’ll contemplate David Rosenthal’s analysis of Glacier’s costs.
The job took just over four hours; I didn’t get a notification (have to look into that; probably my misconfiguration), but the job description shows the completion time. The next step is to retrieve the job output, and it turns out that node-awssum hasn’t finished this function: it generates the URI for a job description rather than for the job output. The code was easy to patch, so I was able to retrieve the inventory.
So it does have my original file name, but only in a text description. The response also gives me the archive ID of my file (which I had anyway, because glacierFreezer saved it for me; nice to see that they agree). Just to close the loop, I’ve initiated an archive-retrieval job.
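Recovering an archive ID from an inventory is then a matter of walking its JSON. The inventory object below is mocked up, but its fields follow the format Amazon documents for the real response:

```javascript
// Finding an archive ID in a vault inventory. The inventory here is a
// mocked-up stand-in; its fields (ArchiveId, ArchiveDescription,
// CreationDate, Size, SHA256TreeHash) follow Amazon's documented
// format, but the values are invented for illustration.
const inventory = {
  VaultARN: 'arn:aws:glacier:us-east-1:123456789012:vaults/test-vault',
  InventoryDate: '2012-08-23T00:00:00Z',
  ArchiveList: [
    {
      ArchiveId: 'FAKE-ARCHIVE-ID-1234',
      ArchiveDescription: 'test-file.txt',
      CreationDate: '2012-08-21T12:00:00Z',
      Size: 36,
      SHA256TreeHash: '(hex digest here)'
    }
  ]
};

function findArchiveByDescription(inv, description) {
  return inv.ArchiveList.filter(function (a) {
    return a.ArchiveDescription === description;
  })[0];
}

console.log(findArchiveByDescription(inventory, 'test-file.txt').ArchiveId);
```

Note that the original file name survives only if you put it in the description at upload time; Glacier itself has no notion of file names.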
I posted an issue about the get-job-output problem in node-awssum. A couple of hours later I heard that it had been fixed in master; I tried it, and it works. God, I love open source.
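The fix is essentially one path segment: per Amazon’s REST documentation, a job’s description and its output are separate resources. A sketch, with made-up vault and job identifiers:

```javascript
// Per Amazon's Glacier REST docs, a job *description* and its *output*
// are different resources, one path segment apart. node-awssum's
// GetJobOutput was generating the first URI when it needed the second.
// The vault name and job ID below are made up.
function describeJobPath(vaultName, jobId) {
  return '/-/vaults/' + vaultName + '/jobs/' + jobId;    // job status
}

function getJobOutputPath(vaultName, jobId) {
  return describeJobPath(vaultName, jobId) + '/output';  // job results
}

console.log(describeJobPath('test-vault', 'FAKE-JOB-ID'));
console.log(getJobOutputPath('test-vault', 'FAKE-JOB-ID'));
```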
Update 3 (next day)
The archive-retrieval job finished after 4½ hours, and I can retrieve my file. I get a SHA-256 hash in a response header that lets me verify the content. For some reason Amazon doesn’t honor my byte-range request when I try to retrieve less than the whole file; perhaps that’s because the file is so short, just 36 bytes. I’ll try again once I’ve uploaded something bigger. And it turns out the notifications were coming through to my email after all; dunno how I overlooked them. So all is good.
It’s easy to imagine a full-scale retrieval process that would manage the initiation of retrieval jobs, monitor the notification stream (which uses Amazon’s Simple Notification Service and can therefore push notifications over a variety of protocols), and fetch the output when it receives notification that a job is ready. Amazon says that output is available for at least 24 hours after a job completes, so you would want to chunk jobs so that you never retrieve more content than you can download in a day. You would also need the somewhat convoluted calculations required to avoid overrunning your 5% monthly free download allowance.
I’m currently using S3 for offsite backup of my personal digital archive, and moving it to Glacier is a no-brainer. I don’t expect ever to retrieve this stuff, since I keep multiple local copies: it’s fire insurance. After a disaster, I’d be willing to pay the download costs to retrieve the family photos. In my professional role (where this is all speculative), I’d take David Rosenthal’s concerns seriously and avoid lock-in: as long as we’ve got local copies, we could move our content to a competitor of Amazon’s without incurring the retrieval costs.
Finally, my first experience with Node.js has been great, and I’ll definitely be putting some time into learning more.