For years, I’ve been accumulating an archive of files at home on an external drive connected to a Mac mini. These are files that I do not expect to need in the future, but at the same time wouldn’t really be happy about losing: for example, snapshots of filesystems of computers I’ve retired, some source media, etc. All in all, the drive contains some 300 GB of data.

Since hard drives are prone to dying, especially in a hot climate like southern Spain, I’ve been using the excellent Arq from Haystack Software to keep an offsite copy of these files stored in Amazon Glacier, costing me about $3 per month.

Then a few weeks ago, I got to thinking that I’d probably be fine just storing these files exclusively in Amazon Glacier, and doing away with the local hard drive altogether. The risk I’d be exposed to is the probability that Glacier loses a file, multiplied by the probability that I’d actually need that particular file.

So with that in mind, I formulated a plan to make it happen: using my other server, a fat-piped Mac mini hosted with one of the Mac mini colocators, I would use Arq to restore my 300 GB of data there, and then re-upload it all using Transmit to an Amazon S3 bucket, configured with a lifecycle rule to immediately transfer it all into Glacier.

(You might be wondering why I didn’t just leave Arq’s data in place and ditch the source drive. First, as far as I can tell, Arq is a tool to back up, not move, data; i.e., it seems to prefer the local data to stick around. Second, Arq encrypts data before uploading to Glacier, and so I’d forever be dependent on Arq to restore it. You might also be wondering why I didn’t just upload the local data to Amazon S3.)
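For the curious, the lifecycle rule mentioned in the plan can be sketched in a few lines. This is a minimal, hedged example using boto3 (the bucket name `archive-bucket` and rule ID are placeholders, not from the original post): a rule with `Days: 0` tells S3 to transition every object to the Glacier storage class at the next lifecycle evaluation.

```python
import json

# A sketch of the S3 lifecycle configuration described above.
# "Days": 0 means objects become eligible for transition to Glacier
# as soon as the lifecycle rules are next evaluated.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-to-glacier",   # placeholder rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},     # empty prefix: apply to all objects
            "Transitions": [
                {"Days": 0, "StorageClass": "GLACIER"}
            ],
        }
    ]
}

# Applying it would look roughly like this (requires AWS credentials):
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="archive-bucket",          # placeholder bucket name
#       LifecycleConfiguration=lifecycle_config,
#   )

print(json.dumps(lifecycle_config, indent=2))
```

Today the same effect can be had by setting the rule up in the S3 console, but the shape of the configuration is the same.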