
A Refresher Course on Disaster Recovery with AWS

IT infrastructure is the hardware, network, services, and software required for enterprise IT. It is the foundation that enables organizations to deliver IT services to their users. Disaster recovery (DR) is the practice of preparing for and recovering from the natural and people-related disasters that disrupt the IT infrastructure behind critical business functions. Natural disasters include earthquakes, fires, etc. People-related disasters include human error, terrorism, etc. Business continuity differs from DR in that it involves keeping all aspects of the organization functioning, not just IT infrastructure.

When planning for DR, companies must establish a recovery time objective (RTO) and recovery point objective (RPO) for each critical IT service. RTO is the maximum acceptable time within which an IT service must be restored. RPO is the maximum acceptable amount of data loss, measured in time; an RPO of one hour, for example, requires backups or replication at least hourly. Companies establish both RTOs and RPOs to mitigate financial and other losses to the business, and then design and implement DR plans to effectively and efficiently recover the IT infrastructure necessary to run critical business functions.

For companies with corporate datacenters, the traditional approach to DR involves duplicating IT infrastructure at a secondary location to ensure available capacity in a disaster. The key downside is IT infrastructure must be bought, installed and maintained in advance to address anticipated capacity requirements. This often causes IT infrastructure in the secondary location to be over-procured and under-utilized. In contrast, Amazon Web Services (AWS) provides companies with access to enterprise-grade IT infrastructure that can be scaled up or down for DR as needed.

The four most common DR architectures on AWS are:

  • Backup and Restore ($) – Companies can use their current backup software to replicate data into AWS, using Amazon S3 for short-term archiving and Amazon Glacier for long-term archiving. In the event of a disaster, data can be made available on AWS infrastructure or restored from the cloud back onto an on-premises server.
  • Pilot Light ($$) – While Backup and Restore focuses on data, Pilot Light extends to applications. Companies provision only the core infrastructure needed for critical applications. When disaster strikes, Amazon Machine Images (AMIs) and other automation services are used to quickly provision the rest of the production environment.
  • Warm Standby ($$$) – Taking the Pilot Light model one step further, warm standby creates an active/passive cluster. A minimal version of the environment is always running in AWS; when needed, it rapidly scales up to meet full production demands, giving near-100% uptime with minimal downtime during failover.
  • Hot Standby ($$$$) – Hot standby is an active/active cluster with both cloud and on-premises components. Using weighted DNS load balancing, IT determines how much application traffic to process in-house and on AWS. If a disaster or spike in load occurs, more or all of it can be routed to AWS with auto-scaling, as sketched below.
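
For the hot standby pattern, the DNS weighting is the control knob. Below is a minimal sketch of shifting traffic with Amazon Route 53 weighted records using boto3; the hosted zone ID, record names, endpoints, and weights are hypothetical placeholders rather than values from the original post.

import boto3

route53 = boto3.client("route53")

def set_weight(zone_id, record_name, set_identifier, target, weight):
    # UPSERT one weighted record; traffic share is proportional to its
    # weight relative to all records sharing the same name and type.
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "CNAME",
                    "SetIdentifier": set_identifier,  # distinguishes the weighted records
                    "Weight": weight,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": target}],
                },
            }]
        },
    )

# Normal operation: 70% of traffic in-house, 30% on AWS.
set_weight("Z123EXAMPLE", "app.example.com", "onprem", "dc.example.com", 70)
set_weight("Z123EXAMPLE", "app.example.com", "aws", "elb.example.com", 30)

# During a disaster, drain the on-premises side and let AWS auto-scale:
# set_weight("Z123EXAMPLE", "app.example.com", "onprem", "dc.example.com", 0)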

In a non-disaster environment, warm standby DR is not scaled for full production but is fully functional. To help absorb/justify the cost, companies can use the DR site for non-production work, such as quality assurance, testing, etc. For hot standby DR, cost is determined by how much production traffic is handled by AWS in normal operation. In the recovery phase, companies pay only for the additional capacity they use, and only for the duration the DR site runs at full scale. In hot standby, companies can further reduce the costs of their “always on” AWS servers with Reserved Instances (RIs).

Smart companies know disaster is not a matter of if, but when. According to a study done by the University of Oregon, every dollar spent on hazard mitigation, including DR, saves companies four dollars in recovery and response costs. In addition to cost savings, smart companies also view DR as critical to their survival. For example, 51% of companies that experienced a major data loss closed within two years (Source: Gartner), and 44% of companies that experienced a major fire never re-opened (Source: EBM). Again, disaster is not a matter of if, but when. Be ready.

-Josh Lowry, General Manager – West


Storage Gateway with Amazon Web Services

Backup and disaster recovery often require solutions that add complexity and cost to properly synchronize your data and systems. Amazon Web Services™ (AWS) helps drive down this cost and complexity with a number of services. Amazon S3 provides a highly durable (99.999999999%) storage platform for your backups, replicating your data across multiple Availability Zones (AZs) to provide the ultimate peace of mind for your data. AWS also provides an ultra-low-cost service for long-term cold storage that is aptly named Glacier. At $0.01 per GB/month, this service will force you to ask, “Why am I not using AWS today?”

AWS has developed the AWS Storage Gateway to make your backups secure and efficient.  For only $125 per backup location per month, you will have a robust solution that provides the following features:

  • Secure transfers of all data to AWS S3 storage
  • Compatible with your current architecture – there is no need to call up your local storage vendor for a special adapter or firmware version to use Storage Gateway
  • Designed for AWS – this provides seamless integration of your current environment with AWS services

AWS Storage Gateway and Amazon EC2 (snapshots of machine images) together provide a simple cloud-hosted DR solution. Amazon EC2 allows you to quickly launch images of your production environment in AWS when you need them. The AWS Storage Gateway integrates seamlessly with S3 to provide a robust backup and disaster recovery solution that fits almost any budget.
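
As a concrete illustration of the recovery step, here is a minimal boto3 sketch of launching an instance from a pre-built AMI of a production server; the AMI ID, instance type, and subnet ID are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a recovery instance from the image captured from production.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI of the production server
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # placeholder recovery subnet
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Purpose", "Value": "disaster-recovery"}],
    }],
)
print("Recovery instance:", response["Instances"][0]["InstanceId"])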

-Matt Whitney, Sales Executive


AWS S3-Glacier Lifecycle Management

Not long ago, 2nd Watch published an article on Amazon Glacier. In it, Caleb provides a great primer on the capabilities of Glacier and its cost benefits. Now that he’s taken the time to explain what it is, let’s talk about possible use cases for Glacier and how to avoid some of the pitfalls. As Amazon says, “Amazon Glacier is optimized for data that is infrequently accessed and for which retrieval times of several hours are suitable.” What immediately comes to mind are backups, but most AWS customers handle those through EBS snapshots, which can restore in minutes, while a Glacier recall can take hours. Rather than looking at the obvious, consider these use cases for Glacier archival storage: compliance (regulatory or internal process), conversion of paper archives, and application retirement.

Compliance often forces organizations to retain records and backups for years; customers often mention a seven-year retention policy based on regulatory compliance. In seven years, a traditional (on-premises) server can be replaced at least once, operating systems are upgraded several times, applications are upgraded or modified, and backup hardware/software is changed. Add to that all the media that would need to be replaced or upgraded, and you have every IT department’s nightmare: needing to either maintain old tape hardware or convert all the old backup tapes to the new hardware format (and hope too many haven’t degraded over the years). Glacier removes the need to worry about the hardware and the media, and the storage fees (currently 1¢ per GB/month in US-East) are tiny compared to the cost of media and storage on premises. Upload your backup file(s) to S3, set up a lifecycle policy, and you have greatly simplified your archival process while maintaining regulatory compliance.

So how do customers create these lifecycle policies so their data automatically moves to Glacier? From the AWS Management Console, once you have an S3 bucket, there is a property called ‘Lifecycle’ that can manage the migration to Glacier (and deletion as well, if desired). Add a rule (or rules) to the S3 bucket that migrates files based on a filename prefix, how long since their creation date, or how long from an effective date (perhaps 1 day from the current date for things you want to move directly to Glacier). For the example above, customers might move backup files to S3, transition them to Glacier after 30 days, and delete them after seven years, as in the rule and sketch below.

[Screenshot: S3 Lifecycle rule configuration]
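
The same rule can be defined programmatically instead of through the console. Here is a minimal sketch using boto3’s put_bucket_lifecycle_configuration call; the bucket name and prefix are hypothetical placeholders.

import boto3

s3 = boto3.client("s3")

# Transition objects under backups/ to Glacier after 30 days and
# expire (delete) them after roughly seven years.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},  # ~7 years
        }]
    },
)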

Before we go too far and set up lifecycles, however, one major point should be highlighted: Amazon charges customers based on GB/month stored in Glacier plus a one-time fee for each file moved from S3 to Glacier. Moving a terabyte of data from S3 to Glacier could cost little more than $10/month in storage fees; however, if that data is made up of 1 KB log files, the one-time fee for that migration can be more than $50,000! While this is an extreme example, consider data management before archiving. If at all possible, compress the files into a single archive (zip/tar/rar), upload those compressed files to S3, and then archive to Glacier.
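
The arithmetic behind that warning is easy to verify using the request pricing cited later in this compilation ($0.05 per 1,000 requests) and the 1¢ per GB/month storage fee:

# Glacier cost sketch: single archive vs. ~1 billion tiny files.
TRANSITION_FEE = 0.05 / 1000   # dollars per object moved to Glacier
STORAGE_FEE = 0.01             # dollars per GB/month

gb_per_tb = 1024

# Case 1: one terabyte compressed into a single archive.
print(1 * TRANSITION_FEE)            # $0.00005 one-time fee
print(gb_per_tb * STORAGE_FEE)       # ~$10.24/month storage

# Case 2: the same terabyte as 1 KB log files.
objects = gb_per_tb * 1024 * 1024    # ~1.07 billion files
print(objects * TRANSITION_FEE)      # ~$53,687 one-time fee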

-Keith Homewood, Cloud Architect


Amazon Glacier: Your 1¢ Cloud-based Backup Solution

If you haven’t heard of Amazon Glacier, you need to check it out. As its name implies, you can think of Glacier as “frozen” storage. When considering the speed of EBS and S3, Glacier by comparison moves glacially slow. Consider Glacier as essentially a cloud-based archival solution that works similarly to old-style tape backup. In the past, backups first ran to tape, then were stored locally in case of immediate access requirements, and were then taken off-site once a certain date requirement was met (once a week, once a month, etc.). Glacier essentially works as the last stage of that process.

When a snapshot in S3, for instance, gets to be a month old, you can instruct AWS to automatically move that object to Glacier. Writing it to Glacier happens pretty much immediately, though being able to see that object on your Glacier management console can take 3-5 hours. If you need it back, you’ll issue a request, but that can take up to 24 hours to be resolved. Amazon hasn’t released the exact mechanics of how they’re storing the data on their end, but large tape libraries are a good bet since they jibe with one of Glacier’s best features: its price. That’s only $0.01 per gigabyte per month. Its second best feature is 11 nines worth of “durability” (which refers to protection against data loss) and 4 nines worth of “availability” (which refers to how reliably the data can be accessed). That’s 99.999999999% for those who like the visual.

Configuring Glacier, while a straightforward process, will require some technical savvy on your part. Amazon has done a nice job of representing how Glacier works in an illustration:

[Illustration: How Amazon Glacier works]

As you can see, the first step is to download the Glacier software development kit (SDK), which is available for Java or .NET. Once you’ve got that, you’ll need to create your vault. This is an easy step that starts with accessing your Glacier management console, selecting your service region (Glacier is automatically redundant across availability zones in your region, which is part of the reason for its high durability rating), naming your vault, and hitting the create button. I’m using the sandbox environment that comes with your AWS account to take these screenshots, so the region is pre-selected. In a live environment, this would be a drop-down menu providing you with region options.

[Screenshot: Creating a Glacier vault]
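
If you would rather script vault creation than click through the console, a minimal boto3 sketch follows; the vault name is a hypothetical placeholder, and accountId="-" simply means the account that owns the credentials.

import boto3

glacier = boto3.client("glacier", region_name="us-east-1")

# Create the vault, then list vaults to confirm it exists.
glacier.create_vault(accountId="-", vaultName="my-archive-vault")

for vault in glacier.list_vaults(accountId="-")["VaultList"]:
    print(vault["VaultName"], vault["SizeInBytes"])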

The vault is where you’ll store your objects, each of which equates to a single file, like a document or a photo. But instead of proceeding directly to vault creation from the screen above, be sure to set up your vault’s Amazon Simple Notification Service (SNS) parameters.

[Screenshot: Configuring SNS notifications for the vault]
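
The same notification settings can be applied through the SDK. Below is a minimal boto3 sketch of pointing a vault at an existing SNS topic; the topic ARN and vault name are hypothetical placeholders.

import boto3

glacier = boto3.client("glacier")

# Publish Glacier's two vault event types to an existing SNS topic.
glacier.set_vault_notifications(
    accountId="-",
    vaultName="my-archive-vault",
    vaultNotificationConfig={
        "SNSTopic": "arn:aws:sns:us-east-1:123456789012:glacier-events",
        "Events": [
            "ArchiveRetrievalCompleted",
            "InventoryRetrievalCompleted",
        ],
    },
)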

Notifications can be created for a variety of operations and delivered to systems managers or applications using whatever protocol you need (HTTP for a homegrown web control or email for your sysadmin, for example). Once you create the vault from the notifications screen, you’re in your basic Glacier management console:

[Screenshot: The Glacier vault management console]

Uploading and downloading documents is where it gets technical. Currently, the web-based console above doesn’t have tools for managing archive operations like you’d find with S3. Uploading, downloading, deleting, or any other operation will require programming in whichever language you’ve downloaded the SDK for. You can use the AWS Identity and Access Management (IAM) service to attach user permissions to vaults and manage billing through your Account interface, but everything else happens at the code level. However, there are third-party Glacier consoles out there that can handle much of the development work in the background while presenting you with a much simpler management interface, such as CloudBerry Explorer 3.6. We won’t walk through the official SDK samples here (Amazon has plenty of resources on its Sample Code & Libraries site), but the sketch below shows the basic shape of an upload and an asynchronous retrieval.
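
This is a minimal boto3 sketch of both operations; the vault name, file name, and description are hypothetical placeholders, and the retrieval is asynchronous by design.

import boto3

glacier = boto3.client("glacier")

# Upload: boto3 computes the required SHA-256 tree hash automatically.
with open("backup-june.tar.gz", "rb") as f:
    archive = glacier.upload_archive(
        accountId="-",
        vaultName="my-archive-vault",
        archiveDescription="June backup",
        body=f,
    )
print("Archive ID:", archive["archiveId"])

# Retrieval: start a job, then wait for the SNS notification configured
# earlier (or poll describe_job) before fetching the output hours later.
job = glacier.initiate_job(
    accountId="-",
    vaultName="my-archive-vault",
    jobParameters={"Type": "archive-retrieval", "ArchiveId": archive["archiveId"]},
)
# Once the job completes:
# output = glacier.get_job_output(accountId="-", vaultName="my-archive-vault", jobId=job["jobId"])
# data = output["body"].read()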

On the upside, while programming for Glacier operations is a hurdle for non-programmers, if you’ve got the skills, it provides a lot of flexibility in designing your own archive and backup processes. You can assign vaults to any of the various backup operations being run by your business and define your own archive schedules. Essentially, that means you can configure a hierarchical storage management (HSM) architecture that natively incorporates AWS.

For example, imagine a typical server farm running in EC2. At the first tier, it’s using EBS for immediate, current data transactions, similar to a hard disk or SAN LUN. When files in your EBS store have been unused for a period of time, or if you’ve scheduled them to move at a recurring time (as with server snapshots), those files can be automatically moved to S3. Access between your EC2 servers and S3 isn’t quite as fast as EBS, but it’s still a nearline return on data requests. Once those files have lived on S3 for a time, you can give them a time-to-live (TTL) parameter after which they are automatically archived in Glacier. It’ll take some programming work, but unlike standard on-premises archival solutions, which are usually based on a proprietary architecture, using Java or .NET means you can configure your storage management any way you like – for different geographic locations, different departments, different applications, or even different kinds of data.
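
As a small illustration of the first hop in that hierarchy, here is a boto3 sketch of the scheduled snapshot step; the volume ID is a hypothetical placeholder. Note that EBS snapshots land in Amazon-managed S3 storage, while the S3-to-Glacier stage applies to file data you place in S3 yourself, via a lifecycle rule like the one sketched earlier in this compilation.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Tier 1 -> Tier 2: snapshot an EBS volume on a recurring schedule.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",     # placeholder production volume
    Description="nightly tier-down snapshot",
)
print("Snapshot started:", snapshot["SnapshotId"])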

And this kind of HSM design doesn’t have to be entirely cloud-based. Glacier works just as well with on-premises data, applications, or server management. There is no minimum or maximum amount of data you can archive with Glacier, though individual archives can’t be less than 1 byte or larger than 40 terabytes. To help you meet regulatory compliance requirements, Glacier uses secure protocols for data transfer and encrypts all data on the server side using key management and 256-bit encryption.

Pricing is extremely low and simple to calculate. Data stored in Glacier is $0.01 per gigabyte per month. Upload and retrieval operations run only $0.05 per 1,000 requests, and there is a pro-rated charge of $0.03 per gigabyte if you delete objects prior to 90 days of storage (delete after one month, for example, and you pay roughly the remaining $0.02 per gigabyte). Like everything else in AWS, Glacier is a powerful solution with highly customizable functionality, and you pay only for what you use. This service is definitely worth a very close look.

-Caleb Carter, Solutions Architect
