
The next frontier: recovering in the cloud

The technology industry has created the cloud and all the acronyms that go with it. Growth is fun, and the cloud is the talk of the town. From the California sun to the Kentucky coal mines, everyone is going to the cloud, though Janis Joplin may have been there before her time. Focus and clarity will come later.

There is so much data being stored today that the biggest challenge is going to be how to quantify it, store it, access it and recover it. Cloud-based disaster recovery has broad appeal across industries and company sizes. Using the AWS cloud enables more efficient disaster recovery of mission-critical applications without any upfront cost or commitment. AWS allows customers to provision virtual private clouds on its infrastructure, which offers complete network isolation and security. The cloud can be used to configure a "pilot-light" architecture, which dramatically reduces cost compared to traditional data centers, where the concept of "pilot" or "warm" is not an option – you pay for your infrastructure around the clock whether it's used or not. With AWS, you only pay for what you use, and you have complete control of your data and its security.
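As an illustration of how lightweight a pilot-light failover can be, the sketch below (using the AWS CLI, which this post does not prescribe – it is simply one convenient tool) powers on a stopped standby instance and repoints an Elastic IP at it. The instance and allocation IDs are placeholders.

#!/bin/bash
# Hypothetical pilot-light failover sketch; all IDs below are placeholders.
STANDBY_INSTANCE="i-0123456789abcdef0"
EIP_ALLOCATION="eipalloc-0123456789abcdef0"

# Power on the standby instance that normally sits stopped, accruing only storage cost.
aws ec2 start-instances --instance-ids "$STANDBY_INSTANCE"
aws ec2 wait instance-running --instance-ids "$STANDBY_INSTANCE"

# Move the public Elastic IP so traffic reaches the recovered environment.
aws ec2 associate-address --instance-id "$STANDBY_INSTANCE" --allocation-id "$EIP_ALLOCATION"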

Backing data up is relatively simple: select an object to be backed up and click a button. More often than not, the encrypted data reaches its destination, whether on a local storage device or in an S3 bucket in an AWS region in Ireland. Restoring the data has always been the harder challenge. What the cloud does is make testing of your backup and recovery capabilities more flexible and more cost effective. As the cost of cloud-based testing falls rapidly, from thousands of dollars to hundreds, you test more often, and therefore have more success recovering after a failure – whether it comes from a superstorm or something far more mundane.

-Nick Desai, Solutions Architect


Disaster Recovery on Amazon Web Services

After a seven-year career at Cisco, I am thrilled to be working with 2nd Watch to help make cloud work for our customers. Disaster recovery is a great use case for companies looking for a quick cloud win. Traditional on-premise backup and disaster recovery technologies are complex and can require major capital investments. I have seen many emails from IT management to senior management explaining the risk to the business if they do not spend money on backup/DR. The costs associated with on-premise DR solutions are often pushed off to next year's budget and always seem to get cut midway through the year. When the business does invest in backup/DR, countless corners are cut trying to squeeze maximum reliability and performance out of a limited budget.

The public cloud is a great resource for addressing the requirements of backup and disaster recovery. Organizations can avoid the sunk costs of data center infrastructure (power, cooling, flooring, etc.) while having all of the performance and resources available for their growing needs. Atlanta-based What's Up Interactive saved over $1 million with their AWS DR solution (case study here).

I will highlight a few of the top benefits any company can expect when leveraging AWS for their disaster recovery project.

1. Eliminate costly tape backups that require transporting, storing, and retrieving tape media. They are replaced by fast disk-based storage that provides the performance needed to run your mission-critical applications.

2. AWS provides an elastic solution that can scale with the growth of your data or application requirements without the costly capital expenses of traditional technology vendors. Companies can also expire and delete archived data according to organizational policy (see the sketch after this list), so they pay for only what they use.

3. AWS provides a secure, durable platform that helps companies meet compliance requirements, because the data needed for audits and deadlines is easy to access.
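To make the expiration point in item 2 concrete, here is a hedged sketch using the AWS CLI to expire objects under a backups/ prefix after 365 days. The bucket name, prefix, and retention period are placeholders, not a recommendation.

# Write a lifecycle rule that deletes archived objects after 365 days.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-old-backups",
      "Filter": { "Prefix": "backups/" },
      "Status": "Enabled",
      "Expiration": { "Days": 365 }
    }
  ]
}
EOF

# Apply the rule to the bucket (bucket name is a placeholder).
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-backup-bucket \
  --lifecycle-configuration file://lifecycle.json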

Every day we help new customers leverage the cloud to support their business.  Our team of Solutions Architects and Cloud Engineers can assist you in creating a plan to reduce risk in your current backup/DR solution.  Let us know how we can help you get started in your journey to the cloud.

-Matt Whitney, Cloud Sales Executive


Highly Available (HA) vs. Highly Reliable (HR)

The other day I was working with my neighbor’s kid on his baseball fundamentals and I found myself repeating the phrase “Remember the Basics.”  Over the course of the afternoon we played catch, worked on swinging the bat, and fielded grounders until our hands hurt. As the afternoon slipped into the evening hours, I started to see that baseball and my business have several similarities.

My business is Cloud Computing, and my company, 2nd Watch, is working to pioneer Cloud Adoption with Enterprise businesses.  As we discover new ways of integrating systems, performing workloads, and running applications, it is important for us not to forget the fundamentals. One of the most basic elements of this is using the proper terminology. I’ve found that in many cases my customers, partners, and even my colleagues can have different meanings for many of the same terms. One example that comes up frequently is the difference between having a Highly Available infrastructure vs. Highly Reliable infrastructure. I want to bring special attention to these two terms and help to clearly define their meaning.

High Availability (HA) is about designing and implementing systems that can proactively handle the operational capacity required to meet their performance targets. For example, within Cloud Computing we leverage tools like Elastic Load Balancing and Auto Scaling to automate the scaling of infrastructure to handle the variable demand on web sites. As traffic increases, servers are spun up to handle the load, and they are removed as it decreases. If a user cannot access a website, or it is deemed "unavailable," then you risk the potential loss of readership, sales, or the attrition of customers.
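As a minimal sketch of that HA pattern (assuming the AWS CLI and an existing AMI, load balancer, and subnets – all names and IDs below are placeholders, and target tracking on CPU is just one reasonable scaling trigger):

# Launch configuration describing the web servers to spin up.
aws autoscaling create-launch-configuration \
  --launch-configuration-name web-lc \
  --image-id ami-0123456789abcdef0 \
  --instance-type m3.medium

# Auto Scaling group attached to an existing load balancer, kept between 2 and 10 servers.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-configuration-name web-lc \
  --min-size 2 --max-size 10 --desired-capacity 2 \
  --load-balancer-names web-elb \
  --vpc-zone-identifier "subnet-11111111,subnet-22222222"

# Add servers as average CPU climbs toward 60% and remove them as traffic falls off.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":60.0}'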

On the other hand, Highly Reliable (HR) systems have to do with your Disaster Recovery (DR) model and how well you prepare for catastrophic events. In Cloud Computing, we design for failure because anything can happen at any time. Having a proper Disaster Recovery plan in place will enable your business to keep running if problems arise. Any company with sensitive IT operations should look into a proper DR strategy, which will support the company's ability to be resilient in the event of failure. While a well-planned DR scheme may cost you more money upfront, being able to support both your internal and external customers will pay off in spades if an event takes place that requires you to fail over.

In today's business market it is important to take the assumptions out of our day-to-day conversations and make sure that we're all on the same page. The difference between Highly Available and Highly Reliable systems is a great example of this. By simply going back to the fundamentals, we can easily make sure that our expectations are met and that our colleagues, partners, and clients understand both our spoken and written words.

-Blake Diers, Cloud Sales Executive


Amazon S3: Backup Your Laptop for Less

For quite some time I've been meaning to tinker around with using Amazon S3 as a backup tool. Sure, I've been using S3-backed Dropbox for years now and love it, and there are a multitude of other desktop client apps out there that do the same sort of thing with varying price points and feature sets (including Amazon's own Cloud Drive). The primary reason I wanted to look into something specific to S3 is that it is economical, highly available, and secure, and it also scales well in a more enterprise setting. It is just a logical and compelling choice if you are already running IaaS in AWS.

If you're unfamiliar with rsync, it is a UNIX tool for copying files or sets of files, with many cool features. Probably its most distinctive feature is differential copying, which means it will only copy files that have changed on the source. So if you have a file set containing thousands of files that you want to sync between the source and the destination, it only has to copy the files that have changed since the last copy/sync.
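A quick illustration of that differential behavior (the paths and host are placeholders):

# First run copies everything under Documents/ to the backup host.
rsync -av --delete /home/rkennedy/Documents/ backuphost:/backups/Documents/

# Run it again right away and almost nothing is transferred, because rsync only
# copies files whose size or modification time (or checksum, with -c) has changed.
rsync -av --delete /home/rkennedy/Documents/ backuphost:/backups/Documents/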

Being an engineer, my initial thought was, "Hey, why not just write a little python program using the boto AWS API libs and librsync to do it?" But I am also kind of lazy, and I know I'm not that forward-thinking, so I figured someone had probably already done this. I consulted the Google machine and sure enough… 20 seconds later I had discovered Duplicity (http://duplicity.nongnu.org/). Duplicity is an open source, GPL, Python-based application that does exactly what I was aiming for – it allows you to rsync files to an S3 bucket. In fact, it even has some additional functionality, like encrypting and password-protecting the data.

A little background info on AWS storage/backups

Tying in to my earlier point about wanting to use S3 for EC2 Linux instances, traditional Linux AWS EC2 instance backups are achieved using EBS snapshots. This can work fairly well but has a number of limitations and potential pitfalls/shortcomings.

Here is a list of advantages and disadvantages of using EBS snapshots for Linux EC2 instance backup purposes. In no way are these lists fully comprehensive:

Advantages:

  • Fast
  • Easy/Simple
  • Easily scriptable using API tools (see the sketch after these lists)
  • Pre-backed functionality built into the AWS APIs and Management Console

Disadvantages:

  • Non-selective (requires backing up an entire EBS volume)
  • More expensive
    • EBS is more expensive than S3
    • Backing up an entire EBS volume can be overkill for what you actually need backed up and can result in a lot of extra cost for backing up non-essential data
  • Pitfalls with multiple EBS volume software RAID or LVM sets
    • Multiple EBS volume sets are difficult to snapshot synchronously
    • Using the snapshots for recovery requires significant work to reconstruct volume sets
  • No ability to capture only the files that have changed since the previous backup (i.e., rsync-style backups)
  • Only works on EBS-backed instances
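As for the "easily scriptable" advantage above, a snapshot job can be a couple of lines dropped into cron. This is only a sketch, assuming the AWS CLI is available; the volume ID, script path, and schedule are placeholders.

# Take a snapshot of the data volume, stamped with today's date.
VOLUME_ID="vol-0123456789abcdef0"
aws ec2 create-snapshot \
  --volume-id "$VOLUME_ID" \
  --description "nightly-backup-$(date +%F)"

# Example crontab entry to run the script above every night at 2am:
# 0 2 * * * /usr/local/bin/ebs-snapshot.sh >> /var/log/ebs-snapshot.log 2>&1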

Compare that to a list of advantages/disadvantages of using the S3/Duplicity solution:

Advantages:

  • Inexpensive (S3 is cheap)
  • Data security (redundant and geographically distributed storage)
  • Works on any Linux system that has connectivity to S3
  • Should work on any UNIX-style OS (including Mac OS X) as well
  • Only copies the deltas in the files and not the entire file or file-set
  • Supports “Full” and “Incremental” backups
  • Data is compressed with gzip
  • Lightweight
  • FOSS (Free and Open Source Software)
  • Works independently of underlying storage type (SAN, Linux MD, LVM, NFS, etc.) or server type (EC2, Physical hardware, VMWare, etc.)
  • Relatively easy to set up and configure
  • Uses syntax that is congruent with rsync (e.g. --include, --exclude)
  • Can be restored anywhere, anytime, and on any system with S3 access and Duplicity installed

Disadvantages:

  • Slower than a snapshot, which is virtually instantaneous
  • Not ideal for backing up data sets with large deltas between backups
  • No out-of-the-box type of AWS API or Management Console integration (though this is not really necessary)
  • No “commercial” support

On to the important stuff!  How to actually get this thing up and running

Things you’ll need:

  • The Duplicity application (should be installable via either yum, apt, or other pkg manager). Duplicity itself has numerous dependencies but the package management utility should handle all of that.
  • An Amazon AWS account
  • Your Amazon S3 Access Key ID
  • Your Amazon S3 Secret Access Key
  • A list of files/directories you want to back up
  • A globally unique name for an Amazon S3 bucket (the bucket will be created if it doesn’t yet exist)
  • If you want to encrypt the data:
    • A GPG key
    • The corresponding GPG key passphrase
  1. Obtain/install the application (and its prerequisites):

If you're running a standard Linux distro you can most likely install it from a 'yum' or 'apt' repository (depending on distribution). Try something like "sudo yum install duplicity" or "sudo apt-get install duplicity". If all else fails (perhaps you are running some esoteric Linux distro like Gentoo?), you can always do it the old-fashioned way and download the tarball from the website and compile it (that is outside the scope of this blog). "Use the source, Luke." If you are a Mac user you can also compile and run it on Mac OS X (http://blog.oak-tree.us/index.php/2009/10/07/duplicity-mac), which I have not tested/verified actually works.

  • NOTE: On Fedora Core 18, Duplicity was already installed and worked right out of the box.  On a Debian Wheezy box I had to apt-get install duplicity and python-boto.  YMMV
  2. Generate a GPG key if you don't already have one:
  • If you need to create a GPG key, use 'gpg --gen-key' to create a key with a passphrase. The default values supplied by 'gpg' are fine.
  • NOTE: record the GPG Key value that it generates because you will need that!
  • NOTE: keep a backup copy of your GPG key somewhere safe.  Without it you won’t be able to decrypt your backups, and that could make restoration a bit difficult.
  3. Run Duplicity, backing up whatever files/directories you want saved in the cloud. I'd recommend reading the man page for a full rundown of all the options and syntax.

I used something like this:

$ export AWS_ACCESS_KEY_ID='AKBLAHBLAHBLAHMYACCESSKEY'

$ export AWS_SECRET_ACCESS_KEY='99BIGLONGSECRETKEYGOESHEREBLAHBLAH99'

$ export PASSPHRASE='mygpgpassphrase'

$ duplicity incremental --full-if-older-than 1W --s3-use-new-style --encrypt-key=MY_GPG_KEY --sign-key=MY_GPG_KEY --volsize=10 --include=/home/rkennedy/bin --include=/home/rkennedy/code --include=/home/rkennedy/Documents --exclude=** /home/rkennedy s3+http://myS3backup-bucket
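Once a run finishes, Duplicity can also report what it has stored in the bucket – a quick sanity check before you ever need a restore (these are standard Duplicity actions; the bucket URL matches the one used above):

$ duplicity collection-status s3+http://myS3backup-bucket

$ duplicity list-current-files s3+http://myS3backup-bucket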

 

  4. Since we are talking about backups and rsync, this is probably something you will want to run more than once. Writing a small bash script and kicking it off automatically with cron seems like something a person might want to do (a minimal wrapper is sketched below). Here is a pretty nice example of how you could script this – http://www.cenolan.com/2008/12/how-to-incremental-daily-backups-amazon-s3-duplicity/
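Here is a minimal wrapper along those lines, reusing the values from step 3. It is only a sketch – in practice you would keep the keys and passphrase somewhere better protected than a plain script.

#!/bin/bash
# Nightly Duplicity backup to S3 (keys, passphrase, paths, and bucket carried over from the example above).
export AWS_ACCESS_KEY_ID='AKBLAHBLAHBLAHMYACCESSKEY'
export AWS_SECRET_ACCESS_KEY='99BIGLONGSECRETKEYGOESHEREBLAHBLAH99'
export PASSPHRASE='mygpgpassphrase'

duplicity incremental --full-if-older-than 1W --s3-use-new-style \
  --encrypt-key=MY_GPG_KEY --sign-key=MY_GPG_KEY --volsize=10 \
  --include=/home/rkennedy/bin --include=/home/rkennedy/code \
  --include=/home/rkennedy/Documents --exclude='**' \
  /home/rkennedy s3+http://myS3backup-bucket

unset PASSPHRASE AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY

# Example crontab entry to run this every night at 1:30am:
# 30 1 * * * /home/rkennedy/bin/s3backup.sh >> /var/log/s3backup.log 2>&1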

 

  5. Recovery is also pretty straightforward:

$ duplicity --encrypt-key=MY_GPG_KEY --sign-key=MY_GPG_KEY --file-to-restore Documents/secret_to_life.docx --time 05-25-2013 s3+http://myS3backup-bucket /home/rkennedy/restore

Overwhelmed or confused by all of this command line stuff? If so, Deja-dup might be helpful. It is a GNOME-based GUI application that provides the same functionality as Duplicity (it turns out the two projects actually share a lot of code and are worked on by some of the same developers). Here is a handy guide on using Deja-dup for making Linux backups: http://www.makeuseof.com/tag/dj-dup-perfect-linux-backup-software/

This is pretty useful, and for $4 a month, or about the average price of a latte, you can store nearly 50GB of compressed, de-duped backups in S3 (standard tier). For just a nickel you can get at least 526MB of S3 backup for a month. Well, that and the 5GB of S3 you get for free.

-Ryan Kennedy, Senior Cloud Engineer


Get Yourself the Right Cloud Provider SLA

Making sure your cloud provider is giving you what you need comes down to the Service Level Agreement (SLA). This wasn't so difficult in years past, because off-site computing was usually limited to non-critical functions like archive storage, web serving, and sometimes "dark" disaster recovery infrastructure. Keeping mission-critical infrastructure and applications in-house made it easier to measure service levels because you had full visibility across your entire infrastructure and knew your staff's capabilities, your budget, your compliance needs, and your limitations. But companies are now saving significant money by moving more and more IT functionality into the cloud. When that happens, SLAs not only change, they become very important, often critically so.

A cloud SLA can't be accepted as "boilerplate" from the customer's perspective. You need to carefully analyze what your provider is offering, and you need to ensure that it's specific and measurable from your side. Not every provider can give you a customized SLA, especially the largest providers; but most, including AWS, can give you augmented SLAs via partners, which can be more easily bent to your needs. If you've analyzed your provider's standard SLA and it's not cutting the mustard, then working with a partner is really your best option.

The most common criterion in an SLA is downtime. Most providers will offer "five 9s" in this regard, or 99.999% uptime (a little over five minutes of downtime per year), though often this applies to cloud services, not necessarily cloud infrastructure. That's because cloud infrastructure downtime is subject to more factors than a service. In a service model, the provider knows they're responsible for all aspects of delivery; so, similar to an internal SLA, they have full visibility over their own infrastructure, software, datacenter locations and so on. But when customers move infrastructure into the cloud, there are two sides to possible downtime – yours and theirs. A virtual network may crash because one of your network administrators goofed, not because of anything the provider did, and those issues need to be resolved before help can be provided and systems restored. It's very important that your SLA cover not only this situation but also the steps that will be taken by both sides to resolve the issue. A weak SLA here gives the provider too much leeway to push back or delay. And on the flip side, your IT staff needs clear escalation steps and time-to-resolve metrics in place so they aren't the resolution blocker either.

Another important concern, and often an unnecessary blocker to the benefits of cloud computing, is making sure your applications are properly managed so they can comply with the regulatory requirements specific to your business. You and your customers should be able to feel safe putting data into the cloud, and compliance audits shouldn't turn your architecture into a problem. We've seen customers who thought this was a show-stopper when considering cloud adoption, when really it just takes some planning and foresight.

Last, take a long look at your business processes. Aside from cost savings, what impact is the cloud having on the way you do business? What will be the impact if it fails? Are you dead in the water, or are there backup processes in place? Answering these questions will effectively provide you with two cloud SLAs: the provider's and your own. The two need to be completely in sync, both to protect your business and to make your cloud adoption a success, keeping the door open to new opportunities to leverage cloud computing.

As I mentioned earlier, a good way to do this is to work with one of the larger cloud providers' value-add partners. With AWS, for example, you can work with a certified partner like 2nd Watch to purchase tiered SLAs that fit your needs and build off the SLAs offered by AWS. For example, 2nd Watch is the first partner to offer a 99.99% uptime SLA on top of AWS' uptime SLA for all enterprise applications. We also offer both technical incident response and Managed Services, which takes managing your applications off your plate, works to spot possible technical problems before they happen, and helps ensure your application adheres to compliance regulations. For any of these offerings, customers can opt for Select, Executive, or Premier SLAs. Each of these has its own market-leading uptime, problem-response, and management service agreements, so you can tailor an SLA to your needs and budget.

-Jeff Aden, President
