
A Refresher Course on Disaster Recovery with AWS

IT infrastructure is the hardware, network, services and software required for enterprise IT. It is the foundation that enables organizations to deliver IT services to their users. Disaster recovery (DR) is the practice of preparing for and recovering from natural and people-related disasters that impact the IT infrastructure behind critical business functions. Natural disasters include earthquakes, fires, etc. People-related disasters include human error, terrorism, etc. Business continuity differs from DR in that it involves keeping all aspects of the organization functioning, not just IT infrastructure.

When planning for DR, companies must establish a recovery time objective (RTO) and a recovery point objective (RPO) for each critical IT service. RTO is the maximum acceptable time within which an IT service must be restored. RPO is the maximum acceptable amount of data loss, measured in time; for example, an RPO of one hour implies backups or replication must run at least hourly. Companies establish both RTOs and RPOs to mitigate financial and other types of loss to the business. Companies then design and implement DR plans to effectively and efficiently recover the IT infrastructure necessary to run critical business functions.

For companies with corporate datacenters, the traditional approach to DR involves duplicating IT infrastructure at a secondary location to ensure available capacity in a disaster. The key downside is that the IT infrastructure must be bought, installed, and maintained in advance to meet anticipated capacity requirements, which often leaves infrastructure in the secondary location over-procured and under-utilized. In contrast, Amazon Web Services (AWS) provides companies with access to enterprise-grade IT infrastructure that can be scaled up or down for DR as needed.

The four most common DR architectures on AWS are:

  • Backup and Restore ($) – Companies can use their current backup software to replicate data into AWS, using Amazon S3 for short-term archiving and Amazon Glacier for long-term archiving. In the event of a disaster, data can be made available on AWS infrastructure or restored from the cloud back onto an on-premise server.
  • Pilot Light ($$) – While Backup and Restore focuses on data, Pilot Light includes applications. Companies provision only the core infrastructure needed for critical applications. When disaster strikes, Amazon Machine Images (AMIs) and other automation services are used to quickly provision the remaining environment for production.
  • Warm Standby ($$$) – Taking the Pilot Light model one step further, Warm Standby creates an active/passive cluster. The minimum amount of capacity is provisioned in AWS; when needed, the environment rapidly scales up to meet full production demands, giving companies near-100% uptime and near-zero downtime.
  • Hot Standby ($$$$) – Hot Standby is an active/active cluster with both cloud and on-premise components. Using weighted DNS load balancing, IT determines how much application traffic to process in-house and how much on AWS. If a disaster or spike in load occurs, more or all of the traffic can be routed to AWS with auto scaling (a minimal sketch of the DNS traffic split follows this list).
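
To make the weighted-DNS idea behind Hot Standby concrete, here is a minimal boto3 sketch that adjusts the traffic split between an on-premise endpoint and an AWS endpoint in Route 53. The hosted zone ID, record name, IP addresses, and 100-point weighting convention are all placeholders, not a prescribed design.

```python
# Minimal sketch: shift traffic between the on-premise and AWS sides of an
# active/active (hot standby) setup by adjusting Route 53 weighted records.
import boto3

route53 = boto3.client("route53")

def set_traffic_split(zone_id, record_name, onprem_ip, aws_ip, aws_weight):
    """Route aws_weight/100 of traffic to AWS and the remainder on-premise."""
    changes = []
    for set_id, ip, weight in [
        ("on-premise", onprem_ip, 100 - aws_weight),
        ("aws", aws_ip, aws_weight),
    ]:
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",
                "SetIdentifier": set_id,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        })
    route53.change_resource_record_sets(
        HostedZoneId=zone_id, ChangeBatch={"Changes": changes}
    )

# During a disaster, send everything to AWS (placeholder values):
# set_traffic_split("Z123EXAMPLE", "app.example.com", "203.0.113.10", "198.51.100.20", 100)
```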

In a non-disaster environment, warm standby DR is not scaled for full production, but it is fully functional. To help absorb/justify the cost, companies can use the DR site for non-production work, such as quality assurance, testing, etc. For hot standby DR, cost is determined by how much production traffic is handled by AWS in normal operation. In the recovery phase, companies pay only for the additional capacity they use, and only for the duration the DR site runs at full scale. In hot standby, companies can further reduce the costs of their “always on” AWS servers with Reserved Instances (RIs).

Smart companies know disaster is not a matter of if, but when. According to a study done by the University of Oregon, every dollar spent on hazard mitigation, including DR, saves companies four dollars in recovery and response costs. Beyond cost savings, smart companies also view DR as critical to their survival. For example, 51% of companies that experienced a major data loss closed within two years (Source: Gartner), and 44% of companies that experienced a major fire never re-opened (Source: EBM). Again, disaster is not a matter of if, but when. Be ready.

-Josh Lowry, General Manager – West


Backups – Don’t Do It

In migrating customers to AWS, one of the questions we are consistently asked is, “How do we extend our backup services into the Cloud?”  My answer?  You don’t.  This is often met with incredulous stares, the customer wondering whether I’m joking, crazy, or simply don’t understand IT.  After all, backups are fundamental to data centers and to IT systems in general, so why on Earth would I tell someone not to back up their systems?

The short answer really is just that: don’t do it.  The more in-depth answer is, of course, more complicated.  To be clear, I am talking about system backups, the kind typically used for bare-metal restores.  Backups of databases and file services we’ll tackle separately.  For the bulk of systems, however, we’ll leave backups as a relic of on-premise data centers.

How?  Why?  Consider a typical three-tiered architecture: web servers, application servers, and database servers.  In AWS, your application and web servers are ideally stateless, auto scaled systems.  With that in mind, why would you ever want to spend time, money, or resources backing up and restoring one of these systems?  The design should be such that if and when a system fails, the health check/monitoring automatically terminates the instance, which in turn triggers an auto scaling event to launch a new instance in its place.  No painfully long hours working through a restore process.
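
As an illustration of that replace-instead-of-restore pattern, here is a minimal boto3 sketch of an Auto Scaling group that relies on load balancer health checks to terminate and replace unhealthy instances. The group name, launch template, subnets, and target group ARN are placeholders.

```python
# Minimal sketch: an Auto Scaling group for a stateless web tier that replaces
# failed instances automatically instead of restoring them from backup.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier",                               # hypothetical name
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",           # placeholder subnets
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-west-2:111111111111:targetgroup/web/abc123"],
    HealthCheckType="ELB",        # mark instances unhealthy when the LB health check fails
    HealthCheckGracePeriod=300,   # give new instances time to boot before checking
)
```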

Similarly, your database systems can work without large scale backup systems.  Yes, by all means run database backups!  Database backups are not for server instance failures but for database application corruption or updates/upgrade rollbacks. Unfortunately, the Cloud doesn’t magically make your databases any more immune to human error.  For the database servers (assuming non-RDS), however, maintaining a snapshot of the server instance is likely good enough for backups.  If and when the database server fails, the instance can be terminated and the standby system can become the live system to maintain system integrity.  Launch a new database server based on the snapshot, restore the database and/or configure replication from the live system, depending on database technology, and you’re live.
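
For the instance-snapshot approach described above, a minimal boto3 sketch might look like the following; the instance ID is a placeholder, and a real setup would run this on a schedule and prune old images. Database-level backups still happen separately, as noted above.

```python
# Minimal sketch, assuming a self-managed (non-RDS) database server on EC2:
# keep a periodic machine image of the server so a replacement can be launched
# from it, then restore or replicate the data from database-level backups.
import datetime
import boto3

ec2 = boto3.client("ec2")

image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",                      # placeholder DB server
    Name="db-server-" + datetime.date.today().isoformat(),
    NoReboot=True,   # capture the image without stopping the running server
)
print("Created AMI:", image["ImageId"])
```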

So yes, in a properly configured AWS environment, the backup and restore you love to loathe from your on premise environment is a thing of the past.

-Keith Homewood, Cloud Architect


Managing Your Amazon Cloud Deployment to Save Money

Wired Innovation Insights published a blog article written by our own Chris Nolan yesterday. Chris discusses ways you can save money on your AWS cloud deployment in “How to Manage Your Amazon Cloud Deployment to Save Money.” Chris’ top tips include:

  1. Use CloudFormation or another configuration and orchestration tool.
  2. Watch out for cloud sprawl.
  3. Use AWS auto scaling.
  4. Turn the lights off when you leave the room (see the sketch after this list).
  5. Use tools to monitor spend.
  6. Build in redundancy.
  7. Planning saves money.
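
As a hedged illustration of tip 4, here is a minimal boto3 sketch that stops running EC2 instances carrying a hypothetical Schedule=office-hours tag, e.g. from a nightly scheduled job. The tag convention is an assumption for this example, not an AWS standard.

```python
# Minimal sketch: "turn the lights off" by stopping non-production instances
# tagged Schedule=office-hours outside business hours.
import boto3

ec2 = boto3.client("ec2")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Schedule", "Values": ["office-hours"]},   # hypothetical tag
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print("Stopped:", instance_ids)
```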

Read the Full Article


Increasing Your Cloud Footprint

The jump to the cloud can be a scary proposition.  For an enterprise with systems deeply embedded in traditional infrastructure like back-office computer rooms and datacenters, the move to the cloud can be daunting. The thought of having all of your data in someone else’s hands can make some IT admins cringe.  However, once you start looking into cloud technologies, you start seeing some of the great benefits, especially with providers like Amazon Web Services (AWS).  The cloud can be cost-effective, elastic and scalable, flexible, and secure.  That same IT admin cringing at the thought of their data in someone else’s hands may finally realize that AWS is a bit more secure than a computer rack sitting under an employee’s desk in a remote office.  Once the decision is finally made to “try out” the cloud, the planning phase can begin.

Most of the time the biggest question is, “How do we start with the cloud?”  The answer is to use a phased approach.  By picking applications and workloads that are less mission critical, you can try the newest cloud technologies with less risk.  When deciding which workloads to move, ask yourself the following questions: Is there a business need for moving this workload to the cloud?  Is the technology a natural fit for the cloud?  What impact will this have on the business?  If all of those questions have satisfactory answers, the workload is a good candidate for the cloud.

One great place to start is with archiving and backups.  These types of workloads are important, but the data you’re dealing with is likely just a copy of data you already have, so it is considerably less risky.  The easiest way to start with archives and backups is to try out S3 and Glacier.  Many of the backup utilities you may already be using, like Symantec NetBackup and Veeam Backup & Replication, have cloud versions that can back up directly to AWS. This allows you to start using the cloud without changing much of your embedded backup processes.  By moving less critical workloads you are taking the first steps in increasing your cloud footprint.
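
If you want to experiment with the same idea directly, a minimal boto3 sketch of the archive workflow looks like this; the bucket name, key prefix, file name, and 30-day transition window are placeholders.

```python
# Minimal sketch: push a backup archive to S3 (short-term archive) and add a
# lifecycle rule that transitions older archives to Glacier (long-term archive).
import boto3

s3 = boto3.client("s3")

# Upload a backup archive to S3.
s3.upload_file("nightly-backup.tar.gz", "example-backup-bucket",
               "backups/nightly-backup.tar.gz")

# Transition objects under backups/ to Glacier after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```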

Now that you have moved your backups to AWS using S3 and Glacier, what’s next?  The next logical step would be to try some of the other services AWS offers.  Another workload that can often be moved to the cloud is Disaster Recovery.  DR is an area that will let you use more AWS services, like VPC, EC2, EBS, RDS, Route53 and ELBs.  DR is a perfect way to increase your cloud footprint because it allows you to re-create your current environment, which you should already be very familiar with, in the cloud.  A Pilot Light DR solution is one type of DR solution commonly seen in AWS.  In the Pilot Light scenario the DR site has minimal systems and resources, with the core elements already configured to enable rapid recovery once a disaster happens.  To build a Pilot Light DR solution you would create the AWS network infrastructure (VPC), deploy the core AWS building blocks needed for the minimal Pilot Light configuration (EC2, EBS, RDS, and ELBs), and determine the process for recovery (Route53).  When it is time for recovery, all the other components can be quickly provisioned to give you a fully working environment. By moving DR to the cloud you’ve increased your cloud footprint even more and are on your way to cloud domination!
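
As a rough sketch of the recovery step in a Pilot Light design, the dormant instances are started and DNS is pointed at the DR environment. The instance IDs, hosted zone ID, record name, and ELB DNS name below are all placeholders.

```python
# Minimal sketch of Pilot Light recovery: start the pre-built but stopped
# instances in the DR VPC, then update Route 53 so traffic follows the
# recovered environment.
import boto3

ec2 = boto3.client("ec2")
route53 = boto3.client("route53")

# 1. Bring the dormant pilot-light instances online.
ec2.start_instances(InstanceIds=["i-0aaa1111", "i-0bbb2222"])

# 2. Flip DNS to the DR site's load balancer.
route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "TTL": 60,
                "ResourceRecords": [
                    {"Value": "dr-elb-123456.us-west-2.elb.amazonaws.com"}
                ],
            },
        }]
    },
)
```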

The next logical step is to move Test and Dev environments into the cloud. Here you can get creative with the way you use the AWS technologies.  When building systems on AWS, make sure to follow the Architecting Best Practices: designing for failure means nothing will fail, decouple your components, take advantage of elasticity, build security into every layer, think parallel, and don’t fear constraints! Start with a proof of concept (POC) in the development environment, and use AWS reference architectures to aid in the learning and planning process.  Next, install your legacy application in the new environment and migrate the data.  The POC is not complete until you validate that it works and that performance meets your expectations.  Once you get to this point, you can reevaluate the build and optimize it to the exact specifications needed. Finally, you’re one step closer to deploying actual production workloads to the cloud!
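
To make one of those best practices, decoupling your components, a bit more concrete, here is a minimal boto3 SQS sketch in which a web tier queues work for a worker tier instead of calling it directly. The queue name and message format are assumptions for this example.

```python
# Minimal sketch: decouple tiers with SQS so either tier can fail or scale
# independently of the other.
import json
import boto3

sqs = boto3.client("sqs")

queue_url = sqs.create_queue(QueueName="image-processing-jobs")["QueueUrl"]

# Producer (web tier): enqueue a job and return immediately.
sqs.send_message(QueueUrl=queue_url,
                 MessageBody=json.dumps({"object_key": "uploads/photo.jpg"}))

# Consumer (worker tier): poll for work, process it, then delete the message.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                               WaitTimeSeconds=10).get("Messages", [])
for msg in messages:
    job = json.loads(msg["Body"])
    # ... process the job here ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```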

Production workloads are obviously the most important, but with the phased approach you’ve taken to increase your cloud footprint, it’s not that far of a jump from the other workloads you now have running in AWS.  Some of the important things to remember to be successful with AWS include keeping up with the rapid pace of the technology (improved services and price drops), remembering that security is your responsibility as well as Amazon’s, and recognizing that there isn’t a one-size-fits-all solution.  Lastly, all workloads you implement in the cloud should still have the same stringent security and comprehensive monitoring you would apply to any of your on-premises systems.

Overall, a phased approach is a great way to start using AWS.  Start with simple services and traditional workloads that are a natural fit for AWS (e.g. backups and archiving).  Next, start to explore other AWS services by building out environments that are familiar to you (e.g. DR). Finally, experiment with POCs and the entire gamut of AWS services to benefit from more efficient production operations.  Like many new technologies, adoption takes time. By increasing your cloud footprint over time, you can set expectations for cloud technologies in your enterprise and make the cloud a more comfortable proposition for all.

-Derek Baltazar, Senior Cloud Engineer


How secure is secure?

There have been numerous articles, blogs, and whitepapers about the security of the Cloud as a business solution.  Amazon Web Services has a site devoted to extolling its security virtues, and there are several sites devoted entirely to the ins and outs of AWS security.  So rather than walk through each and every security feature of AWS and try to convince you how secure the environment can be, my goal is to share a real-world example of how security can improve by moving from on-premise datacenters to AWS.

Many AWS implementations are used for hosting web applications, most of which are Internet accessible.  Obviously, if your environment is for internal use only you can lock down security even further, but for the purposes of this exercise, we’re assuming Internet-facing web applications.  The inherent risk with any Internet-accessible application, of course, is that Internet access is available not only to legitimate users but also to hackers, malicious users, and honest yet malware/virus/Trojan-infected users.

As with on-premise and colocation-based web farms, AWS offers the standard security practice of isolating customers from one another, so that if one customer experiences a security breach, all other customers remain secure.  And of course, AWS Security Groups function like traditional firewalls, allowing traffic only through allowed ports to/from specific destinations/sources.  Where AWS moves ahead of traditional datacenters is in the flexibility Security Groups and Network ACLs provide for responding to attacks.  Consider a web farm with components suspected of being compromised: AWS Security Groups can be created in seconds to isolate the suspected components from the rest of the network.  In a traditional datacenter environment, isolating those components may require complex network changes to move them onto separate networks and prevent the infection from spreading over the network, something AWS blocks by default.
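
A minimal boto3 sketch of that isolation scenario: create an empty quarantine Security Group (no inbound rules) and make it the suspect instance’s only group. The VPC ID and instance ID are placeholders.

```python
# Minimal sketch: isolate a suspected-compromised instance in seconds by
# swapping its security groups for an empty "quarantine" group.
import boto3

ec2 = boto3.client("ec2")

quarantine = ec2.create_security_group(
    GroupName="quarantine",
    Description="No inbound access - isolates suspected-compromised instances",
    VpcId="vpc-0123456789abcdef0",          # placeholder VPC
)

# Replacing the instance's security groups cuts it off from the rest of the
# environment with no physical network changes.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",        # placeholder instance
    Groups=[quarantine["GroupId"]],
)
```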

AWS often talks about scalability, the ability to grow and shrink the environment to meet demand.  That capability extends to security features as well!  Need another firewall?  Just add another Security Group; there is no need to install another device.  Adding another subnet, VPN, or firewall can be done in minutes with no action required from on-premise staff.  No more waiting while network cables are moved, hardware is installed, or devices are physically reconfigured when you need security updates.

Finally, no matter how secure an environment is, no security plan is complete without a remediation plan.  AWS has tools that provide remediation with little to no downtime.  Part of standard practice for AWS environments is to take regular snapshots of EC2 instances (servers).  These snapshots can be used to re-create a compromised or non-functional component in minutes, rather than going through the lengthy restore process for a traditional server.  Additionally, 2nd Watch recommends taking an initial image of each component so that in the event of a failure, there is a fallback point to a known-good configuration.
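
A minimal boto3 sketch of that remediation path, assuming a known-good AMI already exists; the AMI ID, instance type, and subnet are placeholders.

```python
# Minimal sketch: re-create a compromised or failed component from a
# known-good image rather than restoring it in place.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # the known-good baseline image
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",    # placeholder subnet
)
# Once the replacement is healthy, the compromised instance can simply be
# terminated: ec2.terminate_instances(InstanceIds=["i-..."])
```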

So how secure is secure?  With the ability to respond faster, scale as necessary, and recover in minutes, the Amazon Cloud is pretty darn secure!  And of course, this is only the tip of the iceberg for AWS Cloud security; more will follow throughout the rest of December here on our blog, and please check out the official link above for Amazon’s Security Center and whitepapers.

-Keith Homewood, Cloud Architect
