
DR is Dead

Having been in the IT industry since the '90s, I've seen many iterations of disaster recovery principles and methodologies. The concept of DR, of course, far predates my tenure in the field; the idea began taking shape in the 1970s as businesses realized how dependent they were on information systems and how critical those services had become.

Over the past decade or so, running a DR site at a colo facility (either leased or owned) has become a popular way for organizations to have a rapidly available disaster recovery option. The problem with a colo facility is that it is EXPENSIVE! In addition to potentially huge CapEx (if you are buying your own infrastructure), you have the facility and infrastructure OpEx, plus all the overhead of managing those systems and everything that comes along with that. In steps the cloud: AWS and the other players in the public cloud arena give you the ability to run a DR site with essentially no CapEx. You pay only for the virtual infrastructure you are actually using, as an operational cost.

An intelligently designed DR solution can leverage a pattern like AWS's Pilot Light to keep costs down by running only the minimal core infrastructure needed to keep the DR site ready to scale up to production. That is a big improvement over purchasing millions of dollars of hardware and carrying thousands upon thousands of dollars in OpEx and overhead every month. Even still, there is a better way. If you architect your infrastructure and applications following AWS best practices, then in a perfect world there is really no reason to have DR at all. By balancing your systems across multiple AWS regions and Availability Zones, designing architecture and applications to handle unpredictable and cascading failure, and scaling automatically and elastically to meet increases and decreases in demand, you can effectively eliminate the need for DR. Your data and infrastructure are distributed in a way that is highly available and resilient to failure or to spikes and drops in demand. So in addition to inherent DR, you are getting HA and true capacity on demand. The whole concept of a disaster taking down a data center, and the subsequent effects on your systems, applications, and users, becomes irrelevant. It may take some work to design (or redesign) an application for this geo-distributed cloud model, but I assure you that from a business continuity, TCO, scalability, and uptime perspective it will pay off in spades.
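The geo-distributed idea can be sketched in a few lines: traffic simply flows to whichever regions remain healthy, so losing a data center becomes a routing event rather than a disaster. The region names and health flags below are hypothetical, and in a real deployment DNS health checks (e.g., Route 53) and load balancers do this work, not application code; this is just a toy illustration.

```python
# Toy sketch of geo-distributed availability. Region names and
# health flags are hypothetical; real systems delegate this to
# DNS failover and load balancers, not application code.

regions = {"us-east-1": True, "us-west-2": True, "eu-west-1": True}

def healthy_endpoints(region_health):
    """Return the regions still able to serve traffic."""
    return [r for r, up in region_health.items() if up]

def route(request_id, region_health):
    """Pick a region for a request; service continues as long as
    at least one region is healthy."""
    live = healthy_endpoints(region_health)
    if not live:
        raise RuntimeError("total outage: no healthy regions")
    return live[request_id % len(live)]

# An entire region going dark degrades capacity, not availability.
regions["us-east-1"] = False
print(route(7, regions))  # still served, from a surviving region
```

The point of the sketch is that with more than one region in play, "the data center is down" stops being a disaster scenario and becomes a capacity question.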

That ought to put the proverbial nail in the coffin. RIP.

-Ryan Kennedy, Senior Cloud Engineer


Elasticity: Matching Capacity to Demand

According to IDC, a typical server runs at an average of 15% of its capacity, which means 85% of a company's capital investment can be categorized as waste. While virtualization can push utilization as high as 80%, the company is still left with 20% waste in the best case. The situation gets worse when companies have to forecast demand for specific periods, e.g., the holiday season in December. If they buy too much capacity, they overspend and create waste. If they buy too little, they create customer experience and satisfaction issues.

The elasticity of Amazon Web Services (AWS) removes the need to forecast demand and buy capacity up-front. Companies can scale their infrastructure up and down as needed to match capacity to demand. Common use cases include: a) fast growth (new projects, startups); b) on and off (occasionally need capacity); c) predictable peaks (specific demand at specific times); and d) unpredictable peaks (demand exceeds capacity). Use the elasticity of AWS to eliminate waste and reduce costs over traditional IT, while providing better experiences and performance to users.
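The cost difference between provisioning for peak and scaling with demand can be made concrete with a little arithmetic. The numbers below (server-hour rate, demand curve) are illustrative assumptions for a "predictable peak" workload, not actual AWS prices.

```python
# Toy comparison: provision for peak (traditional IT) vs. pay for
# what runs each hour (elastic). All figures are illustrative.
HOURLY_RATE = 1.0  # assumed cost per server-hour

# A "predictable peak" day: 4 servers for 20 hours, then 20 servers
# during a 4-hour peak (e.g., a nightly batch or a sales event).
hourly_demand = [4] * 20 + [20] * 4

# Traditional IT: buy enough capacity for the peak and run it all day.
fixed_daily = max(hourly_demand) * 24 * HOURLY_RATE

# Elastic: pay only for the servers actually running each hour.
elastic_daily = sum(hourly_demand) * HOURLY_RATE

print(f"fixed: ${fixed_daily:.0f}/day, elastic: ${elastic_daily:.0f}/day")
print(f"idle spend eliminated: {1 - elastic_daily / fixed_daily:.0%}")
```

The sharper the peak relative to baseline demand, the bigger the gap: with a fixed fleet, every off-peak hour of that peak-sized capacity is the same idle waste IDC describes.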

-Josh Lowry, General Manager Western U.S.



99.99% Uptime Guaranteed with SLA

2nd Watch has launched its new and expanded managed service offering to guarantee 99.99 percent AWS uptime and to give customers increased control over their Service Level Agreement (SLA) options. Now, three distinct service levels — Select, Executive and Premier — allow customers to tailor 2nd Watch products and services to their specific enterprise needs.

Visit our Managed Services page for details.


Extend your Datacenter into the AWS Cloud

Using the Cloud seems like a no-brainer these days for new development or new projects. Greenfield work is a great and easy place to start with the Cloud.

What if I have a datacenter?  Servers?  Existing infrastructure?  How do I leverage the Cloud?

At 2nd Watch, one of our core competencies is helping businesses leverage their existing investment in infrastructure alongside the new economics of cloud computing. Recently I co-hosted a webinar with AWS about how you can use AWS to extend your existing datacenter(s) for various purposes: extended capacity, batch processing, one-time heavy workloads, marketing websites, and more.

Below are the slides from the presentation. Or you can listen to a recording of the live webinar.



Security On the Cloud

As more and more companies migrate their IT infrastructure to the cloud, the main cloud-related concerns for businesses continue to be security, data control, and reliability. There are several factors to consider with any technological advancement. Most of these cloud-related concerns are not new and, with well-planned risk management, can be addressed to ensure data is both available and protected.

An ISACA Emerging Technology White Paper notes some common risk factors and solutions businesses should consider when making the move to the cloud.

  • Enterprises need to be particular in choosing a provider. Reputation, history and sustainability should all be factors to consider.
  • The dynamic nature of cloud computing may result in confusion as to where information actually resides. When information retrieval is required, this may create delays.
  • Public clouds allow high-availability systems to be developed at service levels often impossible to create in private networks. The downside to this availability is the potential for commingling of information assets with other cloud customers, including competitors.

Companies should have a risk management program that is able to deal with continuously evolving risks. An experienced provider can deliver useful strategies for mitigating these risks. For example, requirements for disaster recovery should be communicated between the business and the provider. Having a detailed Service Level Agreement will help the company manage its data once it migrates to the cloud as well as outline expectations regarding the handling, usage, storage and availability of information. Companies should also consider their security and management options when choosing a public, private or hybrid cloud. What are the pros and cons of each?

Public Cloud

  • Pros: Because infrastructure is maintained outside the organization, public clouds offer the greatest level of cost savings and efficiency and provide the ability to add capacity as needed. The public cloud has commoditized traditional technology infrastructure.
  • Cons: You share this cloud infrastructure with other users, potentially including competitors. Consider the sensitivity of the data to be stored on a public cloud and use encryption where required to protect corporate assets.

Private Cloud

  • Pros: Because infrastructure is maintained on a private network, private clouds offer the greatest level of security and control. You own not only the data but also the cloud that houses it.
  • Cons: Provides lower cost savings than a public cloud, and the infrastructure lifecycle has to be managed.

Hybrid Cloud

  • Pros: A hybrid cloud includes a mix of public and private storage and server infrastructure. Different parts of your business data can be stored on different clouds, providing high security or efficiency where each is needed.
  • Cons: You have to keep track of multiple platforms and ensure all parts can communicate with each other.

By keeping these factors in mind, you can ensure a smooth and successful transition to the cloud, with secure and easy access to your data.