
Managed Services for Cloud Infrastructure

Managed Services is the practice of outsourcing day-to-day management responsibilities of cloud infrastructure as a strategic method for improving IT operations and reducing expenses. Managed Services removes the burden of nuts-and-bolts, undifferentiated work and lets customers focus on value-creating activities that directly impact their core business. Drawing on its expertise in Amazon Web Services (AWS) and enterprise IT operations, 2nd Watch has built a best-in-class Managed Services practice with the following capabilities.

Management
* Escalation Management – Collaboration between Managed Services and professional services on problem resolution.
* Incident Management – Resolution of day-to-day tickets with response times based on severity.
* Problem Management – Root cause analysis and resolution for recurring incidents.

Monitoring
* Alarming Services – CPU utilization, data backup, space availability thresholds, etc.
* Reporting Services – Cost optimization, incidents, SLA, etc.
* System Reliability – 24×7 monitoring at a world-class Network Operations Center (NOC).

Other
* Audit Support – Data pulls, process documents, staff interviews, etc.
* Change Management – Software deployment, patching, etc.
* Service-Level Agreement – Availability and uptime guarantees, including 99.9% and 99.99%.

2nd Watch’s Managed Services practice provides a single support organization for customers running their cloud infrastructure on AWS. Customers benefit from improved IT operations and reduced expenses, and from Managed Services identifying problems before they become critical business issues that impact availability and uptime. Costs generally involve a setup or transition fee and an ongoing, fixed monthly fee, making support costs predictable. Shift your focus to value creation and let Managed Services do the undifferentiated heavy lifting.

All contents copyright © 2013, Josh Lowry. All rights reserved.

-Josh Lowry, General Manager West


DR Your DB

Databases tend to host the most critical data your business has. From orders, customers, and products to employee information, it is everything your business depends on. How much of that can you afford to lose?

With AWS you have options for database recovery depending on your budget and Recovery Time Objective (RTO).

Low budget/long RTO:

  • Whether you are in the cloud or on premises, you can use the AWS Command Line Interface (CLI) tools to script uploads of your database backups directly to S3. This can be added as a step to an existing backup job or run as an entirely new job (a short sketch follows below).
  • Another option is to use a third-party tool to mount an S3 bucket as a drive. It is possible to back up directly to the S3 bucket, but if you run into write issues you may need to write the backup locally and then move it to the mounted drive.

These methods have a longer RTO because they require you to stand up a new DB server and then restore the backups, but they are a low-cost way to ensure you can recover your business. The catch is that you can only restore to the last backup you have taken and copied to S3, so review your backup schedule to make sure you are comfortable with what you could lose. Also, use the native S3 lifecycle policies to purge old backups; otherwise your storage bill will slowly get out of hand.
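Here is a minimal sketch of that approach, assuming Python with boto3 and credentials that can write to the bucket; the bucket name, key prefix, local backup path, and 30-day retention are hypothetical values you would replace with your own.

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-db-backups"            # hypothetical bucket name
    BACKUP_FILE = "/backups/orders_db.bak"   # hypothetical local backup file

    # Copy the latest database backup to S3 as the final step of a backup job.
    s3.upload_file(BACKUP_FILE, BUCKET, "sql-backups/orders_db.bak")

    # Lifecycle rule so old backups are purged automatically and the storage
    # bill does not creep up; 30 days is just an example retention period.
    s3.put_bucket_lifecycle_configuration(
        Bucket=BUCKET,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-old-db-backups",
                    "Filter": {"Prefix": "sql-backups/"},
                    "Status": "Enabled",
                    "Expiration": {"Days": 30},
                }
            ]
        },
    )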

High budget/short RTO:

  • Almost all mainstream Relational Database Management Systems (RDBMS) have a native method of replication. You can set up an EC2 instance as a database server to replicate your database to. Replication can run in near real time, so the amount of transactional data at risk is minimal.
  • What about RDS? While you cannot use native RDBMS replication, there are third-party replication tools that will do Change Data Capture (CDC) replication directly to RDS. These can be easier to set up than the native replication methods, but you will want to monitor them so you do not end up in a situation where you can lose transactional data.
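One way to keep an eye on replication is a CloudWatch alarm, sketched below with Python/boto3 under the assumption that the CDC tool publishes its lag as a custom metric; the namespace, metric name, threshold, and SNS topic are all hypothetical.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Alarm if replication lag stays high, so you learn about it long before a
    # failover would mean losing transactional data.
    cloudwatch.put_metric_alarm(
        AlarmName="dr-replication-lag-high",
        Namespace="Custom/Replication",        # hypothetical custom namespace
        MetricName="ReplicationLagSeconds",    # hypothetical metric from the CDC tool
        Statistic="Maximum",
        Period=300,
        EvaluationPeriods=3,
        Threshold=600,                         # e.g. alert when lag exceeds 10 minutes
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-west-2:123456789012:dr-alerts"],  # hypothetical SNS topic
    )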

Since this is DR, you can lower the cost of these solutions by downsizing the RDS or EC2 instance. This increases the RTO, because you will need to manually resize the instances in the event of a failure, but it can be a significant cost saver. Both of these solutions require connectivity to the instance over VPN or Direct Connect.
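For the RDS case, the "resize at failover" step might look like the following Python/boto3 sketch; the instance identifier and instance classes are hypothetical. Resizing an EC2-based replica is similar in spirit: stop the instance, change its instance type, and start it again.

    import boto3

    rds = boto3.client("rds")

    # Promote the downsized DR copy to a production-sized instance class.
    rds.modify_db_instance(
        DBInstanceIdentifier="orders-db-dr",  # hypothetical DR instance
        DBInstanceClass="db.m3.xlarge",       # production-sized class
        ApplyImmediately=True,                # apply now, not at the maintenance window
    )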

Another benefit of this solution is that it can easily be used for QA, testing, and development needs. You can snapshot the RDS or EC2 instance and stand up a new one to work against. When you are done, just terminate it.
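For an RDS-based DR copy, that snapshot-and-restore workflow could be sketched as follows with Python/boto3; all identifiers and the instance class are hypothetical.

    import boto3

    rds = boto3.client("rds")

    # Take a snapshot of the DR instance.
    rds.create_db_snapshot(
        DBSnapshotIdentifier="orders-db-qa-snap",
        DBInstanceIdentifier="orders-db-dr",
    )
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="orders-db-qa-snap")

    # Stand up a temporary QA instance from the snapshot.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="orders-db-qa",
        DBSnapshotIdentifier="orders-db-qa-snap",
        DBInstanceClass="db.t2.medium",
    )

    # When testing is done, terminate the throwaway copy.
    rds.delete_db_instance(DBInstanceIdentifier="orders-db-qa", SkipFinalSnapshot=True)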

With all database DR solutions, make sure you script out the permissions and server configurations. These either need to be saved with the backups or applied to the RDS/EC2 instances ahead of time. They change constantly and can create recovery issues if you do not account for them.

With an AWS database recovery plan you can avoid losing critical business data.

-Mike Izumi, Cloud Architect


DR is Dead

Having been in the IT industry since the 90s, I have seen many iterations on Disaster Recovery principles and methodologies. The concept of DR far predates my tenure in the field, of course; the idea took shape in the 1970s as businesses began to realize their dependence on information systems and the criticality of those services.

Over the past decade or so we have seen the concept of running a DR site at a colo facility (either leased or owned) become a popular way for organizations to have a rapidly available disaster recovery option. The problem with a colo facility is that it is EXPENSIVE! In addition to potentially huge CapEx (if you are buying your own infrastructure), you have the facility and infrastructure OpEx and all the overhead expense of managing those systems and everything that comes along with that. In steps the cloud… AWS and the other players in the public cloud arena give you the ability to run a DR site with essentially no CapEx. You pay only for the virtual infrastructure you are actually using, as an operational cost.

An intelligently designed DR solution could leverage something like Amazon’s Pilot Light approach to keep costs down by running only the minimal core infrastructure needed to keep the DR site ready to scale up to production. That is a big improvement over purchasing millions of dollars of hardware and carrying thousands and thousands of dollars in OpEx and overhead costs every month. Even still, there is a better way. If you architect your infrastructure and applications following AWS best practices, then in a perfect world there is really no reason to have DR at all. By balancing your systems across multiple AWS regions and Availability Zones, designing your architecture and applications to handle unpredictable and cascading failure, and scaling automatically and elastically to meet increases and decreases in demand, you can effectively eliminate the need for DR. Your data and infrastructure are distributed in a way that is highly available and resilient to failure and to spikes or drops in demand. So in addition to inherent DR, you are getting HA and true capacity on demand. The whole concept of a disaster taking down a data center, and the subsequent effects on your systems, applications, and users, becomes irrelevant. It may take a bit of work to design (or redesign) an application for this cloud geo-distributed model, but from a business continuity, TCO, scalability, and uptime perspective it will pay off in spades.
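As a small illustration of that model, here is a Python/boto3 sketch of an Auto Scaling group spread across subnets in several Availability Zones, so failed capacity is replaced automatically; the AMI, subnet IDs, names, and sizes are hypothetical.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Launch configuration describing the instances the group will run.
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="web-lc",
        ImageId="ami-12345678",    # hypothetical AMI baked with the application
        InstanceType="m3.medium",
    )

    # Spread the group across subnets in three Availability Zones; if an
    # instance (or an entire zone) fails, replacements come up elsewhere.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchConfigurationName="web-lc",
        MinSize=2,
        MaxSize=10,
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-aaa11111,subnet-bbb22222,subnet-ccc33333",  # hypothetical subnets
    )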

That ought to put the proverbial nail in the coffin. RIP.

-Ryan Kennedy, Senior Cloud Engineer


Elasticity: Matching Capacity to Demand

According to IDC, a typical server utilizes an average of 15% of its capacity. That means 85% of a company’s capital investment can be categorized as waste. While virtualization can increase server utilization to as high as 80%, the company is still faced with 20% waste in the best-case scenario. The situation gets worse when companies have to forecast demand for specific periods, e.g., the holiday season in December. If they buy too much capacity, they overspend and create waste. If they buy too little, they create customer experience and satisfaction issues.

The elasticity of Amazon Web Services (AWS) removes the need to forecast demand and buy capacity up front. Companies can scale their infrastructure up and down as needed to match capacity to demand. Common use cases include: a) fast growth (new projects, startups); b) on and off (capacity needed only occasionally); c) predictable peaks (specific demand at specific times); and d) unpredictable peaks (demand exceeds capacity). Use the elasticity of AWS to eliminate waste and reduce costs compared with traditional IT, while providing better experiences and performance to users.
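One way to express "match capacity to demand" in practice is a target-tracking scaling policy, sketched below with Python/boto3 and assuming an existing Auto Scaling group; the group name and the 50% CPU target are hypothetical.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Target-tracking policy: add instances when average CPU rises above the
    # target and remove them when it falls, so capacity follows demand.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",           # hypothetical group
        PolicyName="keep-cpu-near-50-percent",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,
        },
    )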

-Josh Lowry, General Manager Western U.S.

 


99.99% Uptime Guaranteed with SLA

2nd Watch has launched its new and expanded managed service offering to guarantee 99.99 percent AWS uptime and to give customers increased control over their Service Level Agreement (SLA) options. Now, three distinct service levels — Select, Executive and Premier — allow customers to tailor 2nd Watch products and services to their specific enterprise needs.

Visit our Managed Services page for details.
