
How to Choose the Right Hyperscale Managed Service Provider (MSP)

One of the challenges many businesses struggle to overcome is keeping up with the massive (and ongoing) changes in technology and implementing best practices for managing them. The Public Cloud, in particular Hyperscale Cloud providers like AWS, has ushered in a new era of IT technology. This technology changes rapidly and is designed to provide businesses with building blocks that allow IT organizations to focus on innovation and growth, rather than on work that doesn't differentiate their business.

A Hyperscale Managed Services Provider (MSP) can help address a very important gap for many businesses that struggle to:

  • Keep up with the frenetic pace of change in Public Cloud
  • Define and use best practices to achieve superior results
  • Manage their infrastructure in the most efficient way possible

 

In most cases, Hyperscale MSPs have the deep expertise, technology, and automated capabilities to deliver high-quality managed services on a hyperscale platform. And because Hyperscale MSPs are solely focused on delivering capabilities on the cloud IaaS and PaaS platforms that today's enterprises are using, they are well versed in the best practices and standards needed to achieve the right results for their clients.

So, how do you go about selecting the right MSP?  The answer to this question is critical because we believe choosing the right MSP is one of the most important decisions you will make when consuming the public cloud.  It is also important to note that some of the qualifications to look for when selecting a Hyperscale MSP for your business needs are obvious, while others are more elusive.  I’ve included a few suggestions below to keep in mind when evaluating and selecting the right Hyperscale MSP.

Expertise on the Platform of Your Choice

First and foremost, no two public cloud providers are the same. Each implements everything from infrastructure and redundancy to automation and billing differently. Secondly, it isn't enough for a provider to tell you they have a few applications running on the platform. When looking to entrust someone with your most valuable assets, expertise is key! An important KPI for measuring the capabilities of an MSP that many businesses overlook is the provider's depth and breadth of experience. A qualified Hyperscale MSP will have the right certifications, accreditations, and certified engineer-to-customer ratios. You may feel good about signing with a large provider because they claim a higher number of certified engineers than the smaller firms, until you realize their certified engineer-to-customer ratio is out of whack. Having 200 certified engineers means nothing if you have 5,000+ customers. At 2nd Watch, we have more certified engineers than we do customers, and we like it that way.

The Focus is on Customer Value

This is an obvious recommendation, but it has some nuances. Many MSPs simply take the "your mess for less" approach to managing your infrastructure. Our customers tell us that one of the reasons they chose 2nd Watch was our focus on the things that matter to them. Many MSPs have the technical capability to manage Cloud infrastructure, but not all are able to focus on how an enterprise wants to use the Public Cloud. MSPs that understand their client's needs and goals tailor their approach to work for the enterprise, rather than forcing it to snap to some preconceived notion of how things should work. Find an MSP that is willing to make the Public Cloud work the way you want it to, and your overall experience, and the outcome, will be game changing.

Optimize, Optimize, Optimize

Moving to the Public Cloud is just the first step in the journey to realizing business value and transforming IT. The Cloud is dynamic in nature, so it is important not to stop at migration once you are using it. Adopting new instance types, adopting new services, or simply optimizing what you are running today are all great ways to keep your infrastructure in top shape. Make sure your MSP has a strong, ongoing story about optimization and how they will provide it. At 2nd Watch, we break optimization into three categories: Financial Optimization, Technical Optimization, and Operations Optimization. It is a good idea to ask your MSP how they handle these three facets of optimization and at what cadence. Keep in mind that some providers' pricing structures can act as a disincentive to optimize. For example, if your MSP's billing is based on a percentage of your total cloud spend and they reduce that bill by 30% through optimization, they are now getting paid proportionately less and are unlikely to do that kind of optimization regularly, because it hurts their revenue. Alternatively, we have also seen MSPs charge extra for these types of services, so the key is to ask whether optimization is included and get details about which services would be considered an extra charge.

Full Service

The final qualification to look for in a Hyperscale MSP is whether they are a full-service provider. Too often, pure-play MSPs are not able to provide a full-service offering under their own umbrella. The most common reason is that they lack the professional services to assess and migrate workloads, or the cloud architects to build out new functionality.

Our enterprise clients tell us that one of their major frustrations is having to work with multiple vendors on a project. With multiple vendors, it is difficult to keep track of who is accountable for what. Why would the vendor doing the migration be motivated to make sure the application is optimized for support if they aren't providing the support? I have heard horror stories of businesses trying to move to the cloud and becoming frustrated that multiple vendors are involved in the same workload, because the vendors blame each other for missing deadlines or failing to deliver key milestones or technical content. Your business will be better served by hiring an MSP who can run the full cloud-migration process, from workload assessment and migration to managing and optimizing your cloud infrastructure on an ongoing basis.

In addition to the tips I have listed above, 2nd Watch recommends utilizing Gartner’s Magic Quadrant to help evaluate the various public cloud managed service providers available to you. Gartner positioned 2nd Watch in the Leaders quadrant of the 2017 Gartner Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide for our completeness of vision and ability to execute.  You can download and read the full report here.

 

-Kris Bliesner, CTO

 

 

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document.


Tools: Managing Your AWS Environments

In the article "Increasing your Cloud Footprint" we discussed the phased approach of moving a traditional environment to Amazon Web Services (AWS). You start with low-risk workloads like archiving and backups, move on to workloads that are a more natural fit for the cloud like disaster recovery or development accounts, and finally create POCs for production workloads. By the time you reach production workloads and work out all the kinks, you should be operating full time in the cloud! OK, not quite, but you will have the experience and know-how to be comfortable with what works and what doesn't work for your organization in the cloud. Once the organization gets comfortable with AWS, it is a natural progression for more and more of the environment to migrate to the cloud. Before you know it, you have many workloads in AWS and might even have several accounts. The next big question is: what tools are available to manage the environment?

AWS provides users with several tools to manage their cloud environments. The main tools most people use when getting started are the AWS Console and the AWS CLI. The AWS Console gives you the ability to access and manage most AWS services through an intuitive web-based interface, while the CLI is a command-line tool you can use to manage services and automate actions with scripts. For developers, AWS provides SDKs that simplify using AWS services in applications, with APIs tailored to several programming languages and platforms, including Java, .NET, Node.js, PHP, Python, Ruby, Android, and iOS.
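
For instance, a minimal sketch using the AWS SDK for Python (boto3) can list the running EC2 instances in a region; the region and filter values here are assumptions used only for illustration:

import boto3

# Connect to EC2 in a single region (the region is an assumption for this example).
ec2 = boto3.client("ec2", region_name="us-east-1")

# Ask for only the instances that are currently running.
response = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

# Reservations group instances; walk both levels and print a simple summary.
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["InstanceType"])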

Along with the regular tools like the AWS Console, CLI, and APIs, AWS provides IDE Toolkits that integrate into your development environment. Both the AWS Toolkit for Eclipse and the AWS Toolkit for Visual Studio make it easier for developers to develop and deploy applications using AWS technologies.

The great thing about the AWS IDE Toolkits is that they are very useful even if you are not a developer. For example, if you manage multiple accounts mainly through the standard AWS Console, tasks like switching between accounts can become cumbersome and unwieldy. Either you have to log in and out of each environment through your browser, always checking to make sure you are executing commands in the right environment, or you have to use multiple browsers to separate the accounts. Either way, the process isn't optimal. The AWS Toolkit for Visual Studio (or Eclipse) solves this problem and can be handy for any AWS cloud administrator. The AWS Toolkit for Visual Studio is compatible with Visual Studio 2010, 2012, and 2013. To set up a new account, download the AWS Toolkit for Visual Studio here. Once installed, you add a user through the AWS Explorer Profile section seen here:

[Screenshot: AWS Explorer Profile section]

You can then add an account using a Display Name, Access Key ID, Secret Access Key, and Account number. You can add multiple AWS accounts as long as you have the Access Keys for a user with the ability to access resources.  See the Add Account box below:

[Screenshot: Add Account dialog]

Once you have entered credentials for multiple accounts, you can manage each account by pulling down the Account dropdown. As you can see below, I have two accounts, "2nd Watch Prod" and "2nd Watch Dev":

[Screenshot: Account dropdown]

Finally, you can manage the resources in the selected account by choosing the account you want active from the dropdown and then clicking on the corresponding AWS resource you would like to manage. In the example below, we are looking at the Amazon EC2 instances in the Ireland region for another account called "2nd Watch SandBox". You can quickly click on the Account dropdown to select another account and look at the instances associated with it. Suddenly, switching between accounts is manageable and you can focus on being more productive across all your accounts!

[Screenshot: Account dropdown with the EC2 Instances view]

The AWS Toolkit for Visual Studio is an extremely powerful tool. Not only is it great for integrating your development environment with AWS, it can also serve as a great way to manage your resources on AWS. There are many services you can manage with the AWS Toolkits but, be warned, they don't cover them all. For example, working with Auto Scaling groups has to be done using the CLI or the AWS console, as there is no AWS Toolkit support for it yet. If you are interested in the AWS Toolkit for Visual Studio, you can see the complete instructions here.
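
If you need to script around a gap like that, the same information is available outside the Toolkit; here is a minimal sketch using the AWS SDK for Python (boto3), with the region chosen only for illustration:

import boto3

# The Toolkit does not cover Auto Scaling, but the SDK and CLI do.
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Print each Auto Scaling group along with its current desired capacity.
for group in autoscaling.describe_auto_scaling_groups()["AutoScalingGroups"]:
    print(group["AutoScalingGroupName"], group["DesiredCapacity"])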

Overall, how you manage your AWS environment largely depends on how you want to interact with AWS services. If you like a GUI, the Console or the AWS Toolkits are a great match. If you prefer text-based interfaces, the AWS CLI and SDKs are a great way to interact with AWS. Each tool takes time to learn, but once you find the best one for your specific needs, you should see an increase in productivity that makes life on AWS that much easier.

– Derek Baltazar, Senior Cloud Engineer


A Refresher Course on Disaster Recovery with AWS

IT infrastructure is the hardware, network, services and software required for enterprise IT. It is the foundation that enables organizations to deliver IT services to their users. Disaster recovery (DR) is preparing for and recovering from natural and people-related disasters that impact IT infrastructure for critical business functions. Natural disasters include earthquakes, fires, etc. People-related disasters include human error, terrorism, etc. Business continuity differs from DR as it involves keeping all aspects of the organization functioning, not just IT infrastructure.

When planning for DR, companies must establish a recovery time objective (RTO) and recovery point objective (RPO) for each critical IT service. RTO is the acceptable amount of time in which an IT service must be restored. RPO is the acceptable amount of data loss, measured in time. For example, an RTO of four hours means the service must be back online within four hours of an outage, while an RPO of one hour means the business can afford to lose no more than the last hour of data. Companies establish both RTOs and RPOs to mitigate financial and other types of loss to the business. Companies then design and implement DR plans to effectively and efficiently recover the IT infrastructure necessary to run critical business functions.

For companies with corporate datacenters, the traditional approach to DR involves duplicating IT infrastructure at a secondary location to ensure available capacity in a disaster. The key downside is that IT infrastructure must be bought, installed, and maintained in advance to address anticipated capacity requirements. This often causes IT infrastructure in the secondary location to be over-procured and under-utilized. In contrast, Amazon Web Services (AWS) gives companies access to enterprise-grade IT infrastructure that can be scaled up or down for DR as needed.

The four most common DR architectures on AWS are:

  • Backup and Restore ($) – Companies can use their current backup software to replicate data into AWS, using Amazon S3 for short-term archiving and Amazon Glacier for long-term archiving. In the event of a disaster, data can be made available on AWS infrastructure or restored from the cloud back onto an on-premise server.
  • Pilot Light ($$) – While backup and restore focuses on data, pilot light includes applications. Companies provision only the core infrastructure needed for critical applications. When disaster strikes, Amazon Machine Images (AMIs) and other automation services are used to quickly provision the remaining environment for production.
  • Warm Standby ($$$) – Taking the pilot light model one step further, warm standby creates an active/passive cluster. The minimum amount of capacity is provisioned in AWS, and when needed the environment rapidly scales up to meet full production demands, giving companies near-100% uptime with little to no downtime.
  • Hot Standby ($$$$) – Hot standby is an active/active cluster with both cloud and on-premise components. Using weighted DNS load balancing, IT determines how much application traffic to process in-house and how much on AWS. If a disaster or spike in load occurs, more or all of the traffic can be routed to AWS with auto scaling (see the sketch after this list).
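
To make the weighted DNS piece of the hot standby option concrete, here is a minimal sketch using the AWS SDK for Python (boto3); the hosted zone ID, record name, and load balancer endpoint are hypothetical values used only for illustration:

import boto3

route53 = boto3.client("route53")

# Shift a larger share of traffic to the AWS side of the active/active pair.
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # hypothetical hosted zone
    ChangeBatch={
        "Comment": "Increase the weight of the AWS endpoint during a failover event",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "CNAME",
                "SetIdentifier": "aws-endpoint",
                "Weight": 80,  # the on-premise record would carry the remaining weight
                "TTL": 60,
                "ResourceRecords": [{"Value": "my-load-balancer.us-east-1.elb.amazonaws.com"}],
            },
        }],
    },
)

Lowering the weight again after the event shifts traffic back to the on-premise side.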

In a non-disaster environment, warm standby DR is not scaled for full production but is fully functional. To help absorb and justify the cost, companies can use the DR site for non-production work, such as quality assurance, testing, etc. For hot standby DR, cost is determined by how much production traffic is handled by AWS in normal operation. In the recovery phase, companies pay only for the additional capacity they use, and only for the duration the DR site runs at full scale. In hot standby, companies can further reduce the cost of their "always on" AWS servers with Reserved Instances (RIs).

Smart companies know disaster is not a matter of if, but when. According to a study done by the University of Oregon, every dollar spent on hazard mitigation, including DR, saves companies four dollars in recovery and response costs. In addition to cost savings, smart companies also view DR as critical to their survival. For example, 51% of companies that experienced a major data loss closed within two years (Source: Gartner), and 44% of companies that experienced a major fire never re-opened (Source: EBM). Again, disaster is not a matter of if, but when. Be ready.

-Josh Lowry, General Manager – West


An Introduction to CloudFormation


One of the most powerful services in the AWS portfolio is CloudFormation. It provides the ability to programmatically construct and manage a group of AWS resources in a predictable way. With CloudFormation, provisioning an AWS environment does not have to be done through single CLI commands or by clicking through the console; it can be driven by a JSON (JavaScript Object Notation) formatted text file, or CloudFormation template, that builds a few or several AWS resources into an environment automatically. CloudFormation works with many AWS resource types, from network infrastructure (VPCs, Subnets, Routing Tables, Gateways, and Network ACLs), to compute (EC2 and Auto Scaling), to database (RDS and ElastiCache), to storage (S3) components. You can see the full list here.

The general JSON structure looks like the following:

[Figure: General structure of a CloudFormation template]
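
As a bare-bones sketch, a template that includes every main section looks something like this:

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "A short description of what the template does",
  "Parameters" : { },
  "Mappings" : { },
  "Resources" : { },
  "Outputs" : { }
}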

A template has a total of six main sections: AWSTemplateFormatVersion, Description, Parameters, Mappings, Resources, and Outputs. Of these six, only Resources is required. However, it is always a good idea to include other sections like Description or Parameters. Each AWS resource type also has numerous properties that are used to configure and extend the functionality of that particular resource.

Breaking Down a CloudFormation Template

Here is a simple CloudFormation template provided by AWS.  It creates a single EC2 instance:

[Figure: Sample CloudFormation template that launches a single EC2 instance]
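
Reconstructed from the walkthrough below, the template looks roughly like this:

{
  "Description" : "Create an EC2 instance running the Amazon Linux 32 bit AMI.",
  "Parameters" : {
    "KeyPair" : {
      "Description" : "The EC2 Key Pair to allow SSH access to the instance",
      "Type" : "String"
    }
  },
  "Resources" : {
    "Ec2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "KeyName" : { "Ref" : "KeyPair" },
        "ImageId" : "ami-3b355a52"
      }
    }
  },
  "Outputs" : {
    "InstanceId" : {
      "Description" : "The InstanceId of the newly created EC2 instance",
      "Value" : { "Ref" : "Ec2Instance" }
    }
  }
}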

This template uses the Description, Parameters, Resources, and Outputs sections. The Description section is just a short description of what the template does; in this case it says the template will "Create an EC2 instance running the Amazon Linux 32 bit AMI." The next section, Parameters, allows a string value called KeyPair to be passed to the stack at launch time. When you launch the stack from the console, you would see the following dialog box where you specify all of the editable parameters for that specific launch of the template. In this case there is only one parameter, named KeyPair:

[Figure: Stack launch dialog showing the KeyPair parameter]

Notice how the KeyPair parameter is available for you to enter a string, along with the description of what you should type in the box: "The EC2 Key Pair to allow SSH access to the instance". This would be an existing key pair in the us-east-1 region that you would use to access the instance once it is launched.

Next, in the Resources section, "Ec2Instance" is defined as the name of the resource and given the AWS resource type "AWS::EC2::Instance". The resource type defines the kind of AWS resource the template will deploy at launch and lets you configure properties for that particular resource. In this example, only KeyName and ImageId are used; for the "AWS::EC2::Instance" type there are several additional properties you can use in CloudFormation, and you can see the full list here. Digging deeper, we see the KeyName value is a reference to the KeyPair parameter defined in the Parameters section of the template, allowing the instance that the template creates to use the key pair we specified at launch. Finally, the ImageId is ami-3b355a52, which is an Amazon Linux 32 bit AMI in the us-east-1 region, and is why we have to specify a key pair that exists in that region.

Finally, there is an Outputs section, which allows you to return values to the console describing the specific resources that were created. In this example the only output defined is "InstanceId", which is given both a description, "The InstanceId of the newly created EC2 instance", and a value, { "Ref" : "Ec2Instance" }, which is a reference to the resource that was created. As you can see in the picture below, the stack launched successfully and the instance ID i-5362512b was created.

[Figure: Stack outputs showing the newly created instance ID]

The Outputs section is especially useful for complex templates because it lets you summarize all of the pertinent information for your deployed stack in one location. For example, if you deployed dozens of machines in a complex SharePoint farm, you could use the Outputs section of the template to show just the public-facing endpoint, helping you quickly identify the relevant information needed to get into the environment.
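
Those outputs can also be read programmatically once the stack is running; here is a minimal sketch using the AWS SDK for Python (boto3), where the stack name and region are assumptions:

import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# Look up the stack (hypothetical name) and print every output it declared.
stack = cloudformation.describe_stacks(StackName="sharepoint-farm")["Stacks"][0]
for output in stack.get("Outputs", []):
    print(output["OutputKey"], "=", output["OutputValue"])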

CloudFormation for Disaster Recovery

The fact that CloudFormation templates construct an AWS environment in a consistent and repeatable fashion makes them the perfect tool for Disaster Recovery (DR). By configuring a CloudFormation template to contain all of your production resources, you can deploy the same set of resources in another AWS Availability Zone or in another Region entirely. Thus, if one set of resources became unavailable in a disaster scenario, a quick launch of a CloudFormation template would initialize a whole new stack of production-ready components. Built an environment manually through the console and still want to take advantage of CloudFormation for DR? You can use the CloudFormer tool, which helps you construct a CloudFormation template from existing AWS resources; you can find more information here. No matter how you construct your CloudFormation template, the final result is the same: a complete copy of your AWS environment in the form of a JSON document that can be deployed over and over.
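
A simple DR runbook built on this idea might do nothing more than launch the saved template in a recovery region. Here is a minimal sketch using the AWS SDK for Python (boto3); the region, file name, stack name, and key pair value are all assumptions:

import boto3

# Recovery region, template file, and parameter values are hypothetical.
cloudformation = boto3.client("cloudformation", region_name="us-west-2")

# Read the same template that describes the production environment.
with open("production-stack.json") as template_file:
    template_body = template_file.read()

# Launch a full copy of the environment as a new stack.
cloudformation.create_stack(
    StackName="production-dr",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "KeyPair", "ParameterValue": "dr-keypair"}],
)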

Benefits of CloudFormation

The previous example is a very simple illustration of a CloudFormation template on AWS.

Here are some highlights:

  1. With a CloudFormation template you can create identical copies of your resources repeatedly, eliminating complex deployment tasks that can otherwise take several hundred clicks in the console.
  2. All CloudFormation templates are simple JSON-structured files, so you can easily share them and work with them using your current source control processes and favorite editing tools.
  3. CloudFormation templates can start simple and grow over time, allowing the most complex environments to be deployed repeatedly, which makes them a great tool for DR.
  4. CloudFormation allows you to customize the AWS resources it deploys through Parameters that are editable at launch time. For example, if you are deploying an Auto Scaling group of EC2 instances within a VPC, a Parameter can let the creator select which instance size will be used when the stack is created.
  5. It can be argued, but the best part about CloudFormation is that it's free!

-Derek Baltazar, Senior Cloud Engineer
