
How to Choose the Right Hyperscale Managed Service Provider (MSP)

One of the challenges many businesses struggle to overcome is how to keep up with the massive (and ongoing) changes in technology and implement best practices for managing them. The Public Cloud, and in particular Hyperscale Cloud providers like AWS, has ushered in a new era of IT technology. This technology changes rapidly and is designed to provide businesses with building blocks that allow IT organizations to focus on innovation and growth rather than wrestle with things that don’t differentiate their business.

A Hyperscale Managed Services Provider (MSP) can help address a very important gap for many businesses that struggle to:

  • Keep up with the frenetic pace of change in Public Cloud
  • Define and use best practices to achieve superior results
  • Manage their infrastructure the most efficient way possible


In most cases, Hyperscale MSPs have the deep expertise, technology, and automated capabilities to deliver high-quality managed services on a hyperscale platform. And because Hyperscale MSPs are solely focused on delivering capabilities on the cloud IaaS and PaaS that today’s enterprises are using, they are well versed in the best practices and standards needed to achieve the right results for their clients.

So, how do you go about selecting the right MSP?  The answer to this question is critical because we believe choosing the right MSP is one of the most important decisions you will make when consuming the public cloud.  It is also important to note that some of the qualifications to look for when selecting a Hyperscale MSP for your business needs are obvious, while others are more elusive.  I’ve included a few suggestions below to keep in mind when evaluating and selecting the right Hyperscale MSP.

Expertise on the Platform of Your Choice

First and foremost, no two public cloud providers are the same. Each implements its platform differently, from infrastructure and redundancy to automation and billing concepts. Secondly, it isn’t enough for a provider to tell you they have a few applications running on the platform. When looking to entrust someone with your most valuable assets, expertise is key! An important KPI for measuring the capabilities of an MSP that many businesses overlook is the provider’s depth and breadth of experience. A qualified Hyperscale MSP will have the right certifications, accreditations, and certified engineer-to-customer ratios. You may feel good about signing with a large provider because they claim a higher number of certified engineers than the smaller firms, until…you realize their certified engineer-to-customer ratio is out of whack. Having 200 certified engineers means nothing if you have 5,000+ customers. At 2nd Watch, we have more certified engineers than we do customers, and we like it that way.

The Focus is on Customer Value

This is an obvious recommendation, but it has some nuances. Many MSPs simply take the “Your mess for less” approach to managing your infrastructure. Our customers tell us that one of the reasons they chose 2nd Watch was our focus on the things that matter to them. Many MSPs have the technical capabilities to manage Cloud infrastructure, but not all are able to focus on how an enterprise wants to use the Public Cloud. MSPs that understand their client’s needs and goals tailor their approach to work for the enterprise rather than forcing it into some preconceived notion of how these things should work. Find an MSP that is willing to make the Public Cloud work the way you want it to, and both your overall experience and the outcome will be game changing.

Optimize, Optimize, Optimize

Moving to the Public Cloud is just the first step in the journey to realizing business value and transforming IT. The Cloud is dynamic in nature, so it is important that you don’t stop at the migration once you are using it. Adopting new instance types, adopting new services, or simply tuning what you are running today are all great ways to ensure your infrastructure is running at its best. Make sure your MSP has a strong, ongoing optimization story and can explain how they will deliver on it. At 2nd Watch, we break optimization into three categories: Financial Optimization, Technical Optimization, and Operations Optimization. It is a good idea to ask your MSP how they handle these three facets of optimization and at what cadence. Keep in mind that some providers’ pricing structures can act as a disincentive to optimization. For example, if your MSP’s billing is based on a percentage of your total cloud spend and they reduce that spend by 30% through optimization, they are now getting paid proportionately less and are unlikely to pursue this type of optimization regularly, because it hurts their revenue. Alternatively, we have also seen MSPs charge extra for these types of services, so the key is to ask whether optimization is included and get details about which services would incur an extra charge.

Full Service

The final qualification to look for in a Hyperscale MSP is whether they are a full-service provider. Too often, pure-play MSPs are unable to provide a full-service offering under their own umbrella. The most common reason is that they lack the professional services to assess and migrate workloads or the cloud architects to build out new functionality.

Our enterprise clients tell us that one of their major frustrations is having to work with multiple vendors on a project. With multiple vendors, it is difficult to keep track of who is accountable for what. Why would the vendor performing the migration be motivated to make sure the application is optimized for support if they aren’t the ones providing the support? I have heard horror stories of businesses trying to move to the cloud and becoming frustrated that multiple vendors are involved on the same workload, because the vendors blame each other for missed deadlines or undelivered milestones and technical content. Your business will be better served by hiring an MSP who can run the full cloud-migration process, from workload assessment and migration to managing and optimizing your cloud infrastructure on an ongoing basis.

In addition to the tips I have listed above, 2nd Watch recommends utilizing Gartner’s Magic Quadrant to help evaluate the various public cloud managed service providers available to you. Gartner positioned 2nd Watch in the Leaders quadrant of the 2017 Gartner Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide for our completeness of vision and ability to execute.  You can download and read the full report here.


-Kris Bliesner, CTO


Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document.


Cloud Cost Complexity: Bringing the unknown unknowns to light

When we first speak to mid-size and large enterprises considering embracing the Amazon Web Services (AWS) cloud, the same themes come up consistently. Sometimes it comes out explicitly and sometimes it is just implied, but one thing nearly all are apprehensive about is their discomfort with “unknown unknowns” (the stuff you don’t even know that you don’t know). They recognize that AWS represents a paradigm shift in how IT services are provisioned, operated, and paid for, but they don’t know where that shift might trip them up or where it will create gaps in their existing processes. This is a great reason to work with an AWS Premier Partner, but that is a story for another day.

Let’s talk about one of the truly unknown unknowns – AWS Cost Accounting.  The pricing for Amazon Web Services is incredibly transparent.  The price for each service is clearly labeled online and publicly available.  Amazon’s list prices are the same for all customers, and the only discounts come in the form of volume discounts based on usage, or Reserved Instances (RIs).  So if all of this is so transparent, how can this be an unknown unknown?  The devil is in the details.

The scenario nearly always plays out the same way. An enterprise starts by dipping a toe into the AWS waters. It starts with one account, then two or three. Six months later they have 10 or 20 AWS accounts. This is a good thing; AWS is designed to be easy to consume – nothing more than a credit card is required to get started. The challenge comes when your organization moves to consolidated invoice billing. Your organization may be doing this because you want central procurement to manage the payments, you want to pool your volume for discounts, or it may be as simple as wanting it off your credit card. Whatever the reason, you now have an AWS bill that might not be what was expected (the unknown unknown).

If you have ever seen an AWS bill, you know they contain a phenomenal amount of useful information. Amazon provides a monthly spreadsheet with every line item billed for the period, in amazing detail and precision. The downside of this wealth of information is that once you start accumulating several AWS accounts on the same consolidated bill, the bill becomes exponentially more difficult to rationalize, and tracking your costs gets much harder.
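
To make that concrete, even a short script can roll that spreadsheet up into per-account and per-service totals. Below is a minimal sketch in Python using pandas; the column names ("RecordType", "LinkedAccountId", "ProductName", "UnBlendedCost") follow the legacy detailed billing CSV format and may differ depending on the report version you receive.

```python
# Minimal sketch: summarizing a consolidated AWS detailed billing report.
# Column names follow the legacy detailed billing CSV format and may vary
# by report version; adjust them to match the file you actually receive.
import pandas as pd

bill = pd.read_csv("detailed_billing_report.csv", low_memory=False)

# Keep only real line items; the report also carries invoice/account totals
line_items = bill[bill["RecordType"] == "LineItem"]

# Total spend per linked account on the consolidated bill
by_account = line_items.groupby("LinkedAccountId")["UnBlendedCost"].sum()

# Spend per service within each account, largest first
by_service = (
    line_items.groupby(["LinkedAccountId", "ProductName"])["UnBlendedCost"]
    .sum()
    .sort_values(ascending=False)
)

print(by_account)
print(by_service.head(20))
```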

In contrast to the unknown unknown, the ability to accurately attribute per-workload costs is one of AWS’ best features and a strong draw to the platform. For many organizations, the ability to provide showback or chargeback bills to business units is extraordinarily valuable. Once a business unit can see the direct costs of its IT resources, it can make more informed business decisions. It is amazing how often HA and DR requirements get adjusted when a business unit can calculate the cost/benefit of each option.
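
If your resources carry cost-allocation tags, a showback report can be generated programmatically. Here is a minimal sketch using the Cost Explorer API via boto3; the tag key "CostCenter" and the date range are illustrative assumptions, and the tag must first be activated as a cost-allocation tag in the billing console.

```python
# Minimal sketch: a showback query with the Cost Explorer API (boto3).
# The "CostCenter" tag key and the date range below are illustrative.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2017-01-01", "End": "2017-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "CostCenter"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]  # returned as "CostCenter$<value>"
    cost = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{tag_value}: ${float(cost):,.2f}")
```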

Along with the apprehension of unknown unknowns, many organizations are both excited and a little scared of moving to a truly variable cost model. They are used to knowing what their costs are (even if they are over-provisioned). The idea that they won’t know what a workload will cost until it is up and running on AWS can be a scary one. This fear can be flipped into a virtue – try it! Run a quick POC and test the workload for performance, cost, etc. See if it works for your use case. If it does, great; if not, it didn’t cost much to find out.

Managing your costs in AWS means more than just deciphering your bill this month.  It also means the ability to track historical spend by service and interpret the results.  Business units need to understand why their portion of the bill is going up or down and what is driving the change.
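
One way to surface those drivers is a simple month-over-month variance report. The sketch below assumes monthly per-service totals have already been extracted from the bill (for example, via the summary script earlier); the figures are made up for illustration.

```python
# Minimal sketch: month-over-month spend variance by service.
# The "monthly" DataFrame stands in for totals extracted from real billing
# data; all figures here are illustrative placeholders.
import pandas as pd

monthly = pd.DataFrame({
    "month":   ["2017-05", "2017-05", "2017-06", "2017-06"],
    "service": ["AmazonEC2", "AmazonS3", "AmazonEC2", "AmazonS3"],
    "cost":    [12400.00, 1800.00, 14900.00, 1750.00],
})

pivot = monthly.pivot(index="service", columns="month", values="cost")
pivot["delta"] = pivot["2017-06"] - pivot["2017-05"]
pivot["pct_change"] = 100 * pivot["delta"] / pivot["2017-05"]

# Services driving the biggest increase float to the top of the report
print(pivot.sort_values("delta", ascending=False))
```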

The solution to the cost accounting challenge is to use a cost accounting tool specific to AWS. As Amazon is quick to point out, the pricing model, while transparent, is also fluid. They have dropped pricing on various services more than 50 times in the last few years. To effectively manage AWS costs, users want a comprehensive solution that can take a consolidated bill and make it easy to generate real insights. Most on-premises or co-located environments cannot match the cost granularity and accuracy that AWS delivers when paired with a properly implemented cost accounting tool. With the right tool you can take one of the unknown unknowns and make it a powerful advantage for your journey to the public cloud!

2nd Watch offers software and services that simplify your cloud billing as part of our Managed Billing solution. This solution expands upon our industry-leading cloud accounting platform with a trained concierge to help facilitate billing questions, making it easier to analyze, budget, track, forecast, and invoice the cost of the public cloud. Our Managed Billing Service lets you accurately allocate deployment expenses to your financial reporting structure and provides business insights through detailed usage analytics and budget reporting. We offer these services for free to our Managed Services customers. Find out more at www.2ndwatch.com/Managed-Cloud.

-By Marc Kagan, Managed Cloud Specialist


Introducing the Scheduled Reserved Instance

Amazon Web Services will continue to lower its prices for IaaS (Infrastructure as a Service) and PaaS (Platform as a Service) for a number of reasons. But that doesn’t mean your public cloud costs will go down over time. Over the past two years I’ve seen SMBs and enterprise firms surprised by rising cloud costs despite the falling rates. How does this happen? And how can your business get ahead of the problem?

Let’s examine how AWS can lower its rates over and over again.

First is the concept of capacity planning, which is much different in the public cloud compared to the traditional days of voice and data infrastructure. In the “ole days” we used the 40-60-80 rule. Due to the lengthy lead times to order circuits, rack equipment, run cables, and go live, enterprise IT organizations used 40-60-80 as triggers for when to act on new capacity-building activities. At 40% utilization, the business would begin planning for capacity expansion. At 60% utilization, new capacity would be ordered. At 80% utilization, the new capacity would be turned up and ready for go-live. All this time, IT planners would run from Business Unit to Business Unit trying to gather usage forecasts and growth plans for the next 12-24 months. It was a never-ending cycle. Wow – that was exhausting!

Second is the well-known concept of Economies of Scale, which affords AWS cost advantages due to the sheer size, scale and output of its operations globally. Simply put, more customers will lead to more usage, and Amazon’s fixed costs will be spread over more customers. As a result, the cost per unit (EC2 usage hour, Mbps of Data Transfer, or Gigabyte of S3 storage) will decrease. A lower cost per unit allows Amazon to safely lower its prices and lead the market in public cloud adoption.

In the public cloud world, Amazon can gauge customer commitment, capacity planning, and growth estimates by offering reservations for its infrastructure – otherwise known as Reserved Instances. Historically, Reserved Instances have come in six variations: three payment options – No Upfront, Partial Upfront, and Full Upfront (referring to the initial down payment amount) – each offered with a 1-year or 3-year commitment. No Upfront RIs have the lowest discount factor over the commitment term, and Full Upfront RIs have the highest. With the help of Reserved Instances, AWS has been able to plan its capacity in each region by offering customers a discount for their extended commitment. Genius!

But it gets better. In January, AWS released a new type of Reserved Instance that gives the customer more control over scheduling and provides Amazon with more insight into what time of day the resource will be used. Why is this new “Scheduled Reserved Instance” important?

Well, a traditional RI is most effective when the instance runs all day and all year. There is a breakeven point for each RI type, but for simplicity let’s assume that the resource should be always-on to achieve the maximum savings.
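
For a sense of how that breakeven works, here is a minimal back-of-the-envelope sketch; all prices are hypothetical placeholders, not actual AWS rates.

```python
# Minimal sketch of an RI breakeven calculation. All prices below are
# hypothetical placeholders, not actual AWS rates.
HOURS_PER_YEAR = 8760

on_demand_hourly = 0.10   # $/hr on-demand, hypothetical
ri_upfront = 300.00       # partial-upfront payment, hypothetical
ri_hourly = 0.04          # discounted hourly rate, hypothetical

# A partial-upfront RI's hourly charge accrues for every hour in the term,
# whether or not the instance runs, so the annual RI cost is fixed.
ri_annual_cost = ri_upfront + ri_hourly * HOURS_PER_YEAR

# Breakeven: the usage level at which on-demand would have cost the same
breakeven_hours = ri_annual_cost / on_demand_hourly
print(f"RI breaks even at {breakeven_hours:,.0f} hours/year "
      f"({100 * breakeven_hours / HOURS_PER_YEAR:.0f}% utilization)")
```

With these placeholder rates, the RI only pays off above roughly 74% utilization, which is why an always-on workload is the classic fit for a Standard RI.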

However, a Scheduled Reserved Instance allows the customer to designate which hours of which days the resource will run. Common use cases include month-end reporting, daily financial risk calculations, nightly genome sequencing, or any regularly scheduled batch job.

Before the Scheduled RI, the customer had three options: (1) run the compute on-demand and pay the highest price, (2) reserve the compute with a Standard RI and waste the time when the job isn’t running (known as spoilage), or (3) try to run it on Spot Instances and hope their bid is met with available capacity. Now there’s a fourth option – the Scheduled Reserved Instance. Savings are lower, typically in the 5-10% range compared to on-demand rates, but the customer has incredible flexibility in scheduling hours and recurrence. Oh yeah – and did I mention that AWS can now sell even more excess capacity at a discount?
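
Mechanically, buying scheduled capacity is a two-step process: search for available slots that match your recurrence, then purchase using the token returned with the matching slot. Below is a minimal boto3 sketch; the weekly 8-hour Saturday window and the c4.large instance type are illustrative assumptions, and availability varies by region.

```python
# Minimal sketch: finding and purchasing Scheduled Reserved Instance
# capacity with boto3. The weekly Saturday 8-hour window and c4.large
# instance type are illustrative assumptions, not a recommendation.
from datetime import datetime, timedelta

import boto3

ec2 = boto3.client("ec2")

# Search for weekly slots (days are 1-7 with 1 = Sunday, so 7 = Saturday)
slots = ec2.describe_scheduled_instance_availability(
    Recurrence={"Frequency": "Weekly", "Interval": 1, "OccurrenceDays": [7]},
    FirstSlotStartTimeRange={
        "EarliestTime": datetime.utcnow() + timedelta(days=1),
        "LatestTime": datetime.utcnow() + timedelta(days=8),
    },
    MinSlotDurationInHours=8,
    Filters=[{"Name": "instance-type", "Values": ["c4.large"]}],
)

offering = slots["ScheduledInstanceAvailabilitySet"][0]
print(offering["InstanceType"], offering["HourlyPrice"])

# Each availability result carries a PurchaseToken used to buy the schedule
ec2.purchase_scheduled_instances(
    PurchaseRequests=[{"PurchaseToken": offering["PurchaseToken"],
                       "InstanceCount": 1}]
)
```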

With so many options available to AWS customers, it’s important to find an AWS Premier Partner that can analyze each cloud workload and recommend the right mix of cost-reducing techniques. Whether the application usage pattern is steady state, spiky predictable, or uncertain-unpredictable, there is a combination of AWS solutions designed to save money and still maintain performance. Contact 2nd Watch today to learn more about Cloud Cost Optimization Reports.

-Zach Bonugli, Managed Cloud Specialist
