One of the challenges that many businesses struggle to overcome is how to keep up with the massive (and ongoing) changes in technology and implement best practices for managing them. The Public Cloud—in particular, Hyperscale Cloud providers like AWS—has ushered in a new era of IT technology. This technology changes rapidly and is designed to provide businesses with the building blocks that allow IT organizations to focus on innovation and growth, rather than messing with things that don’t differentiate their business.
A Hyperscale Managed Services Provider (MSP) can help address a very important gap for many businesses that struggle to:
- Keep up with the frenetic pace of change in Public Cloud
- Define and use best practices to achieve superior results
- Manage their infrastructure the most efficient way possible
In most cases, Hyperscale MSPs have the deep expertise, technology, and automated capabilities to deliver high-quality managed services on a hyperscale platform. And because Hyperscale MSPs are solely focused on delivering capabilities on the cloud IaaS and PaaS that today’s enterprises are using, they are well versed in the best practices and standards needed to achieve the right results for their clients.
So, how do you go about selecting the right MSP? The answer to this question is critical because we believe choosing the right MSP is one of the most important decisions you will make when consuming the public cloud. It is also important to note that some of the qualifications to look for when selecting a Hyperscale MSP for your business needs are obvious, while others are more elusive. I’ve included a few suggestions below to keep in mind when evaluating and selecting the right Hyperscale MSP.
Expertise on the Platform of Your Choice
First and foremost, no two public cloud providers are the same. Each implements its platform differently—from infrastructure and redundancy to automation and billing concepts. Secondly, it isn’t enough for a provider to tell you they have a few applications running on the platform. When looking to entrust someone with your most valuable assets, expertise is key! An important KPI for measuring the capabilities of an MSP that many businesses overlook is the provider’s depth and breadth of experience. A qualified Hyperscale MSP will have the right certifications, accreditations, and certified engineer-to-customer ratio. You may feel good about signing with a large provider because they claim a higher number of certified engineers than the smaller firms, until…you realize their certified engineer-to-customer ratio is out of whack. Having 200 certified engineers means nothing if you have 5,000+ customers. At 2nd Watch, we have more certified engineers than we do customers, and we like it that way.
The Focus is on Customer Value
This is an obvious recommendation, but it does have some nuances. Many MSPs will simply take the “Your mess for less” approach to managing your infrastructure. Our customers tell us that one of the reasons they chose 2nd Watch was our focus on the things that matter to them. Many MSPs have the technical capabilities to manage Cloud infrastructure, but not all are able to focus on how an enterprise wants to use the Public Cloud. MSPs that understand their clients’ needs and goals tailor their approach to work for the enterprise, rather than making them snap to some preconceived notion of how these things should work or function. Find an MSP that is willing to make the Public Cloud work the way you want it to, and your overall experience, and the outcome, will be game-changing.
Optimize, Optimize, Optimize
Moving to the Public Cloud is just the first step in the journey to realizing business value and transforming IT. The Cloud is dynamic in nature, so it is important that you don’t rest on a migration alone once you are using it. Adopting new instance types, adopting new services, or simply optimizing what you are running today are all great ways to ensure your infrastructure is running at its best. Make sure your MSP has a strong, ongoing story about optimization and how they will provide it. At 2nd Watch, we break optimization into three categories: Financial Optimization, Technical Optimization, and Operations Optimization. It is a good idea to ask your MSP how they handle these three facets of optimization and at what cadence. Keep in mind that some providers’ pricing structures can act as a disincentive for optimization. For example, if your MSP’s billing is based on a percentage of your total cloud spend and they reduce that bill by 30% through optimization efforts, they are now getting paid proportionately less, so they are unlikely to be motivated to optimize on a regular basis, as it hurts their revenue. Alternatively, we have also seen MSPs charge extra for these types of services, so the key is to ask whether optimization is included and get details about which services would be considered an extra charge.
The final qualification to look for in a Hyperscale MSP is whether they are a full-service provider. Too often, pure-play MSPs are not able to provide a full-service offering under their umbrella. The most common reason is that they lack the professional services to assess and migrate workloads, or the cloud architects to build out new functionality.
Our enterprise clients tell us that one of their major frustrations is having to work with multiple vendors on a project. With multiple vendors, it is difficult to keep track of who is accountable for what. Why would the vendor performing the migration be motivated to make sure the application is optimized for support if they aren’t providing that support? I have heard horror stories of businesses trying to move to the cloud and becoming frustrated that multiple vendors are involved on the same workload, because the vendors blame each other for missing deadlines or failing to deliver key milestones or technical content. Your business will be better served by hiring an MSP who can run the full cloud-migration process—from workload assessment and migration to managing and optimizing your cloud infrastructure on an ongoing basis.
In addition to the tips I have listed above, 2nd Watch recommends utilizing Gartner’s Magic Quadrant to help evaluate the various public cloud managed service providers available to you. Gartner positioned 2nd Watch in the Leaders quadrant of the 2017 Gartner Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide for our completeness of vision and ability to execute. You can download and read the full report here.
-Kris Bliesner, CTO
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document.
2nd Watch is honored to be named a leader in the 2017 Gartner “Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide” report. We want to thank our customers that have partnered with us throughout the years and our employees who are key to 2nd Watch’s continued success and the success of our customers.
What are the contributing factors to our success?
- One of the longest track records in AWS consulting services and a very close partnership with AWS. We understand the AWS environment and how to best operate within it, and we have numerous customer case studies, large-scale implementations, and a solid track record of positive customer experiences and strong customer retention. We have in-depth expertise in helping traditional businesses launch and operate digital business offerings.
- A well-established cloud migration factory. Our professional services help enterprise customers with cloud readiness assessments, security assessments and cloud governance structures. We also assist customers with re-engineering IT processes for the cloud and integrating cloud processes with other established business processes.
- Our Cloud Management Platform, which provides policy-driven governance capabilities while still allowing the customer to fully exploit the underlying cloud platform.
Gartner positioned 2nd Watch in the Leaders quadrant for its ability to execute and completeness of vision. We are all-in with AWS Cloud and are committed to the success of our clients as evidenced in our use cases.
Some of the world’s largest enterprises partner with 2nd Watch for our ability to deliver tailored and integrated management solutions that holistically and proactively encompass the operating, financial, and technical requirements for public cloud adoption. In the end, customers gain more leverage from the cloud with a lot less risk.
We look forward to continued success in 2017 and beyond through successful customer implementations and ongoing management. To find out how we can partner with your company, visit us here.
Access Gartner’s “Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide” report, compliments of 2nd Watch.
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.
Implementing Cloud Infrastructure in the Enterprise is not easy. An organization needs to think about scale, integration, security, compliance, results, reliability and many other factors. The pace of change pushes us to stay on top of these topics to help our organization realize the many benefits of Cloud Infrastructure.
Think about this in terms of running a race. The race has not changed – there are still hurdles to be cleared – hurdles before the race in practice and hurdles on the track during prime time. We bucket these hurdles into two classes: pre-adoption and operational.
Pre-adoption hurdles come in the form of all the things required to make Cloud Infrastructure a standard in your enterprise. A big hurdle we often see is the lack of a clear roadmap and strategy around Cloud. What applications will be moving, and when? When will new applications be built on the Cloud? What can we move without refactoring? Another common hurdle is standards. How do you ensure your enterprise can order the same thing over and over, blessed by Enterprise Architecture, Security, and your lawyers? Let’s examine these two major pre-adoption hurdles.
Having a clear IT strategy around Cloud Computing is key to getting effective enterprise adoption. Everyone from the CIO to the System Admin should be able to tell you how your organization will be consuming Cloud and what their role in the journey will be. In our experience at 2nd Watch, this typically involves a specific effort to analyze your current application portfolio for benefits and compatibility in the Cloud. We often help our customers define a classification matrix of applications and workloads that can move to the Cloud and categorize them into classes of applications based on the effort and benefits received from moving workloads to the Cloud. Whether you have a “Cloud First,” “Cloud Only” or another strategy for leveraging Cloud, the important thing is that your organization understands the strategy and is empowered to make the changes required to move forward.
Standardization is a challenge when it comes to implementing Cloud Computing. There are plenty of Cloud Service Providers, and there are no common standards for implementations. The good news is that AWS is quickly becoming the de facto standard for Cloud Infrastructure, and other providers are starting to follow suit.
2nd Watch works closely with our customers to define standards we call “Reference Architectures” to enable consistency in Cloud usage across business units, regions, etc. Our approach is powered by CloudFormation and made real by CloudTrail, enabling us to deploy standard infrastructure and be notified when someone makes a change to the standard in production (or Test/Dev, etc.). This is where the power of AWS really shines.
Imagine a service catalog of all the different application and technology stacks that you need to deploy in your enterprise – now imagine having an automated way to deploy those standards quickly and easily, in minutes instead of days, weeks, or months. Standards will pay dividends in helping your organization consume Cloud and maintain existing compliance and security requirements.
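As a rough illustration of that idea, here is a minimal sketch of launching an approved CloudFormation template with the AWS SDK for Python (boto3). The stack name, template URL, and parameter names are hypothetical placeholders, not an actual 2nd Watch catalog.

```python
def build_parameters(params: dict) -> list:
    """Convert a plain dict into the Parameters list CloudFormation expects."""
    return [{"ParameterKey": k, "ParameterValue": v} for k, v in params.items()]


def deploy_reference_stack(stack_name: str, template_url: str, params: dict):
    """Launch a stack from an approved template.

    Assumes boto3 is installed and AWS credentials are configured.
    """
    import boto3  # imported locally so build_parameters works without boto3

    cfn = boto3.client("cloudformation")
    return cfn.create_stack(
        StackName=stack_name,
        TemplateURL=template_url,  # hypothetical, pre-approved template location
        Parameters=build_parameters(params),
    )
```

Wrapping this in a self-service portal or pipeline is what turns a pile of templates into a true service catalog.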
Operational hurdles for Cloud Computing come about due to the different types of people, processes and technology. Do the people who support your IT infrastructure understand the new technology involved in managing Cloud infrastructure? Do you have the right operational processes in place to deal with incidents involving Cloud infrastructure? Do you have the right technology to help you manage your cloud infrastructure at enterprise scale?
Here are some people related questions to ask yourself when you are looking to put Cloud infrastructure to work in your enterprise:
- How does my IT organization have to change when I move to the cloud?
- What new IT roles are going to be required as I move to the cloud?
- What type of training should be scheduled and who should attend?
- Who will manage the applications after they are moved to the cloud?
People are critical to the IT equation, and the Cloud requires IT skills and expertise. It has been our experience that organizations that take the people component seriously have a much more effective and efficient Cloud experience than those who might address it after the fact or with less purpose. Focus on your people – make sure they have the training and support they need to ensure success once you are live in the Cloud.
Cloud infrastructure uses some of the same technology your enterprise deploys today – virtualization, hypervisors, hardware, network, etc. The difference is that the experts are managing the core components and letting you build on top. This is a different approach to infrastructure and requires enterprise IT shops to consider what changes will need to be made to their process to ensure they can operationalize Cloud computing. An example: How will your process deal with host management issues like needing to reboot a group of servers if the incident originates from a provider instead of your own equipment?
Finally, technology plays a big role in ensuring a successful Cloud infrastructure implementation. As users request new features and IT responds with new technology, thought needs to be given to how the enterprise will manage that technology. How will your existing management and monitoring tools connect to your Cloud infrastructure? Which pieces of the datacenter will your tools be unable to reach? When will you have to use Cloud Service Provider plugins vs. your existing toolset? What can you manage with your existing tools? How do you take advantage of the new infrastructure, including batch scheduling, auto-scaling, reference architectures, etc.? Picking the right management tools and technology will go a long way toward providing the real benefits of Cloud Infrastructure.
At 2nd Watch we believe that Enterprise Architecture (in the broad sense) is relevant regardless of the underlying technology platform. It is true that moving from on-premises infrastructure to Cloud reduces the number of things demanding our focus – Amazon Web Services vs. Cisco, Juniper, F5, IBM, HP, Dell, EMC, NetApp, etc.
This is the simplicity of it – the number of vendors and platforms to deal with as an IT person is shrinking, and thank goodness! But, we still need to think about how to best leverage the technology at hand. Cloud adoption will have hurdles. The great news is that together we can train ourselves to clear them and move our businesses forward.
-Kris Bliesner, CTO
One of the things “everyone knows” about migrating to the Cloud is that it saves companies money. Now you don’t need all those expensive data centers with the very physical costs associated with them. So companies migrate to the Cloud and are so sure they will see their costs plummet… then they get their bill for Cloud usage and experience sticker shock. Typically, this is when our customers reengage 2nd Watch – they ask us why it costs so much, what they can do to decrease their costs, and of course everyone’s favorite – why didn’t you tell me it would be so much?
First, in order to know why you are spending so much you need to analyze your environment. I’m not going to go into how Amazon bills and walk you through your entire bill in this blog post. That’s something for another day perhaps. What I do want to look into is how to enable you to see what you have in your Cloud.
Step one: tag it! Amazon gives you the ability to tag almost everything in your environment, including ELBs, the most recent addition. I always recommend that my customers make use of this feature. Personally, whenever I create something manually or programmatically, I add tags to identify what it is, why it’s there, and of course who is paying for it. Even in my sandbox environment, it’s a way to tell colleagues, “Don’t delete my stuff!” Programmatically, tags can be added through CloudFormation, Elastic Beanstalk, Auto Scaling, and the CLI, as well as third-party tools like Puppet and Chef. From a feature perspective, there are very few AWS components that don’t support tags, and more are constantly being added.
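To make that concrete, here is a hedged sketch of tagging an EC2 instance with the Python SDK (boto3). The tag keys (Owner, Project, Environment) are illustrative examples, not a prescribed convention, and the instance ID would be your own.

```python
def build_tags(owner: str, project: str, environment: str) -> list:
    """Build the Tags list the EC2 CreateTags API expects."""
    return [
        {"Key": "Owner", "Value": owner},
        {"Key": "Project", "Value": project},
        {"Key": "Environment", "Value": environment},
    ]


def tag_instance(instance_id: str, tags: list) -> None:
    """Apply tags to an instance.

    Assumes boto3 is installed and AWS credentials are configured.
    """
    import boto3  # imported locally so build_tags works without boto3

    ec2 = boto3.client("ec2")
    ec2.create_tags(Resources=[instance_id], Tags=tags)
```

The same `create_tags` call accepts most taggable resource IDs (volumes, AMIs, etc.), which is why a consistent tag scheme pays off across the whole environment.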
That’s all well and good, but how does this help analytics? Tagging is actually the basis for pretty much all analytics, and without it you have to work much harder for far less information. For example, I can tag EC2 instances to indicate applications, projects, or environments. I can then run reports that look for specific tags – how many EC2 instances are associated with Project X, and what are their instance types? What are the business applications using my various RDS instances? – and suddenly, when you get your bill, you have the ability to determine who is spending money in your organization and work with them on spending it smartly.
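The “instances for Project X, by type” report above can be sketched in a few lines of boto3. The Project tag name is an example, and `report` assumes configured AWS credentials; the two helpers are pure so you can test the logic offline.

```python
from collections import Counter


def project_filter(project: str) -> list:
    """Build a DescribeInstances filter matching a Project tag value."""
    return [{"Name": "tag:Project", "Values": [project]}]


def count_instance_types(reservations: list) -> Counter:
    """Summarize instance types from a DescribeInstances response."""
    return Counter(
        inst["InstanceType"]
        for res in reservations
        for inst in res["Instances"]
    )


def report(project: str) -> Counter:
    """Count instance types for one project (assumes boto3 + credentials)."""
    import boto3  # local import keeps the helpers usable without boto3

    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances(Filters=project_filter(project))
    return count_instance_types(resp["Reservations"])
```

Swap `tag:Project` for `tag:Environment` or `tag:Application` and the same pattern answers the other questions in the paragraph above.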
Let’s take it a step further and talk about automation and intelligent Cloud management. If I tag instances properly, I can automate tasks to control my Cloud based on those tags. For example, maybe I’m a nice guy and don’t make my development team work weekends. I can set up a task to shut down any instance with an “Environment = Development” tag every Friday evening and start it again Monday morning. Maybe I want an application online only at month end; I can set up another task to schedule when it is online and offline. Tags give us the ability to see what we are paying for and the hooks to control that cost with automation.
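A minimal sketch of that Friday-evening task, assuming boto3 and a scheduler of your choosing (cron, or a scheduled Lambda function) to invoke it: stop every running instance whose Environment tag is Development. The tag value is the example from above, not a required convention.

```python
def instances_to_stop(reservations: list, env: str = "Development") -> list:
    """Pick running instance IDs whose Environment tag matches `env`."""
    ids = []
    for res in reservations:
        for inst in res["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            if tags.get("Environment") == env and inst["State"]["Name"] == "running":
                ids.append(inst["InstanceId"])
    return ids


def stop_dev_instances() -> None:
    """Stop tagged development instances (assumes boto3 + credentials)."""
    import boto3  # local import keeps instances_to_stop testable without boto3

    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances()
    ids = instances_to_stop(resp["Reservations"])
    if ids:
        ec2.stop_instances(InstanceIds=ids)
```

A mirror-image job calling `start_instances` on Monday morning completes the weekend schedule.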
I would be remiss if I didn’t point out that tags are an important part of using some great 2nd Watch offerings that help manage your AWS spend. Please check out 2W Insight for more information and how to gain control over and visibility into your cloud spend.
-Keith Homewood, Cloud Architect
In the article “Increasing your Cloud Footprint” we discussed the phased approach of moving a traditional environment to Amazon Web Services (AWS). You start with low-risk workloads like archiving and backups, move on to workloads that are a more natural fit for the cloud like disaster recovery or development accounts, and finally create POCs for production workloads. By the time you reach production workloads and work out all the kinks, you should be operating full time in the cloud! OK, not quite, but you will have the experience and know-how to be comfortable with what works and what doesn’t for your organization in the cloud. Once the organization gets comfortable with AWS, it is a natural progression for more and more of the environment to migrate to the cloud. Before you know it, you have many workloads in AWS and might even have several accounts. The next big question is: what tools are available to manage the environment?
AWS provides users with several tools to manage their cloud environments. The main tools most people use when getting started are the AWS Console and the AWS CLI. The AWS Console gives you the ability to access and manage most AWS services through an intuitive web-based interface, while the CLI is a command-line tool you can use to manage services and automate actions with scripts. For developers, AWS provides SDKs that simplify using AWS services in applications, tailored to several programming languages and platforms: Java, .NET, Node.js, PHP, Python, Ruby, Android, and iOS.
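As a small taste of the SDK route, here is a hedged sketch using the Python SDK (boto3) to list S3 bucket names. It assumes boto3 is installed and credentials are configured; the parsing helper is separated out so it can be exercised without touching AWS.

```python
def bucket_names(response: dict) -> list:
    """Extract bucket names from an S3 ListBuckets response."""
    return [b["Name"] for b in response.get("Buckets", [])]


def list_buckets() -> list:
    """Return all bucket names in the account (assumes boto3 + credentials)."""
    import boto3  # local import keeps bucket_names usable without boto3

    s3 = boto3.client("s3")
    return bucket_names(s3.list_buckets())
```

The CLI equivalent is a one-liner (`aws s3 ls`), which illustrates the trade-off: the CLI is quicker for ad hoc tasks, while the SDK shines once you need the results inside an application.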
Along with the regular tools like the AWS Console, CLI, and APIs, AWS provides IDE Toolkits that integrate into your development environment. Both the AWS Toolkit for Eclipse and the AWS Toolkit for Visual Studio make it easier for developers to develop and deploy applications using AWS technologies.
The great thing about the AWS IDE Toolkits is that they are very useful even if you are not a developer. For example, if you manage multiple accounts mainly through the standard AWS Console, tasks like switching between accounts can become cumbersome and unwieldy. Either you log in and out of each environment through your browser, always checking to make sure you are executing commands in the right environment, or you use multiple browsers to separate the accounts. Either way, the process isn’t optimal. The AWS Toolkit for Visual Studio (or Eclipse) solves this problem and can be handy for any AWS cloud administrator. The AWS Toolkit for Visual Studio is compatible with Visual Studio 2010, 2012, and 2013. To set up a new account, you download the AWS Toolkit for Visual Studio here. Once installed, you add a user through the AWS Explorer Profile section seen here:
You can then add an account using a Display Name, Access Key ID, Secret Access Key, and Account number. You can add multiple AWS accounts as long as you have the Access Keys for a user with the ability to access resources. See the Add Account box below:
Once you have the credentials entered for multiple accounts, you will have the ability to manage each account by just pulling down the Account dropdown. As you can see below I have two accounts “2nd Watch Prod” and “2nd Watch Dev”:
Finally, you can manage the resources in the selected account by just dropping down which account you want active and then clicking on the corresponding AWS resource you would like to manage. In the example below we are looking at the Amazon EC2 Instances for the Ireland region for another account called “2nd Watch SandBox”. You can quickly click on the Account drop down to select another account and look at the instances associated with it. Suddenly, switching between accounts is manageable and you can focus on being more productive across all your accounts!
The AWS Toolkit for Visual Studio is an extremely powerful tool. Not only is it great for developers integrating with your environment, it can also serve as a great way to manage your resources on AWS. There are many services you can manage with the AWS Toolkits, but be warned, it doesn’t cover them all. For example, working with Auto Scaling groups must be done using the CLI or the AWS Console, as there is no AWS Toolkit support yet. If you are interested in the AWS Toolkit for Visual Studio, you can see the complete instructions here.
Overall, managing your AWS environment largely depends on how you want to interact with AWS services. If you like a GUI feel, the Console or AWS Toolkits are a great match. If you prefer text-based interfaces, the AWS CLI and SDKs are a great way to interact with AWS. Each tool takes time to learn, but once you find the best one for your specific needs, you should see an increase in productivity that makes life using AWS that much easier.
– Derek Baltazar, Senior Cloud Engineer