
Automating Windows Patching of EC2 Autoscaling Group Instances

Background

Dealing with Windows patching can be a royal pain, as you may know. At least once a month, Windows machines are subject to system security and stability patches, thanks to Microsoft's Patch Tuesday. With Windows 10 (and its derivatives), Microsoft has shifted toward more of a Continuous Delivery model for system patching. It is a welcome change; however, it still doesn't guarantee that Windows patching won't require a system reboot.

Rebooting an EC2 instance that is a member of an Auto Scaling Group (depending upon how your Auto Scaling health check is configured) will typically cause an Elastic Load Balancing (ELB) HealthCheck failure and result in instance termination (this occurs when Auto Scaling notices that the instance is no longer reporting "in service" with the load balancer). Auto Scaling will of course replace the terminated instance with a new one, but the new instance will be launched from an image that is presumably unpatched, leaving your Windows servers vulnerable. The next patch cycle will once again trigger a reboot, and the vicious cycle continues. Furthermore, if the patching and reboots aren't carefully coordinated, they can severely impact your application's performance and availability (think multiple Auto Scaling Group members rebooting simultaneously). If you are running an earlier version of Windows (e.g. Windows Server 2012 R2), rebooting at least once a month on Patch Tuesday is a near certainty.

Another major problem with utilizing the AWS stock Windows AMIs with Auto Scaling is that AWS makes those AMIs unavailable after just a few months. This means that unless you continually update your Auto Scaling Launch Configuration to use the newer AMI IDs, future Auto Scaling instance launches will fail when they reference an AMI that is no longer available. Anguish.

Potential Solutions

Given the aforementioned scenario, how on earth are you supposed to automatically and reliably patch your Auto-Scaled Windows instances?!

One approach would be to write some sort of orchestration layer that detects when Auto Scaling members have been patched and are awaiting their obligatory reboot, suspends the Auto Scaling processes that would otherwise detect and replace perceived failed instances (e.g. HealthCheck), and then reboots the instances one by one. This would be rather painful to orchestrate and has a potentially severe drawback: cluster capacity is reduced to N-1 during the rebooting (maybe further, if you don't take into account service availability between reboots). Reducing capacity to N-1 might not be a big deal if you have a cluster of 20 instances, but if you are running a smaller cluster of, say, 4, 3, or 2 instances, that has a significant impact on your overall cluster capacity. And if you are running an Auto Scaling Group with a single instance (not as uncommon as you might think), your application is completely down during the reboot of that single member. This, of course, also doesn't solve the issue of expired stock AWS AMIs.

Another approach is to maintain and patch a "golden image" that the Auto Scaling Launch Configuration uses to create new instances. If you are unfamiliar with the term, a golden image is an operating system image that has everything pre-installed, configured, and saved in a pre-baked image file (an AMI, in the case of Amazon EC2). This approach requires a significant amount of work to automate reasonably and has numerous potential pitfalls.

While maintaining your own image prevents the service outage caused by AWS retiring its public AMIs, you still need a way to handle this process reliably and automatically. Using a tool like Hashicorp's Packer can get you partially there, but you would still have to write a number of provisioners to handle the installation of Windows Updates and anything else needed to prep the system for imaging. In the end, you would still have to develop or employ a fair number of tools and processes to completely automate detecting new Windows Updates, creating a patched AMI with those updates, and orchestrating the update of your Auto Scaling Groups.

A Cloud-Minded Approach

I believe that Auto Scaling Windows servers intelligently requires a paradigm shift. One assumption we have to make is that some form of configuration management (e.g. Puppet, Chef)—or at least a basic bootstrap script executed via cfn-init/UserData—is automating the configuration of the operating system, applications, and services upon instance launch. If configuration management or bootstrap scripts are not in play, then it is likely that a golden-image is being utilized. Without one of these two approaches, you don’t have true Auto Scaling because it would require some kind of human interaction to configure a server (ergo, not “auto”) every time a new instance was created.

Both approaches (launch-time configuration vs. golden image) have their pros and cons. I generally prefer launch-time configuration, as it allows for more flexibility, provides for better governance/compliance, and enables pushing changes dynamically. But (and this is especially true of Windows servers) sometimes launch-time configuration simply takes longer than is acceptable, and the golden-image approach must be used to allow for more rapid deployment of new Auto Scaling Group instances.

Either approach can be easily automated using a solution like the one I am about to outline, and thankfully AWS publishes new stock Windows Server AMIs immediately following every Patch Tuesday. This means that if you aren't using a golden image, patching your instances is as simple as updating your Auto Scaling Launch Configuration to use the new AMI(s) and performing a rolling replacement of the instances. Even if you are using a golden image or applying some level of customization to the stock AMI, you can easily integrate Packer into the process to create a new patched image that includes your customizations.

The Solution

At a high level, the solution can be summarized as:

1.  An Orchestration Layer (e.g. AWS SNS and Lambda, Jenkins, AWS Step Functions) that detects and responds when new patched stock Windows AMIs have been released by Amazon (a minimal detection sketch follows this list).

2.  A Packer Launcher process that manages launching Packer jobs in order to create custom AMIs. Note: this step is only required if you want to copy the AWS stock AMIs into your own AWS account OR apply customizations to the stock AMIs; either use case requires that the custom images remain available indefinitely. We solved this by launching an EC2 instance whose Python UserData script runs Packer jobs (in parallel) to copy the new stock AMIs into our AWS account (also sketched after this list). Note: if you are using something like Jenkins, this could be handled by having Jenkins launch a local script or even a Docker container to manage launching the Packer jobs.

3.  A New AMI Messaging Layer (e.g. Amazon SNS) to publish notifications when new/patched AMIs have been created.

4.  Some form of an Auto Scaling Group Rolling Updater to replace existing Auto Scaling Group instances with new ones based on the patched AMI.
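To make the first and third pieces concrete, here is a minimal sketch of a detection worker, assuming an AWS Lambda function invoked on a schedule. The SNS topic ARN, SSM parameter name, and AMI name filter below are hypothetical placeholders, not part of any published solution:

import boto3

ec2 = boto3.client("ec2")
ssm = boto3.client("ssm")
sns = boto3.client("sns")

# Hypothetical names -- replace with your own resources.
TOPIC_ARN = "arn:aws:sns:us-west-2:123456789012:new-windows-ami"
PARAM_NAME = "/patching/latest-windows-ami"
NAME_FILTER = "Windows_Server-2012-R2_RTM-English-64Bit-Base-*"

def handler(event, context):
    # Find the newest AWS stock Windows AMI matching the name filter.
    images = ec2.describe_images(
        Owners=["amazon"],
        Filters=[{"Name": "name", "Values": [NAME_FILTER]}],
    )["Images"]
    newest = max(images, key=lambda image: image["CreationDate"])

    # Compare against the last AMI we processed (kept in SSM Parameter Store).
    try:
        last_seen = ssm.get_parameter(Name=PARAM_NAME)["Parameter"]["Value"]
    except ssm.exceptions.ParameterNotFound:
        last_seen = None

    if newest["ImageId"] != last_seen:
        # A new patched AMI was released: notify downstream workers, record it.
        sns.publish(TopicArn=TOPIC_ARN, Subject="New stock Windows AMI",
                    Message=newest["ImageId"])
        ssm.put_parameter(Name=PARAM_NAME, Value=newest["ImageId"],
                          Type="String", Overwrite=True)

And a similarly minimal sketch of a Packer Launcher that runs one Packer build per new AMI in parallel, assuming your Packer templates accept a source_ami user variable:

import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_packer(template, source_ami):
    # One Packer job copies (or customizes) one stock AMI. "source_ami" is
    # assumed to be a user variable defined in the template.
    return subprocess.run(
        ["packer", "build", "-var", "source_ami=" + source_ami, template],
        check=True,
    )

def launch_packer_jobs(jobs):
    # jobs: a list of (template_path, source_ami_id) tuples, built in parallel.
    with ThreadPoolExecutor(max_workers=len(jobs)) as pool:
        futures = [pool.submit(run_packer, t, ami) for t, ami in jobs]
        return [f.result() for f in futures]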

Great news for anyone using AWS CloudFormation: CloudFormation inherently supports Rolling Updates for Auto Scaling Groups! Utilizing it requires attaching an UpdatePolicy to the Auto Scaling Group and adding a UserData or cfn-init script that notifies CloudFormation when the instance has finished its configuration and is reporting as healthy (e.g. InService on the ELB). There are some pretty good examples of how to accomplish this using CloudFormation out there, but here is one specifically that AWS provides as an example.
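If your templates are parameterized, kicking off such a rolling update can be as simple as pushing the new AMI ID into the stack. Here is a minimal boto3 sketch; the stack name and the AmiId parameter key are illustrative, and it assumes the template already contains the UpdatePolicy and health-signaling described above:

import boto3

cfn = boto3.client("cloudformation")

def roll_asg_to_new_ami(stack_name, ami_id):
    # Update only the AMI parameter. UsePreviousTemplate keeps the existing
    # template (and its UpdatePolicy) intact, so CloudFormation itself
    # performs the rolling replacement of the Auto Scaling Group instances.
    # Any other stack parameters would need {"UsePreviousValue": True} entries.
    cfn.update_stack(
        StackName=stack_name,
        UsePreviousTemplate=True,
        Parameters=[{"ParameterKey": "AmiId", "ParameterValue": ami_id}],
    )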

If you aren't using CloudFormation, all hope is not lost. Despite Hashicorp Terraform's ever-increasing popularity for deploying and managing AWS infrastructure as code, it has yet to implement a Rolling Update feature for AWS Auto Scaling Groups. There is a Terraform feature request from a few years ago for this exact feature, but as of today it is not available, nor do the Terraform developers have any short-term plans to implement it. However, several people (including Hashicorp's own engineers) have developed a number of ways to work around the lack of an integrated Auto Scaling Group Rolling Updater in Terraform. Here are a few I like:

a.  Using a nested CloudFormation template to manage the Auto Scaling Group (utilizing Auto Scaling's UpdatePolicy as described above)

b.  Using Terraform’s create_before_destroy and min_elb_capacity parameters to gracefully create replacement Auto Scaling Groups and Launch Configurations

c.  Utilizing the make system as a wrapper to augment the previous approach

Of course, you can always roll your own solution using a combination of AWS services (e.g. SNS, Lambda, Step Functions) or whatever tooling best fits your needs. Creating your own solution allows for added flexibility if you have additional requirements that can't be met by CloudFormation, Terraform, or another orchestration tool.

The following is an example framework for performing automated Rolling Updates to Auto Scaling Groups utilizing AWS SNS and AWS Lambda:

a.  An Auto Scaling Launch Config Modifier worker that subscribes to the New AMI messaging layer and updates the Auto Scaling Launch Configuration(s) when a new AMI is released. In this use case, we are using an AWS Lambda function subscribed to an SNS topic. Upon notification of a new AMI, the worker updates the predefined (or programmatically derived) Auto Scaling Launch Configurations to use the new AMI. This is best handled with infrastructure templating tools like CloudFormation or Terraform, which make updating the Launch Configuration ImageId as simple as changing a parameter/variable in the template and performing an update/apply operation.

b.  An Auto Scaling Group Instance Cycler messaging layer (again, an Amazon SNS topic) that carries notifications whenever an Auto Scaling Launch Configuration ImageId has been updated by the worker.

c.  An Auto Scaling Group Instance Cycler worker that replaces the Auto Scaling Group instances in a safe, reliable, and automated fashion: for example, another AWS Lambda function that subscribes to the SNS topic and triggers new instance launches by increasing the Auto Scaling Desired Instance count to twice the current number of ASG instances.

d.  Once the scale-up event generated by the Auto Scaling Group Instance Cycler worker has completed and the new instances are reporting as healthy, another message is published to the Auto Scaling Group Instance Cycler SNS topic indicating that scale-up has completed.

e.  The Auto Scaling Group Instance Cycler worker responds to that event and returns the Auto Scaling Group to its original size, which terminates the older instances and leaves the group with only the patched instances launched from the updated AMI. This assumes we are utilizing the default AWS Auto Scaling Termination Policy, which favors terminating instances launched from the oldest Launch Configurations (but see the note below).

NOTE: The AWS Auto Scaling default termination policy will not guarantee that the older instances are terminated first! If the Auto Scaling Group spans multiple Availability Zones (AZs) and the number of instances per AZ is imbalanced, Auto Scaling will terminate the extra instance(s) in the over-provisioned AZ before it considers Launch Configuration age. In other words, terminating on Launch Configuration age alone will not necessarily ensure that the oldest instances are replaced first. My recommendation is to use the OldestInstance termination policy to make absolutely certain that the oldest (i.e. unpatched) instances are terminated during the Instance Cycler scale-down process (a rough sketch of both workers follows). Consult the AWS documentation on the Auto Scaling termination policies for more on this topic.
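For reference, here is a rough boto3 sketch of the Launch Config Modifier and Instance Cycler workers described above, for the case where you are not templating the Launch Configuration with CloudFormation or Terraform. It is illustrative only: the wait for the new instances to report healthy is elided, error handling is omitted, and the naming scheme is made up. It also sets the OldestInstance termination policy per the note above.

import base64
import boto3

autoscaling = boto3.client("autoscaling")

def update_launch_config(asg_name, ami_id):
    # Launch Config Modifier worker: clone the group's current Launch
    # Configuration with the new ImageId and attach the clone to the group.
    asg = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[asg_name])["AutoScalingGroups"][0]
    old = autoscaling.describe_launch_configurations(
        LaunchConfigurationNames=[asg["LaunchConfigurationName"]]
    )["LaunchConfigurations"][0]

    new_name = asg_name + "-" + ami_id  # hypothetical naming scheme
    autoscaling.create_launch_configuration(
        LaunchConfigurationName=new_name,
        ImageId=ami_id,
        InstanceType=old["InstanceType"],
        SecurityGroups=old["SecurityGroups"],
        KeyName=old["KeyName"],
        # Describe returns UserData base64-encoded, and create re-encodes
        # whatever it is given, so decode first to avoid double encoding.
        # (Copy any other settings you rely on: IAM instance profile,
        # block device mappings, etc.)
        UserData=base64.b64decode(old["UserData"]).decode(),
    )
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=asg_name,
        LaunchConfigurationName=new_name,
        # Make certain the oldest (unpatched) instances go first on scale-down.
        TerminationPolicies=["OldestInstance"],
    )
    return asg["DesiredCapacity"]

def cycle_instances(asg_name, original_capacity):
    # Instance Cycler worker: double capacity so replacements launch from the
    # new Launch Configuration, then (after the new instances are InService)
    # scale back down, which terminates the oldest instances.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=asg_name,
        MaxSize=original_capacity * 2,  # ensure there is room to double
        DesiredCapacity=original_capacity * 2,
    )
    # ... wait until all 2x instances report healthy, e.g. by polling the
    # group/ELB or by reacting to a "scale-up complete" SNS message, then
    # restore the original MaxSize along with the desired capacity ...
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=asg_name,
        DesiredCapacity=original_capacity,
    )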

In Conclusion

Whichever Rolling Update mechanism you choose, the approach outlined above will provide you with a sure-fire way to ensure your Windows Auto Scaled servers are always patched automatically, while minimizing the operational overhead of ensuring patch compliance and server security. And the good news is that the heavy lifting is already handled by AWS Auto Scaling and Hashicorp Packer. There is a bit of trickery to getting the Packer configs and provisioners working just right with the EC2Config service and Windows Sysprep, but there are a number of good examples out on GitHub to get you headed in the right direction. The one I referenced in building our solution can be found here.

One final word of caution... if you do not disable the EC2Config Set Computer Name option when baking a custom AMI, your Windows hostname will ALWAYS be reset to the EC2Config default upon reboot. This is especially problematic for configuration management tools like Puppet or Chef which may use the hostname as the SSL Client Certificate subject name (default behavior), or for deriving the system role/profile/configuration.

Here is my ec2config.ps1 Packer provisioner script which disables the Set Computer Name option:

# Path to the EC2Config service settings file baked into the image
$EC2SettingsFile = "C:\Program Files\Amazon\Ec2ConfigService\Settings\Config.xml"

# Load the settings XML and grab the list of EC2Config plugins
$xml = [xml](Get-Content $EC2SettingsFile)
$xmlElement = $xml.get_DocumentElement()
$xmlElementToModify = $xmlElement.Plugins

# Toggle each plugin; most importantly, disable Ec2SetComputerName so the
# hostname is not reset to the EC2Config default on every reboot
foreach ($element in $xmlElementToModify.Plugin)
{
    if ($element.name -eq "Ec2SetPassword")
    {
        $element.State = "Enabled"
    }
    elseif ($element.name -eq "Ec2SetComputerName")
    {
        $element.State = "Disabled"
    }
    elseif ($element.name -eq "Ec2HandleUserData")
    {
        $element.State = "Enabled"
    }
    elseif ($element.name -eq "Ec2DynamicBootVolumeSize")
    {
        $element.State = "Enabled"
    }
}

# Write the modified settings back to disk
$xml.Save($EC2SettingsFile)

Hopefully, at this point, you have a pretty good idea of how you can leverage existing software, tools, and services—combined with a bit of scripting and automation workflow—to reliably and automatically manage the patching of your Windows Auto Scaling Group EC2 instances!  If you require additional assistance, are resource-bound for getting something implemented, or you would just like the proven Cloud experts to manage Automating Windows Patching of your EC2 Autoscaling Group Instances, contact 2nd Watch today!

 

Disclaimer

We strongly advise that processes like the ones described in this article be validated in a test environment prior to production in order to confirm that the changes have not negatively affected your application's functionality, performance, or availability.

This validation is something that your orchestration layer in the first step should be able to handle. It is also something that should integrate well with a Continuous Integration and/or Delivery workflow.

 

-Ryan Kennedy, Principal Cloud Automation Architect, 2nd Watch


How to Choose the Right Hyperscale Managed Service Provider (MSP)

One of the challenges that many businesses struggle to overcome is how to keep up with the massive (and on-going) changes in technology and implement best practices for managing them.  The Public Cloud—in particular, Hyperscale Cloud providers like AWS—has ushered in a new era of IT technology. This technology changes rapidly and is designed to provide businesses with the building blocks that allow IT organizations to focus on innovation and growth, rather than mess with things that don’t differentiate their business.

A Hyperscale Managed Services Provider (MSP) can help address a very important gap for many businesses that struggle to:

  • Keep up with the frenetic pace of change in Public Cloud
  • Define and use best practices to achieve superior results
  • Manage their infrastructure the most efficient way possible

 

In most cases, Hyperscale MSPs have the deep expertise, technology, and automated capabilities to deliver high-quality managed services on a hyperscale platform. And because Hyperscale MSPs are solely focused on delivering capabilities on the cloud IaaS and PaaS offerings that today’s enterprises are using, they are well versed in the best practices and standards needed to achieve the right results for their clients.

So, how do you go about selecting the right MSP?  The answer to this question is critical because we believe choosing the right MSP is one of the most important decisions you will make when consuming the public cloud.  It is also important to note that some of the qualifications to look for when selecting a Hyperscale MSP for your business needs are obvious, while others are more elusive.  I’ve included a few suggestions below to keep in mind when evaluating and selecting the right Hyperscale MSP.

Expertise on the Platform of Your Choice

First and foremost, no two public cloud providers are the same.  Each provider implements MSP strategies differently—from infrastructure and redundancy to automation and billing concepts.  Secondly, it isn’t enough for a provider to tell you they have a few applications running on the platform.  When looking to entrust someone with your most valuable assets, expertise is key!  An important KPI for measuring the capabilities of an MSP that many businesses overlook is the provider’s depth and breadth of experience. A qualified Hyperscale MSP will have the right certifications, accreditations, and certified engineer-to-customer ratios.  You may feel good about signing with a large provider because they claim a higher number of certified engineers than the smaller firms, until…you realize their certified engineer-to-customer ratio is out of whack.  Having 200 certified engineers means nothing if you have 5,000+ customers.  At 2nd Watch, we have more certified engineers than we do customers, and we like it that way.

The Focus is on Customer Value

This is an obvious recommendation, but it does have some nuances.  Many MSPs will simply take the “Your mess for less” approach to managing your infrastructure.  Our customers tell us that one of the reasons they chose 2nd Watch was our focus on the things that matter to them.  Many MSPs have the technical capabilities to manage Cloud infrastructure, but not all are able to focus on how an enterprise wants to use the Public Cloud.  MSPs that understand their clients’ needs and goals tailor their approach to work for the enterprise, rather than making the enterprise snap to some preconceived notion of how these things should work.  Find an MSP that is willing to make the Public Cloud work the way you want it to, and your overall experience, and the outcome, will be game changing.

Optimize, Optimize, Optimize

Moving to the Public Cloud is just the first step in the journey to realizing business value and transforming IT.  The Cloud is dynamic in nature, and because of that, it is important that you don’t rest on a one-time migration once you are using it.  New instance types, new services, or simply optimizing what you are running today are all great ways to ensure your infrastructure is running at its best.  It is important to make sure your MSP has a strong, ongoing story about optimization and how they will provide it.  At 2nd Watch, we break optimization into 3 categories: Financial Optimization, Technical Optimization, and Operations Optimization.  It is a good idea to ask your MSP how they handle these three facets of optimization and at what cadence.  Keep in mind that some providers’ pricing structures can act as a disincentive for optimization.  For example, if your MSP’s billing structure is based on a percentage of your total cloud spend, and they reduce that bill by 30% through optimization efforts, they are now getting paid proportionately less and are likely not motivated to do this type of optimization on a regular basis, as it hurts their revenue.  Alternatively, we have also seen MSPs charge extra for these types of services, so the key is to ask whether optimization is included and get details about the services that would be considered an extra charge.

Full Service

The final qualification to look for in a Hyperscale MSP is whether they are a full-service provider.  Too often, pure-play MSPs are not able to provide a full-service offering under their umbrella.  The most common reason is that they lack the professional services to assess and migrate workloads, or the cloud architects to build out new functionality.

Our enterprise clients tell us that one of their major frustrations is having to work with multiple vendors on a project.  With multiple vendors, it is difficult to keep track of who is accountable for what.  Why would the vendor doing the migration be motivated to make sure the application is optimized for support if they aren’t providing the support?  I have heard horror stories of businesses trying to move to the cloud and becoming frustrated when multiple vendors are involved on the same workload, because the vendors blame each other for missing deadlines or not delivering key milestones or technical content.  Your business will be better served by hiring an MSP who can run the full cloud-migration process—from workload assessment and migration to managing and optimizing your cloud infrastructure on an ongoing basis.

In addition to the tips I have listed above, 2nd Watch recommends utilizing Gartner’s Magic Quadrant to help evaluate the various public cloud managed service providers available to you. Gartner positioned 2nd Watch in the Leaders quadrant of the 2017 Gartner Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide for our completeness of vision and ability to execute.  You can download and read the full report here.

 

-Kris Bliesner, CTO

 

 

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document.


Why buy Amazon Web Services through a Partner and who “owns” the account?

As an AWS Premier Partner and audited, authorized APN Managed Service Provider (MSP), 2nd Watch offers comprehensive services to help customers accelerate their journey to the cloud.  For many of our customers we not only provide robust Managed Cloud Services, we also resell Amazon products and services.  What are the advantages for customers who purchase from a value-added AWS reseller?  Why would a customer do this?  With an AWS reseller, who owns the account?  These are all great questions and the subject of today’s blog post.

I am going to take these questions in reverse order and deal with the ownership issue first, as it is the most commonly misconstrued part of the arrangement.  Let me be clear – when 2nd Watch resells Amazon Web Services as an AWS reseller, our customer “owns” the account.  At 2nd Watch we work hard every day to earn our customers’ trust and confidence and thereby, their business.  Our pricing model for Managed Cloud Services is designed to leverage the advantages of cloud computing’s consumption model – pay for what you use.  2nd Watch customers who purchase AWS through us have the right to move their account to another MSP or purchase direct from AWS if they are unhappy with our services.

I put the word “own” in quotes above because I think it is worth digressing for a minute on how different audiences interpret that word.  Some people see the ownership issue as a vendor lock-in issue, some as an intellectual property concern and still others a liability and security requirement.  For all of these reasons it is important we are specific and precise with our language.

With 2nd Watch’s Managed Cloud Services consumption model you are not locked in to 2nd Watch as your AWS reseller or MSP.  AWS accounts and usage purchased through us belong to the customer, not 2nd Watch, and therefore any intellectual property contained therein is the responsibility and property of the customer.  Additionally, because the customer owns the account, the customer’s AWS accounts operate under the standard shared responsibility model.  With regard to liability and security, however, our role as an MSP can be a major benefit.

Often MSPs “govern or manage” the IAM credentials for an AWS account to ensure consistency, security, and governance.  I use the words govern or manage, and not “own,” because I want to be clear that the customer still has the right to take back the credentials and overall responsibility for managing each account, which is the opposite of lock-in.  So why would a customer want their MSP to manage their credentials?  The reason is pretty simple: similar to a managed data center or colocation facility, you own the infrastructure, but you hire experts to handle the day-to-day management in exchange for increased limits of liability, better security, and enhanced SLAs.

Simply put, if you, as a customer, want your MSP to carry the responsibility for your AWS account and provide service level agreements (complete with financial repercussions), you are going to want to make sure administrative access to the environment is limited with regard to who can make changes that may impact stability or performance.  As a 2nd Watch Managed Cloud Services customer, allowing us to manage IAM credentials also comes with the benefit of our secure, SOC 2 Type 2 (audited) compliant systems and processes.  Often our security controls exceed the capabilities of our customers.

Also worth noting: as we on-board a Managed Cloud Services customer, we often audit their environment and provide best-practice recommendations.  These recommendations are aligned with the excellent AWS Well-Architected Framework and help customers achieve greater stability, performance, security, and cost optimization.  Our customers have the option of completing the remediation themselves or having 2nd Watch perform it.  Implementing best practices for managing user access, along with leveraging cutting-edge technology, results in a streamlined journey to the cloud.

So now we have addressed the question of who owns the account, but we haven’t addressed why a customer would want to procure AWS through a partner.  First, see my earlier blog post regarding Cloud Cost Complexity for some background.  Second, buying AWS through 2nd Watch as an AWS partner, or AWS reseller, comes with several immediate advantages:

  • All services are provided at AWS market rates or better
  • Pass-through of all AWS volume tier discounts and pricing
  • Pass-through of AWS Enterprise Agreement terms, if one exists
  • Solution-based and enhanced SLAs (above and beyond what AWS provides) shaped around your business requirements
  • Familiarity with your account – our two U.S.-based NOCs are staffed 24x7x365 and have access to a comprehensive history of your account and governance policies
  • Access to Enterprise-class support, including 2nd Watch’s multiple dedicated AWS Technical Account Managers, with Managed Cloud Services agreements
  • Consolidated usage across many AWS accounts (see AWS volume discount tiers above)
  • Consolidated billing for both Managed Cloud Services and AWS capacity
  • Access to our Cloud Management Platform, a web-based console that greatly simplifies the management and analysis of AWS usage
    • Ability to support complex show-back or charge-back bills for different business units or departments, as well as enterprise-wide roll-ups for a global view
    • Ability to allocate Volume and Reserved Instance discounts to business units per your requirements
    • Ability to set budgets with alerts, trend analysis, tag reporting, etc.
  • Reserved Instance recommendations and management services
    • Helps improve utilization and prevent spoilage
  • Selectable service levels for Managed Cloud Services on any or all accounts – consolidate your purchasing without paying for services you don’t need
  • Assistance with AWS account provisioning and governance – we adhere to your corporate standards (and make proactive recommendations)

In short, buying your AWS capacity through 2nd Watch as your MSP is an excellent value that will help you accelerate your cloud adoption.  We provide the best of AWS with our own services layered on top to enhance the overall offering.  Please contact us for more information about our Managed Cloud Services including Managed AWS Capacity, and 2nd Watch as an AWS Reseller Partner.

-Marc Kagan, Director, Account Management


Raising the Bar – Managing Enterprise Cloud Workloads

Momentum continues to build for companies migrating their workloads to the cloud, across all industries, even highly regulated industries such as Financial Services, Health Care, and Government. And it’s not just for small companies and startups. Most of the largest companies in the world – we’re talking Fortune 500 here – are adopting rapid and aggressive strategies for migrating and managing their workloads in the cloud. While the benefits of migrating workloads to the cloud are seemingly obvious (cost savings, of course), the hidden benefit is that the cloud allows businesses to be more nimble, enabling business users with faster, more powerful, and more scalable business capabilities than they’ve ever had before.

So what do enterprises care about when managing workloads in the cloud? More importantly, what should you care about? Let’s assume, for the sake of argument, that your workloads are already in the cloud – that you’ve adopted a sound methodology for migrating your workloads to the cloud.

Raise your expectations

I would submit that enterprises should raise their expectations beyond “standard” workload management. Why? Because the cloud provides a more flexible, powerful, and scalable paradigm than the typical application-running-in-a-data-center-on-a-bunch-of-servers model. Once your workloads are in the cloud, the basic requirements for managing them are not dissimilar to what you’d expect today for managing workloads on-premise or in a data center.

 

The basics include:

  • Service Levels: Basic service levels are still just that – basic service levels – availability, response time, capacity, support, monitoring, etc. So what’s different in the cloud world? You should pay particular attention to ensuring your personal data is protected in your hosted cloud service.
  • Support: Like any hosting capability, support is very important to consider. Does your provider offer online, call center, dedicated, and/or a combo platter of all of these?
  • Security: Ensure that your provider has robust security measures in place and mechanisms to protect your applications and data.
  • Compliance: You should ensure your cloud provider is in compliance with the standards for your specific industry. Privacy, security, and quality are the principal compliance areas to evaluate.

 

Now what should enterprises expect on top of the “basics?”

  • Visibility: When your workloads are in the cloud, you can’t see them anymore. No longer will you be able to walk through the data center and see your racks of servers with blinking lights, but there’s a certain comfort in that, right? So when you move to the cloud, you need to be able to see (ideally in a visual paradigm) the services that you’re using to run your critical workloads.
  • Be Proactive: It used to be that enterprises only cared whether their data center providers were good at being “reactive” (responding to tickets, monitoring apps and servers, escalating issues, etc.). But now the cloud allows us to be proactive. How can you optimize your infrastructure so you actually use less, rather than more? Wouldn’t it be great if your IT operations guy came to you and said, “Hey, we can decrease our footprint and lessen our spend,” rather than the other way around?
  • Partner with the business: Now that your workloads are running in the cloud, your IT ops team can focus more on working with the business/applications teams to understand how the infrastructure can work for them (again, rather than the other way around), and they can educate those teams on how some of the newest cloud capabilities, like elasticity, big data, unstructured data, and auto-scaling, can cause the business to think differently and innovate faster.

 

Enterprises should be – and are – raising their expectations for managing their workloads in the cloud. Why? Because the cloud provides a more flexible, powerful, and scalable paradigm than the typical hardware-centric, data center-focused approach.

-Keith Carlson, EVP of Professional and Managed Workload Services


Managed Services for Cloud Infrastructure

Managed Services is the practice of outsourcing day-to-day management responsibilities for cloud infrastructure as a strategic method for improving IT operations and reducing expenses. Managed Services removes the customer’s burden of nuts-and-bolts, undifferentiated work and enables them to focus on value-creating activities that directly impact their core business. Building on its expertise in Amazon Web Services (AWS) and enterprise IT operations, 2nd Watch has built a best-in-class Managed Services practice with the following capabilities.

Management
* Escalation Management – Collaboration between Managed Services and professional services on problem resolution.
* Incident Management – Resolution of day-to-day tickets with response times based on severity.
* Problem Management – Root cause analysis and resolution for recurring incidents.

Monitoring
* Alarming Services – CPU utilization, data backup, space availability thresholds, etc.
* Reporting Services – Cost optimization, incidents, SLA, etc.
* System Reliability – 24×7 monitoring at a world-class Network Operations Center (NOC).

Other
* Audit Support – Data pulls, process documents, staff interviews, etc.
* Change Management – Software deployment, patching, testing, etc.
* Service-Level Agreement – Availability and uptime guarantees, including 99.9% and 99.99%.

2nd Watch’s Managed Services practice provides a single support organization for customers running their cloud infrastructure on AWS. Customers benefit from improved IT operations and reduced expenses. Customers also benefit from Managed Services identifying problems before they become critical business issues impacting availability and uptime. Costs generally involve a setup or transition fee and an ongoing, fixed monthly fee, making support costs predictable. Shift your focus to value creation and let Managed Services do the undifferentiated heavy lifting.


-Josh Lowry, General Manager West
