
4 Things to Watch for at AWS re:Invent 2016

2nd Watch is always buzzing before AWS re:Invent as we get ready for a great conference full of breakout sessions, boot camps, certifications, keynotes and – you can count on it – some big AWS announcements.  As a Platinum sponsor, we are committed to the success of the conference and enjoy the opportunity to connect with clients and prospective clients.  We are also excited to hear what new services and features AWS will launch to make the public cloud better, faster and easier to adopt.  Here are three areas where we expect AWS to make announcements, plus one more thing to watch for:

1. Services and Features to Drive Further Enterprise Adoption

Cloud adoption has moved beyond startups and into the enterprise.  The majority of Fortune 500 companies have adopted the cloud in some form, with more and more moving to a cloud-first approach.  Still, a very large portion of enterprise workloads lives in data centers and colocation facilities.  AWS will continue to add services that enable customers to run these workloads in the cloud.  In the past we have seen AWS CloudTrail, AWS Config, and other services that enable governance and security.  We expect this to continue and will be watching for new services that provide the governance and security enterprises need.

2. Software Development Life Cycle Services

AWS has released a number of tools that make it easier for developers and operations teams to manage the software development and deployment process.  CodeCommit, CodePipeline, and CodeDeploy can be included in CloudFormation templates, so Infrastructure as Code can provision the tooling a development team needs alongside the infrastructure itself.  The API-native services and features of AWS have always made it popular with people who write code, and we expect that trend to continue, with AWS further enabling developers through new services and tools.

3. Beyond AWS

We were excited to see AWS reach beyond its own platform to offer Amazon Linux as a container image.  Until now, the tools and services that ran outside of the cloud (AWS Application Discovery Service, AWS Storage Gateway, AWS Snowball, and AWS Database Migration Service) existed primarily to bring customer workloads into the cloud.  Offering its operating system as a Linux container makes it possible for customers to run container workloads across clouds, both public and private.  We expect this trend to continue, with AWS releasing more services that let customers leverage their AWS investment while still protecting previous investments in on-premises infrastructure or other clouds.

4. AWS re:Play Party

The end-of-show party is always a good time, and we encourage you not to miss it.  After a week of talking to peers and digesting a lot of technology, it is fun to let loose, play some video games, and listen to some loud electronic music.  We will not try to predict the headliner, but it will probably be a big name in the EDM scene.  We’d love to see Aoki throw a cake at some re:Invent party-goers, but we will have to wait until December 1st to find out, along with everyone else, who is performing.

-Chris Nolan, Director of Products


Writing CloudFormation Templates in YAML – A First Look

AWS recently released a new “game changing” feature for CloudFormation Templates – support for YAML.  I’d like to give a first look at using YAML for CloudFormation Templates (CFTs) and discuss how this feature might fit into the architect’s and engineer’s toolbox for AWS.  If you’re not familiar with YAML, you’ll want to take a look at the guides here.

YAML Support

This is something the AWS community has been begging for, for quite a while.  One of the easiest ways to tell that JSON is not sufficient is the number of projects that exist to work around JSON-based templates (Troposphere, SparkleFormation, Terraform, etc.).  Now, with YAML support, we’re getting that much closer to the Infrastructure-as-Code feeling we’ve been missing.  Let’s walk through some sample YAML CFT code and highlight where it has a major impact.  The code samples below are borrowed almost entirely from the AWS User Guide for CloudFormation.

AWSTemplateFormatVersion: "2010-09-09"

Description: A sample template

Parameters:
  FilePath:
    Description: The path of the test file.
    Type: String
    Default: /home/ec2-user/userdata

Resources:
  MyEC2Instance:
    Type: "AWS::EC2::Instance" # 1 Quotes are unnecessary here - will they always be?
    Properties:
      ImageId: ami-c481fad3    # 2 Quotes removed from the example - still works
      InstanceType: t2.nano
      KeyName: 2ndwatch-sample-keypair
      Tags:                    # 3 Here I switch to using single spaces
       - Key: Role             # 4 Tag list item is inline
         Value: Test Instance
       -                       # 5 Next list item is block
         Key: Owner
         Value: 2ndWatch
      BlockDeviceMappings:     # 6 Switch back to double spaces
        -
          DeviceName: /dev/sdm
          Ebs:
            VolumeType: gp2
            VolumeSize: 10
      UserData:
        Fn::Base64: !Sub |     # No more Fn::Join needed
          #!/bin/bash
          echo "Testing Userdata" > ${FilePath}
          chown ec2-user.ec2-user ${FilePath}

A couple of things you’ll notice in this example are how clean the code looks and the comments; both help make code descriptive and clear.  In the comments I call out a few considerations with the YAML format.  First, many of the examples AWS provides put quotes around values that don’t need them.  When I removed them (comments #1 and #2), the CFT still worked.  That said, you may want to standardize on quotes or no quotes at the start of your project, or for your entire department or company, for consistency.  Additionally, as you will notice in my second set of comments, I switch from 2-space to 1-space YAML indentation (comments #3 and #6).  This is “legal” but annoying.  Just as with JSON, you’ll need to set some of your own formatting rules to ensure consistency.

Taking a look at the Tags section, you’ll see that lists are supported using hyphen notation.  In the Tags property I’ve shown two formats for denoting a list item: a hyphen alone on a line with a “block” underneath (comment #5), or everything inline after the hyphen with consistent spacing (comment #4).  As before, you’ll want to decide how you format lists, because AWS’s own examples do it in different ways.
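For instance, if you settle on two-space indentation, inline list items, and no optional quotes, a normalized version of the Tags block above might look like this (the convention is the point here, not the content):

Tags:
  - Key: Role
    Value: Test Instance
  - Key: Owner
    Value: 2ndWatch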

Moving on to the UserData property, the next thing you’ll notice is the absence of the Fn::Join function.  This makes userdata scripts look very close to the actual script you would run on the server.  In a previous article I gave Terraform high marks for similar functionality, and now AWS has brought CFTs up to par.  The new !Sub notation helps clean up substitutions a bit, too (it’s also available in JSON).  Of course, if you miss it, Fn::Join can still be used like this:

Tags:
- Key: Name
  Value:
    Fn::Join:
    - "-"
    - - !Ref AWS::StackName
      - PrivateRouteTable1

This would produce a tag of Name = StackName-PrivateRouteTable1, just as it did in JSON, but we would advise against it: the old notation is less flexible and more prone to error than the new “join-less” formatting.  Notice that nested lists are created using two hyphens.
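For comparison, here is a minimal sketch of the same tag written with the !Sub shorthand; it produces the same value without a delimiter or nested list to manage:

Tags:
- Key: Name
  Value: !Sub "${AWS::StackName}-PrivateRouteTable1"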

Conversion Tools

In another bit of good news, you can use online conversion tools to update your JSON CFTs to YAML.  As you might guess, it will take a bit of cleanup to bring the output in line with whatever formatting decisions you’ve made, but it gets you most of the way there without a complete rewrite.  Initial tests on simple CFTs ran with no updates required (using http://www.json2yaml.com/).  A second test on a 3,000-line CFT converted down to 2,300 lines of YAML and also ran without needing any updates (YMMV).  This is a big advantage over tools like Terraform, where all new templates would have to be built from scratch; and even if the online tools fall short, a conversion tool could probably be whipped together in short order.

All in all, this is a great update to the CloudFormation service and demonstrates AWS’s commitment to pushing the service forward.

If you have questions or need help getting started, please Contact Us at 2nd Watch.

-Coin Graham, Sr Cloud Consultant, 2nd Watch


AWS CFT vs. Terraform: Advantages and Disadvantages

UPDATE:  AWS CloudFormation now supports YAML.  To be sure, this is a huge improvement over JSON in terms of formatting and use of comments, and it will also simplify Windows and Linux userdata scripts.  For teams that are just starting with AWS and don’t need any of the additional benefits of Terraform, YAML is the best place to start.  Existing teams will likely still have a cache of JSON templates that they would need to recreate, and they should consider whether the other benefits of Terraform warrant a move away from CFT.
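As a quick illustration of the userdata point, here is a minimal sketch of what a Windows userdata block can look like in YAML (the PowerShell content and log path are placeholders, not taken from a real template):

UserData:
  Fn::Base64: !Sub |
    <powershell>
    # Placeholder bootstrap script; ${AWS::StackName} is filled in by CloudFormation
    Write-Output "Bootstrapping stack ${AWS::StackName}" | Out-File C:\bootstrap.log
    </powershell>

The script reads exactly as you would write it on the server, with no JSON escaping or Fn::Join plumbing.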

If you’re familiar with AWS CloudFormation Templates (CFTs) and how they work but have been considering Terraform, this guide is for you.  This basic guide will introduce you to some of the advantages and disadvantages of Terraform in comparison to CFT so you can decide whether to investigate further and try it yourself.  If you don’t have at least a rudimentary familiarity with Terraform, head over to https://www.terraform.io/intro/index.html for a quick overview.

Advantages

Formatting – This is far and away the strongest advantage of Terraform.  JSON is not a coding language, and it shows.  It’s common for CFTs to be 3,000 lines long, and most of that is just JSON braces and brackets.  Terraform uses a simple (but custom) configuration language, HCL, for creating templates, and it makes it easy to document and comment your code.  Whole sections can be moved to a folder structure for design and clarity.  This makes your infrastructure feel a bit more like actual code.  Lastly, you won’t need to convert userdata bash and PowerShell scripts to JSON only to deploy and discover you forgot one last escaping backslash.  Userdata scripts can be written in separate files exactly as you would write them on the server locally.  As an example, here’s a comparison of CFT JSON and Terraform for creating an instance:

Instance in CFT


"StagingInstance": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "UserData": {
      "Fn::Base64": {
        "Fn::Join": ["", [
          "#!/bin/bash -v\n",
          "yum update -y aws*\n",
          "yum update --sec-severity=critical -y\n",
          "yum install -y aws-cfn-bootstrap\n",
          "# download data and install file\n",
          "/opt/aws/bin/cfn-init -s ", {
            "Ref": "AWS::StackName"
          }, " -r StagingInstance ",
          "    --region ", {
            "Ref": "AWS::Region"
          },
          " || error_exit 'Failed to run cfn-init'\n"
        ]]
      }
    },
    "SecurityGroupIds": [{
      "Ref": "StagingSecurityGroup"
    }],
    "ImageId": {
      "Ref": "StagingAMI"
    },
    "KeyName": {
      "Ref": "InstancePrivateKeyName"
    },
    "InstanceType": {
      "Ref": "StagingInstanceType"
    },
    "IamInstanceProfile": {
      "Ref": "StagingInstanceProfile"
    },
    "Tags": [{
      "Key": "Name",
      "Value": {
        "Fn::Join": ["-", [
          "staging", {
            "Ref": "AWS::StackName"
          }, "app-instance"
        ]]
      }
    }],
    "SubnetId": {
      "Ref": "PrivateSubnet1"
    }
  }
}

Instance in Terraform


# Create the staging instance
resource "aws_instance" "staging" {
  ami                    = "${var.staging_instance_ami}"
  instance_type          = "${var.staging_instance_type}"
  subnet_id              = "${var.private_subnet_id_1}"
  vpc_security_group_ids = ["${aws_security_group.staging.id}"]
  iam_instance_profile   = "${aws_iam_instance_profile.staging.name}"
  key_name               = "${var.instance_private_key_name}"

  tags {
    Name = "staging-${var.stack_name}-instance"
  }

  user_data = "${file("instances/staginguserdatascript.sh")}"
}

Managing State – This is the second advantage of Terraform.  Terraform knows the state of the environment from the last run, so you can run “terraform plan” and see exactly what will change among the items Terraform manages.  With an update to a CFT, you only know that an item will be “Modified,” but not how.  At that point you’ll need to audit the modified item and manually compare it to the existing CFT to determine what is going to be updated.

Multi-Provider Support – Depending on how you utilize AWS and other providers, this can be a very big deal.  Terraform gives you a centralized place to manage multiple providers.  Maybe your DNS is in Azure but your servers are in AWS; you could build an ELB and update the Azure DNS all in the same run.  Or maybe you want to update your AWS infrastructure and your Datadog monitoring at the same time.  If you need a provider they don’t have, you can presumably add it, since the code is open source.

Short learning curve – While they did introduce custom formatting for Terraform templates, the CFT and API nomenclature is *mostly* preserved.  For example, when creating an instance in CFT you need an InstanceType and KeyName. In Terraform this is instance_type and key_name.  Words are separated by underscores and all lowercase.  This makes it somewhat easy to migrate existing CFTs.  All told, it took about a day of experimentation with Terraform to feel comfortable.

Open Source – The core Terraform tool is open source, which brings to the table all the good and bad you normally associate with open source.  As mentioned previously, if you have Go resources, the world is your oyster.  Terraform can be made to do whatever you want it to do, and contributing back to the repository enhances it for everyone else.  You can check out the git repo to see that it has pretty active development.

Challenges

Cost – The open source version of Terraform is free, but the enterprise version is expensive.  Of course, the enterprise version adds a lot of bells and whistles, but I would recommend a serious evaluation to determine whether they are worth the cost.

No Rollback – CFT’s automatic rollback of a failed deployment or upgrade is sometimes a blessing and sometimes a curse, but at least you have the option.  With Terraform, there is never an automatic rollback.  You have to figure out what went wrong and plow forward, or roll back your code and then re-deploy.  Either way it can be messy.  To be fair, rollback with AWS CFT can be messy too, especially when changes are introduced that leave the deployed stack and the template in an incompatible state.  That invariably leads to an AWS support ticket to make adjustments that are not possible otherwise.

Tight Coupling – CFT is “tightly coupled” with AWS, while Terraform is not.  This is the yang to the open source yin.  Amazon has a dedicated team that continues to improve and update CFT.  They won’t just focus on the most popular items, and they have access to internal resources to vet and prove out their approach.

Conclusion

While this article only scratches the surface of the differences between utilizing AWS CFT and Terraform, it provides a good starting point when evaluating both.  If you’re looking for a better “infrastructure as code,” state management, or multi-provider support, Terraform is definitely worth a look.  We are here to help our customers, so if you need help developing a cloud-first strategy, contact us here.

-Coin Graham, Sr Cloud Consultant, 2nd Watch


A Step-by-Step Guide on Using AWS Lambda-Backed Custom Resources with Amazon CFTs

Amazon CloudFormation Template (CFT) custom resources allow for additional flexibility in building Amazon environments. They can be utilized in a number of ways to enhance automation and make deployment easier. Generally speaking, you’ll want to engage custom resources when you need information that is normally not available to the CloudFormation Template in order to complete the processing of the template. This could be a Security Group in another account, the most up-to-date AMI, or a Spot Price analysis. Additionally, they are useful for creating functionality in CFT that doesn’t exist yet, like verifying that the database size you’ve chosen is valid, or checking that you have a valid password (our example). You won’t want to use them for anything “one-off,” as they take time to develop and process. You will also want to avoid using them for long-running processes, since AWS CloudFormation will time out if the internal processes take too long.

To give you an easy example of how this is set up, we’re going to build an AWS Lambda-backed custom resource that will verify that a password is correct by having you type it in twice. If the passwords you type don’t match, the CFT will stop processing and roll back. This is a bit of functionality that’s missing from AWS CFT, and its absence can be very frustrating once your environment is deployed and you realize you fat-fingered the password parameter. The basic areas we’ll be focusing on are AWS CloudFormation and AWS Lambda. This guide assumes you’re familiar with both of these already, but if you’re not, learn more about AWS Lambda here or AWS CFTs here.

You want to start with the CFT that you’re looking to add the custom resource to and make sure it is functional. It’s always best to start from a place of known good. Adding a Lambda-backed custom resource to a CFT consists of four basic parts:

1. IAM Role for Lambda Execution: This is the role that will be assigned to your Lambda function. You will utilize this role to give the Lambda function permissions to other parts of AWS as necessary. If you don’t need to add any permissions, just create a role that allows Lambda to write out its logs.

"LambdaExecutionRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {
          "Service": ["lambda.amazonaws.com"]
        },
        "Action": ["sts:AssumeRole"]
      }]
    },
    "Policies": [{
      "PolicyName": "lambdalogtocloudwatch",
      "PolicyDocument": {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": ["logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"],
          "Resource": "arn:aws:logs:*:*:*"
        }]
      }
    }]
  }
}

2. The Lambda Function: There are two ways of introducing your Lambda function into your CFT. If it is small, you can embed the function directly in the CFT using the “ZipFile” option under the “Code” property of the “AWS::Lambda::Function” resource. Or you can use the “S3Bucket” option and reference an S3 bucket that already holds your code in zip format (a sketch of that variant follows the ZipFile example below). Note that if you use the S3 bucket option, it is the user deploying the CFT who needs permission to read from the bucket, not the Lambda function. Next you’ll set your “Handler,” “Runtime,” “Timeout,” and “Role” (which should reference the ARN of the role you created previously). If you are using the ZipFile option, your handler is the default for the runtime.

"CheckPasswordsFunction": {
  "Type": "AWS::Lambda::Function",
  "Properties": {
    "Code": {
      "ZipFile": {
        "Fn::Join": ["\n", [
          "var response = require('cfn-response');",
          "exports.handler = function(event, context) {",
          " if (event.RequestType == 'Delete') {",
          " response.send(event, context, response.SUCCESS);",
          " return;", " }",
          " var password = event.ResourceProperties.Password;",
          " var confpassword = event.ResourceProperties.ConfirmPassword;",
          " var responseData = {};",
          " if (password == confpassword) {",
          " responseData = {'passwordcheck': 'Password Valid!'};",
          " response.send(event, context, response.SUCCESS, responseData);",
          " } else {",
          " responseData = {Error: 'Passwords do not match'};",
          " console.log(responseData.Error);",
          " responseData = {'passwordcheck': 'Password Invalid!'};",
          " response.send(event, context, response.FAILED, responseData);",
          " }", "};"
        ]]
      }
    },
    "Handler": "index.handler",
    "Runtime": "nodejs",
    "Timeout": "30",
    "Role": {
      "Fn::GetAtt": [
        "LambdaExecutionRole", "Arn"
      ]
    }
  }
}
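If the function is too large to embed with ZipFile, the Code property can instead point at a zip archive in S3. Here is a minimal sketch of that variant, shown in the YAML form from the earlier post (the JSON property names are identical); the bucket name, key, and handler module are placeholders:

CheckPasswordsFunction:
  Type: "AWS::Lambda::Function"
  Properties:
    Code:
      S3Bucket: my-lambda-artifacts      # placeholder bucket holding the packaged code
      S3Key: check-passwords.zip         # placeholder key for the zip archive
    Handler: checkpasswords.handler      # must match the file name inside the zip
    Runtime: nodejs
    Timeout: 30
    Role: !GetAtt LambdaExecutionRole.Arn

Remember that, as noted above, the user deploying the CFT needs read access to that bucket.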

3. The Lambda Callout: The Lambda callout is where you pass variables from the CFT to your Lambda function. It’s important to name these appropriately and consider what effect case and naming conventions will have on the runtime you’re using. The “ServiceToken” property is the ARN of the Lambda function you just created, and the rest of the properties are the variables you’re passing through.

"TestPasswords": {
  "Type": "Custom::LambdaCallout",
  "Properties": {
    "ServiceToken": {
      "Fn::GetAtt": [
        "CheckPasswordsFunction",
        "Arn"
      ]
    },
    "Password": {
      "Ref": "Password"
    },
    "ConfirmPassword": {
      "Ref": "ConfirmPassword"
    }
  }
}

4. The Response: There are two key parts of the response from a custom resource, and this applies to non-Lambda custom resources too. The first is the “Status” of the response. If you return a status of “FAILED,” the CFT will short-circuit and roll back. If you return a status of “SUCCESS,” the CFT will continue to process. This is important because sometimes you’ll want to send SUCCESS even if your Lambda didn’t produce the desired result. In the case of our password check, we wanted to stop the CFT from moving forward to save time; knowing only at the end that the passwords were mismatched would not be very valuable. The second important piece of the response is the “Data.” This is how you pass information back to CloudFormation to process the result. You’ll set the “Data” variable in your code as JSON and reference the key/value pairs back inside the CFT. You’ll use the “Fn::GetAtt” function to reference the Lambda callout you created previously along with the key of the JSON data you’re interested in.

"Outputs": {
  "Results": {
    "Description": "Test Passwords Result",
    "Value": {
      "Fn::GetAtt": ["TestPasswords",
        "passwordcheck"
      ]
    }
  }
}

As far as your Lambda function is concerned, you may or may not need to reference variables sent from the CloudFormation Template. These variables will be in the “event”->”ResourceProperties” dictionary/hash. For example:

NodeJs

var password = event.ResourceProperties.Password

Python

password = event['ResourceProperties']['Password']

And similarly, once your function has completed processing, you might need to send a response back to the CFT. Thankfully, AWS has created some wrappers to make the response easier. For Node.js it is called “cfn-response” and is only available when using the “ZipFile” option. There is a similar package for Python, but you’ll need to bundle it with your Lambda function and deploy from S3. Sending information back from your Lambda is as easy as setting the “Data” variable to properly formatted JSON and sending it back.

...
if (password == confpassword) {
responseData = {'passwordcheck': 'Password Valid!'};
response.send(event, context, response.SUCCESS, responseData);
...

That’s it. Creating a Lambda-backed custom resource can add all kinds of additional functionality and options to your CloudFormation Templates. Feel free to download the whole CFT here and test it out, use it to learn more, or Contact Us at 2nd Watch for help getting started.

-Coin Graham, Sr Cloud Engineer, 2nd Watch


"Taste the Feeling" of Innovation

Coca-Cola North America Information Technology Leads the Pack

On May 4th 2016, Coca-Cola North America Information Technology and Splunk Inc. (NASDAQ: SPLK), provider of the leading software platform for real-time Operational Intelligence, announced that Coca-Cola North America Information Technology was named to this year’s InformationWeek Elite 100, a list of the top business technology innovators in the United States. Coca-Cola North America Information Technology, a Splunk customer, is being honored for the company’s marketing transformation initiative.

Coca-Cola North America Information Technology division is a leader in migrating to the cloud and leveraging cloud native technologies.  The division re-architected its digital marketing platform to leverage cloud technology to create business insights and flexibility and to take advantage of scale and innovations of the public cloud offered by Amazon Web Services.

“The success you see from our digital marketing transformation is due to our intentional focus on innovation and agility as well as results, our team’s ingenuity and our partnership with top technology companies like Splunk,” said Michelle Routh, chief information officer, Coca-Cola North America. “We recognized a chance for IT to collaborate much more closely with the marketing arm of Coca-Cola North America to bring an unparalleled digital marketing experience to our business and our customers. We have moved technologies to the cloud to scale our campaigns, used big data analytics, beacon and Internet of Things technologies to provide our customers with unique, tailored experiences.”

Coca-Cola North America Information Technology is one of the most innovative customers we have seen today. They are able to analyze data that was previously not available to them through the use of Splunk® Enterprise software. Business insights include trending flavor mixes, usage data and geographical behavior on its popular Freestyle machines to help improve fulfillment and marketing offers.

We congratulate both the Coca-Cola North America Information Technology division and Splunk on being named to the InformationWeek Elite 100! Read more

-Jeff Aden, EVP Strategic Business Development & Marketing, Co-Founder


Why agile is fragile and how MSPs can help

Midmarket and enterprise companies looking to transform their IT operations to new models based on the public cloud and Agile/DevOps have a long, arduous journey. Moving from internally managed IT departments with predictable needs to ones that must be flexible and run on demand is one of the greatest paradigm shifts for CIOs today.

It requires new skills and new ways of working, including a fundamental reorganisation of IT organisations. Meanwhile, IT must continue with business as usual, supporting core systems and processes for productivity and operations.

Many companies can’t get there fast enough, which is why the market for service providers specialising in public cloud infrastructure and DevOps is growing. A new crop of MSPs that focus specifically on public cloud infrastructure has appeared in the last few years to address the specific needs of public cloud as it relates to migration, legacy systems, integration, provisioning and configuration, security and financial management.

IT organisations are moving on from testing and prototyping to launching production applications in the cloud, and there is often not enough time to ramp up quickly on the new capabilities needed for success. Here’s a look at how MSPs can ease the pain of enterprise public cloud and DevOps initiatives:

Network management:
In the public cloud, network management is handled differently due to the differences in the actual network environment and the fact that you just don’t have the same level of visibility and control as you do in your own data center. MSPs can help by lending their expertise in building and managing secure networks in the public cloud.

Without the help of an MSP, your business will need to do its own homework on how the network works and what tools are effective on the security side of things (heads up: it’s a much different list than in the traditional data center). What do you get out of the box? When do you need third-party software to help? MSPs have years of experience running production workloads in the public cloud and can help you make the right decisions the first time, without an exhaustive discovery phase.

Design and architecture:
Deploying systems into the cloud requires a mental shift due to the elastic nature of virtual resources. This reinvents infrastructure design, since instances come and go according to demand and performance needs. IT needs to understand how to automate infrastructure changes according to shifting requirements and risks, such as hardware failures and security configurations. Experienced service providers that have helped companies migrate to the cloud time and again can deliver best practices and reduce risk.

Workflow:
Cloud and DevOps go hand in hand due to the joint requirements of frequent iteration, rapid change and continuous integration/delivery. The processes and tools for CI and CD are still emerging. Doing this well requires not only new, collaborative workflows but also working with unfamiliar technologies such as containers.

While AWS has released a new service for managing containers, that’s just one piece of the puzzle. Many companies moving toward DevOps benefit from outside help in training, planning, measuring results and navigating internal barriers to change. Lastly, the automation infrastructure itself (Puppet, Chef, others) requires maintenance and is critical in the security landscape. An MSP can help build and manage this infrastructure so that you can focus on your code.

Security:
Security in the cloud is a shared responsibility. Many customers incorrectly assume that because public cloud providers have excellent security records and deep compliance frameworks for PCI and other regulations, their infrastructure is secure by default. The reality is that providers do an excellent job of securing the underlying infrastructure, but that is where things stop for them and begin for you as a customer.

Most security issues found in the public cloud today relate to misconfiguration; an MSP can help track configuration changes and validate architectural designs against them. In DevOps, rapid development processes may inadvertently trump security, and using containers and micro-services to speed deployment also introduces security risks. Missteps in the area of security can be long and costly to fix later; an MSP can help mitigate that risk through upfront design and ongoing monitoring and management.

Provisioning and cost management:
Virtual sprawl is no myth. IT teams that for years have used over-provisioning as a stopgap measure to ensure uptime may struggle to adapt to a different approach using on-demand infrastructure. Experts can help make that transition through proper provisioning at the outset as well as applying spend management tools built for the cloud to monitor and predict usage.

One of the best features of public cloud providers is high elasticity, the ability to spin up large amounts of virtual instances at a moment’s notice and then shut them off when you are done using them. The trick here is to remember to shut them off: many development teams claim to work 24×7 but the reality is usually much different. An MSP can set up cost alerting and monitoring and can even leverage tools to help you allocate costs to your heavy users or business units.

Legacy systems:
Large companies often want to move legacy systems to the public cloud to reduce the costly overhead of storage and maintenance. Yet no CIO wants to be accountable for migrating a mission-critical legacy system which later doesn’t perform well or is out of compliance.

Service providers can help evaluate whether a system can be migrated as is, “lift and shift,” or needs to be reconfigured to run in the cloud. CIOs may lean toward handling this task with their internal teams, yet doing so will likely take longer and require significant retraining of staff. There’s also the need to pay close attention to compliance. Experienced MSPs can help navigate financial regulations (Sarbanes-Oxley, PCI), privacy laws (HIPAA) and data management regulations in some sectors that go against the grain of DevOps.

Operations management:
Most IT infrastructure managers who have been around for a while are well-versed in VMware-specific tools such as vSphere. Unfortunately, most of those operational tools made to support virtualisation software don’t work well, or at all, in the public cloud. There are some cloud-native management tools available now, including those from AWS, but none of them are clear winners yet.

IT departments are stuck patching together their own toolsets or developing them from scratch, as Netflix has done. That’s not always the best use of time and money, depending on your sector. MSPs can take over the operations management function altogether. Customers benefit from the continual learning of industry best practices that the service provider must undertake to effectively manage dozens or hundreds of customers.

Culture clash:
As with any disruptive technology, people are the biggest barrier to change. While human beings are highly adaptable, many of us simply are not comfortable with change. Take a hard look not just at skills but at your culture. Do you have the type of organisation where people are willing and able to adapt without threatening to quit? If not, using the services of an MSP might be the path of least friction. Some organisations simply want the benefits of new technologies without needing to understand or manage every nook and cranny.

Beyond all the above advantages, an MSP partner helps IT organisations move faster by serving as a knowledgeable extension of the IT department. CIOs and their teams can focus on serving the business and its evolving requirements, while the MSP helps ensure that those requirements transition well to the public cloud. Executives who have decided that the public cloud is their future and that DevOps is the way to get there are progressive thinkers who are unafraid to take risks.

Yet that doesn’t mean they should go it alone. Find a partner you can trust, and move toward your future with an experienced team propping you up all the way. The old adage of “you won’t get fired for hiring <name legacy service provider>” has now changed to “My new MSP got me promoted.”

-Kris Bliesner, CTO

This article was first published on ITProPortal on 11/25/15.
