
Managing AWS Billing

Without a doubt, AWS has fundamentally changed how modern enterprises deploy IT infrastructure.  Its services are flexible, cost effective, scalable, secure and reliable.  And while moving from on-premises data centers to the cloud is, in most cases, the smart move, once you are there, managing your costs becomes much more complex.

On-premises costs are straightforward: enterprises purchase servers and amortize their costs over the expected life.  Shared services such as internet access, racks, power and cooling are proportionally allocated to the cost of each server.  AWS, on the other hand, invoices each usage type separately.  For example, if you are running a basic EC2 instance, you will be charged not only for the EC2 box usage but also for the data transfer, EBS storage and associated snapshots.  You could end up with as many as 13 line items of cost for a single EC2 instance.

Example: pricing line items for a single c4.xlarge Linux virtual machine running in the US East Region (image not shown)
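While the original image is not reproduced here, the billable usage types for an instance like this typically include the following (an illustrative list based on common AWS usage types, not an exact reproduction of the figure):

BoxUsage:c4.xlarge          # instance hours
EBS:VolumeUsage.gp2         # EBS volume storage, GB-month
EBS:SnapshotUsage           # EBS snapshot storage, GB-month
DataTransfer-In-Bytes       # data transferred in
DataTransfer-Out-Bytes      # data transferred out to the internet
DataTransfer-Regional-Bytes # data transferred between Availability Zones
CW:AlarmMonitorUsage        # CloudWatch alarms, if configured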

When examining the composition of various workload types, the number of line items to manage varies widely.  A traditional VM-based workload may have 50 cost line items for every $1,000 of spend, while an agile, cloud-native workload may have as many as 500 per $1,000, and a dynamic workload leveraging Spot Instances may have upwards of 1,200 per $1,000.  This “parts bin” approach to pricing makes the job of cost accounting challenging.

To address this complexity and enable accurate cost accounting of your cloud costs, we recommend creating a business-relevant financial tagging schema that organizes your resources and associated cost line items according to your specific financial accounting structure.

Here are some recommended financial management tags you should consider (image not shown):
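While the original table is not reproduced here, financial management tag keys along these lines commonly include cost-center, business-unit, application, environment (e.g., production, staging, development) and owner; treat these as illustrative suggestions rather than a reproduction of the table.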

The integrity of your AWS tagging data is extremely important in ensuring the quality of the information it provides, and it depends directly on the rigor applied in adopting a systematic, disciplined approach to tagging.

Financial Management Tagging – Best Practices

  • Create a framework or standard for your enterprise that outlines required tag names, tag formatting rules, and governance of tags.
  • Enforce and automate tagging at launch of each resource via CloudFormation templates or other infrastructure-as-code tools, such as Terraform, to ensure cost accounting details are captured from the time of launch (see the template sketch after this list).
    • NOTE:  Tags are point-in-time based.  If a resource is launched without being tagged and then tagged sometime in the future, all hours the resource ran prior to being tagged will not be included in tag reports in the AWS console.
  • Manually creating tags and associated values is strongly discouraged, as it leads to mistagged and untagged resources and inaccurate cost accounting.
  • Standardize on all-uppercase or all-lowercase keys and values to avoid discrepancies in capitalization.
    • NOTE: “Production” and “production” are considered two different tag names or values.
  • Monitor resources with AWS Config Rules and alert on newly created resources that are not tagged.
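Below is a minimal sketch of what launch-time tagging and tag monitoring might look like in a CloudFormation template.  The tag keys, values and rule name are illustrative assumptions rather than a prescribed schema, and the REQUIRED_TAGS managed rule assumes AWS Config is already recording resources in the account:

Resources:
  AppInstance:
    Type: "AWS::EC2::Instance"
    Properties:
      ImageId: ami-c481fad3        # illustrative AMI ID
      InstanceType: t2.nano
      Tags:                        # all-lowercase keys and values, per the practice above
        - Key: cost-center
          Value: "1001"
        - Key: environment
          Value: production
        - Key: application
          Value: billing-portal
  RequiredTagsRule:
    Type: "AWS::Config::ConfigRule"
    Properties:
      ConfigRuleName: required-financial-tags
      Source:
        Owner: AWS
        SourceIdentifier: REQUIRED_TAGS   # AWS managed rule that flags resources missing these tags
      InputParameters:
        tag1Key: cost-center
        tag2Key: environment
        tag3Key: application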

Once your tagging schema is created, automation is in place to tag resources at launch, and alerts are set up to keep tagging under management, you can accurately view, track and report your cost and usage along any of your tagging dimensions.

Financial Management Reporting – Best Practices

  • Using your tagging schema, group your resources by workload.
  • Apply Reserved Instance discounts to the workloads for which you purchased them.
    • NOTE: 2nd Watch’s CMP Finance Manager tool converts reserved instances into resources so that you can add them to the workload they were intended for.
  • Organize your groups to match your specific multi-level financial reporting structure.
  • Manage shared resources
    • Create groups for shared resources. If you have resources that are shared across multiple workloads, such as a database used by multiple applications or a virtual machine with more than one application running on it, create groups to capture these costs and allocate them proportionally to the applications using them. For example, a database serving three applications equally would have one third of its cost allocated to each.
  • Manage un-taggable resources
    • Create a group for un-taggable resources. Some AWS resources cannot be tagged; group them together and allocate their associated costs proportionally across all applications.
  • Manage spend to budget
    • Create budgets and budget alerts for each group to ensure you stay within budget throughout the year (a sketch of a tag-filtered budget follows this list).
    • Key alerts
      • Forecasted month end cost exceeds alert threshold
      • MTD cost is over alert threshold
      • Forecasted year end cost exceeds alert threshold
      • YTD cost is over alert threshold
    • Sign up to receive monthly cost and usage reports for integration into your internal cost accounting system.
      • Cost by application, environment, business unit etc.
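Below is a sketch of what one such budget with alerts could look like in a CloudFormation template.  The budget name, dollar amounts, thresholds, email address and the user:application tag filter are illustrative assumptions to adapt to your own tagging schema and reporting structure:

Resources:
  ApplicationMonthlyBudget:
    Type: "AWS::Budgets::Budget"
    Properties:
      Budget:
        BudgetName: billing-portal-monthly
        BudgetType: COST
        TimeUnit: MONTHLY
        BudgetLimit:
          Amount: 5000                            # illustrative monthly budget in USD
          Unit: USD
        CostFilters:
          TagKeyValue:
            - "user:application$billing-portal"   # costs tagged application=billing-portal
      NotificationsWithSubscribers:
        - Notification:                           # forecasted month-end cost exceeds threshold
            NotificationType: FORECASTED
            ComparisonOperator: GREATER_THAN
            Threshold: 100                        # percent of the budgeted amount
          Subscribers:
            - SubscriptionType: EMAIL
              Address: finance@example.com
        - Notification:                           # MTD cost is over the alert threshold
            NotificationType: ACTUAL
            ComparisonOperator: GREATER_THAN
            Threshold: 80
          Subscribers:
            - SubscriptionType: EMAIL
              Address: finance@example.com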

 

Even though AWS’ “parts bin” approach to pricing is complicated, following these guidelines will help ensure accurate cost accounting of your cloud spend.

 

–Timothy Hill, Senior Product Manager, 2nd Watch

 

 


4 Things to Watch for at AWS re:Invent 2016

2nd Watch is always buzzing before AWS re:Invent as we get ready for a great conference full of breakout sessions, boot camps, certifications, keynotes and – you can count on it – some big AWS announcements.  As a Platinum sponsor, we are committed to the success of the conference and enjoy the opportunity to connect with clients and prospective clients.  We are also excited to hear what new services and features AWS will launch to make the public cloud better, faster and easier to adopt.  Here are three areas where we expect AWS to make announcements, plus one event you won't want to miss:

1. Services and Features to Drive Further Enterprise Adoption

Cloud adoption has moved beyond startups to the enterprise.  The majority of Fortune 500 companies have adopted the cloud in some form, with more and more moving to a cloud-first approach.  Still, a very large portion of enterprise workloads lives in data centers and colocation facilities, and AWS will continue to add services that enable customers to run these workloads in the cloud.  In the past we have seen AWS CloudTrail, AWS Config, and other services that enable governance and security.  We expect this to continue and will be watching for new services that provide the governance and security enterprises need.

2. Software Development Life Cycle Services

AWS has released a number of tools that make it easier for developers and operations teams to manage the software development and deployment process.  CodeCommit, CodePipeline, and CodeDeploy can be included in CloudFormation templates so that Infrastructure as Code covers the tooling development teams need.  The API-native services and features of AWS have always made it popular with people who write code, and we expect AWS to keep doing more in this space, further enabling developers with new services and tools.

3. Beyond AWS

We were excited to see AWS reach beyond its own cloud by offering Amazon Linux as a container image.  Previously, the tools and services that ran outside of the cloud existed to bring workloads to the cloud – among them AWS Application Discovery Service, AWS Storage Gateway, AWS Snowball, and AWS Database Migration Service.  Offering its operating system as a Linux container makes it possible for customers to run container services across clouds, both public and private.  We expect this trend to continue, with AWS releasing more services that let customers leverage their AWS investment while still honoring previous investments in on-premises infrastructure or other clouds.

4. AWS re:Play Party

The end-of-show party is always a good time, and we encourage you not to miss it.  After a week of talking to peers and digesting a lot of technology, it is fun to let loose, play some video games, and listen to some loud electronic music.  We will not try to predict the headliner, but it will probably be a big name in the EDM scene.  We'd love to see Aoki throw a cake at some re:Invent partygoers, but we will have to wait until December 1st to find out who is performing along with everyone else.

-Chris Nolan, Director of Products


Writing CloudFormation Templates in YAML – A First Look

AWS recently released a new “game changing” feature for CloudFormation Templates – support for YAML.  I'd like to give a first look at utilizing YAML for CloudFormation Templates (CFTs) and discuss how this feature might be incorporated into the architect's and engineer's toolbox for AWS.  If you're not familiar with YAML, you'll want to take a look at the guides here.

YAML Support

This is something the AWS community has been begging for for quite a while.  One of the easiest ways to tell that JSON is not sufficient is the number of projects that exist to support JSON-based templates (Troposphere, SparkleFormation, Terraform, etc.).  Now, with YAML support, we're that much closer to the Infrastructure-as-Code feeling we've been missing.  Let's walk through some sample YAML CFT code and highlight where it has a major impact.  The code samples below are borrowed almost entirely from the AWS User Guide for CloudFormation.

AWSTemplateFormatVersion: "2010-09-09"

Description: A sample template

Parameters:
  FilePath:
    Description: The path of the file.
    Type: String
    Default: /home/ec2-user/userdata

Resources:
  MyEC2Instance:
    Type: "AWS::EC2::Instance" # 1 Quotes are unnecessary here - will they always be?
    Properties:
      ImageId: ami-c481fad3    # 2 Quotes removed from the example - still works
      InstanceType: t2.nano
      KeyName: 2ndwatch-sample-keypair
      Tags:                    # 3 Here I switch to using single spaces
       - Key: Role             # 4 Tag list item is inline
         Value: Test Instance
       -                       # 5 Next list item is block
         Key: Owner
         Value: 2ndWatch
      BlockDeviceMappings:     # 6 Switch back to double spaces
        -
          DeviceName: /dev/sdm
          Ebs:
            VolumeType: gp2
            VolumeSize: 10
      UserData:
        Fn::Base64: !Sub |     # No more Fn::Join needed
          #!/bin/bash
          echo "Testing Userdata" > ${FilePath}
          chown ec2-user.ec2-user ${FilePath}

A couple of things you'll notice in this example are how clean the code looks and the presence of comments, both of which help make code descriptive and clear.  In the comments I call out a few considerations with the YAML format.  First, many of the examples AWS provides include quotes around values that don't need them.  When I removed them (comments #1 and #2), the CFT still worked.  That said, you may want to codify quotes/no quotes at the start of your project, or for your entire department or company, for consistency.  Additionally, as you will notice in my second set of comments, I switch from 2-space to 1-space YAML indentation (comments #3 and #6).  This is “legal” but annoying.  Just as with JSON, you'll need to set some of your own rules for how formatting is done to ensure consistency.

Taking a look at the Tags section, you'll see that lists are supported using hyphen notation.  In the Tags property I've displayed two formats for denoting a list item: a hyphen alone on a line with a “block” underneath (comment #5), or inline, with the rest of the item following the hyphen at the same indentation (comment #4).  As before, you'll want to decide how you format lists; AWS's own examples do it in different ways.

Moving on to the UserData property, the next thing you'll notice is the absence of the Fn::Join function.  This makes the creation of userdata scripts very close to the actual script you would run on the server.  In a previous article I gave Terraform high marks for similar functionality, and now AWS has brought CFT up to par.  The new !Sub notation helps clean up the substitution a bit, too (it's also available in JSON).  Of course, if you miss it, Fn::Join can still be used like this:

Tags:
- Key: Name
  Value:
    Fn::Join:
    - "-"
    - - !Ref AWS::StackName
      - PrivateRouteTable1

This would produce a tag of Name = StackName-PrivateRouteTable1 just as it did previously in JSON, but we would advise against it: the old notation is much less flexible and more prone to error than the new “joinless” formatting.  Notice that nested lists are created using two hyphens.
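For comparison, the same tag written with the new !Sub notation collapses the join to a single line:

Tags:
- Key: Name
  Value: !Sub "${AWS::StackName}-PrivateRouteTable1"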

Conversion Tools

In another bit of good news, you can utilize online conversion tools to update your existing JSON CFTs to YAML.  As you might guess, it will take a bit of cleanup to bring the output in line with whatever formatting decisions you've made, but it gets you most of the way there without a complete rewrite.  Initial tests on simple CFTs ran with no updates required (using http://www.json2yaml.com/).  A second test on a 3,000-line CFT converted down to 2,300 lines of YAML and also ran without needing any updates (YMMV).  This is a big advantage over tools like Terraform, where all new templates would have to be built from scratch, particularly since a JSON-to-YAML conversion tool could probably be whipped together in short order.

All in all, this is a great update to the CloudFormation service and demonstrates AWS's commitment to pushing the service forward.

If you have questions or need help getting started, please Contact Us at 2nd Watch.

-Coin Graham, Sr Cloud Consultant, 2nd Watch


AWS CFT vs. Terraform: Advantages and Disadvantages

UPDATE:  AWS CloudFormation now supports YAML.  To be sure, this is a huge improvement over JSON in terms of formatting and use of comments, and it will also simplify Windows and Linux userdata scripts.  For teams that are just starting with AWS and don't need any of the additional benefits of Terraform, YAML is the best place to start.  Existing teams will likely still have a cache of JSON templates that they would need to recreate, and they should consider whether the other benefits of Terraform warrant a move away from CFT.

If you're familiar with AWS CloudFormation Templates (CFTs) and how they work but have been considering Terraform, this guide is for you.  This basic guide will introduce you to some of the advantages and disadvantages of Terraform in comparison to CFT so you can decide whether to investigate further and try it yourself.  If you don't have a rudimentary familiarity with Terraform, head over to https://www.terraform.io/intro/index.html for a quick overview.

Advantages

Formatting – This is far and away Terraform's strongest advantage.  JSON is not a coding language, and it shows.  It's common for CFTs to be 3,000 lines long, and most of that is just JSON braces and brackets.  Terraform has a simple (but custom) HCL for creating templates that makes it easy to document and comment your code.  Whole sections can be moved into a folder structure for design and clarity.  This makes your infrastructure feel a bit more like actual code.  Lastly, you won't need to convert userdata bash and PowerShell scripts to JSON only to deploy and discover you forgot one last escaping backslash.  Userdata scripts can be written in separate files, exactly as you would write them on the server locally.  For an example, here's a comparison of JSON to Terraform for creating an instance:

Instance in CFT


"StagingInstance": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "UserData": {
      "Fn::Base64": {
        "Fn::Join": ["", [
          "#!/bin/bash -v\n",
          "yum update -y aws*\n",
          "yum update --sec-severity=critical -y\n",
          "yum install -y aws-cfn-bootstrap\n",
          "# download data and install file\n",
          "/opt/aws/bin/cfn-init -s ", {
            "Ref": "AWS::StackName"
          }, " -r StagingInstance ",
          "    --region ", {
            "Ref": "AWS::Region"
          },
          " || error_exit 'Failed to run cfn-init'\n"
        ]]
      }
    },
    "SecurityGroupIds": [{
      "Ref": "StagingSecurityGroup"
    }],
    "ImageId": {
      "Ref": "StagingAMI"
    },
    "KeyName": {
      "Ref": "InstancePrivateKeyName"
    },
    "InstanceType": {
      "Ref": "StagingInstanceType"
    },
    "IamInstanceProfile": {
      "Ref": "StagingInstanceProfile"
    },
    "Tags": [{
      "Key": "Name",
      "Value": {
        "Fn::Join": ["-", [
          "staging", {
            "Ref": "AWS::StackName"
          }, "app-instance"
        ]]
      }
    }],
    "SubnetId": {
      "Ref": "PrivateSubnet1"
    }
  }
}

Instance in Terraform


# Create the staging instance
resource "aws_instance" "staging" {
  ami                    = "${var.staging_instance_ami}"
  instance_type          = "${var.staging_instance_type}"
  subnet_id              = "${var.private_subnet_id_1}"
  vpc_security_group_ids = [
    "${aws_security_group.staging.id}"
  ]
  iam_instance_profile = "${aws_iam_instance_profile.staging.name}"
  key_name             = "${var.instance_private_key_name}"

  tags {
    Name = "staging-${var.stack_name}-instance"
  }

  user_data = "${file("instances/staginguserdatascript.sh")}"
}

Managing State – This is Terraform's second big advantage.  Terraform knows the state of the environment from the last run, so you can run “terraform plan” and see exactly what will change among the items Terraform has created.  With an update to a CFT, you only know that an item will be “Modified,” but not how.  At that point you'll need to audit the modified item and manually compare it to the existing CFT to determine what needs to be updated.

Multi-Provider Support – Depending on how you utilize AWS and other providers, this can be a very big deal.  Terraform gives you a centralized location to manage multiple providers.  Maybe your DNS is in Azure but your servers are in AWS; you could build an ELB and update the Azure DNS all in the same run.  Or maybe you want to update your AWS infrastructure and your DataDog monitoring at the same time.  And if you need a provider Terraform doesn't have, you can presumably add it, since the code is open source.

Short Learning Curve – While Terraform did introduce a custom format for its templates, the CFT and API nomenclature is *mostly* preserved.  For example, when creating an instance in CFT you need an InstanceType and KeyName; in Terraform these become instance_type and key_name.  Words are separated by underscores and are all lowercase.  This makes it fairly easy to migrate existing CFTs.  All told, it took about a day of experimenting with Terraform to feel comfortable.

Open Source – The core Terraform tool is open source, which brings to the table all the good and bad you normally associate with open source.  As mentioned previously, if you have GoLang resources, the world is your oyster.  Terraform can be made to do whatever you want it to do, and contributing back to the repository enhances it for everyone else.  You can check out the git repo and see that development is quite active.

Challenges

Cost – The open source version of Terraform is free, but the enterprise version is expensive.  The enterprise version does add a lot of bells and whistles, but I would recommend a serious evaluation to determine whether they are worth the cost.

No Rollback – Rolling back a CFT deployment or upgrade is sometimes a blessing and sometimes a curse, but with CFT you at least have the option.  With Terraform there is never an automatic rollback: you have to figure out what went wrong and plow forward, or first roll back your code and then re-deploy.  Either way it can be messy.  To be fair, rollback of an AWS CFT can be messy too, especially when changes are introduced that make CFT deployment and reconfiguration incompatible.  That invariably leads to the creation of an AWS support ticket to make adjustments to the CFT that are not possible otherwise.

Tight Coupling – CFT is “tightly coupled” with AWS, while Terraform is not.  This is the yang to the open source yin.  Amazon has a dedicated team that continually improves and updates CFT.  That team won't focus only on the most popular items, and it has access to internal resources to vet and prove out its approach.

Conclusion

While this article only scratches the surface of the differences between AWS CFT and Terraform, it provides a good starting point for evaluating both.  If you're looking for better infrastructure as code, state management, or multi-provider support, Terraform is definitely worth a look.  We are here to help our customers, so if you need help developing a cloud-first strategy, contact us here.

-Coin Graham, Sr Cloud Consultant, 2nd Watch


A Step-by-Step Guide on Using AWS Lambda-Backed Custom Resources with Amazon CFTs

Amazon CloudFormation Template (CFT) custom resources allow for additional flexibility in building Amazon environments.  They can be utilized in a number of ways to enhance automation and make deployment easier.  Generally speaking, you'll want to engage custom resources when you need information that is normally not available to the CloudFormation template in order to complete the processing of the template.  This could be a Security Group in another account, the most up-to-date AMI, or a Spot price analysis.  Custom resources are also useful for adding functionality that doesn't exist in CFT yet, like verifying that the database size you've chosen is valid, or checking that you have a valid password (our example).  You won't want to use them for anything “one-off,” as they take time to develop and process, and you should avoid long-running processes, since AWS CloudFormation will time out if the internal processing takes too long.

To give you an easy example of how this is set up, we're going to build an AWS Lambda-backed custom resource that verifies a password is correct by having you type it in twice.  If the passwords you type don't match, the CFT will stop processing and roll back.  This is a bit of functionality that's missing from AWS CFT, and its absence can be very frustrating once your environment is deployed and you realize you fat-fingered the password parameter.  The basic areas we'll be focusing on are AWS CloudFormation and AWS Lambda.  This guide assumes you're familiar with both of these already, but if you're not, learn more about AWS Lambda here or AWS CFTs here.

You'll want to start with the CFT you're looking to add the custom resource to and make sure it is functional; it's always best to start from a known-good state.  Adding a Lambda-backed custom resource to a CFT consists of four basic parts:

1. IAM Role for Lambda Execution: This is the role that will be assigned to your Lambda function.  You will utilize this role to give the Lambda function permissions to other parts of AWS as necessary.  If you don't need to add any permissions, just create a role that allows Lambda to write out its logs.

"LambdaExecutionRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {
          "Service": ["lambda.amazonaws.com"]
        },
        "Action": ["sts:AssumeRole"]
      }]
    },
    "Policies": [{
      "PolicyName": "lambdalogtocloudwatch",
      "PolicyDocument": {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": ["logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"],
          "Resource": "arn:aws:logs:*:*:*"
        }]
      }
    }]
  }
}

2. The Lambda Function: There are two ways of introducing your Lambda function into your CFT.  If it is small, you can embed the function directly in the CFT using the “ZipFile” option under the “Code” property of the “AWS::Lambda::Function” resource.  Or you can use the “S3Bucket” option and reference an S3 bucket that holds your code in zip format.  Note that if you use the S3 bucket option, it is the user deploying the CFT who needs permission to read from the bucket, not the Lambda function.  Next you'll set your “Handler,” “Runtime,” “Timeout,” and “Role” (which should reference the ARN of the role you created previously).  If you are using the ZipFile option, your handler is the default for the runtime.

"CheckPasswordsFunction": {
  "Type": "AWS::Lambda::Function",
  "Properties": {
    "Code": {
      "ZipFile": {
        "Fn::Join": ["\n", [
          "var response = require('cfn-response');",
          "exports.handler = function(event, context) {",
          " if (event.RequestType == 'Delete') {",
          " response.send(event, context, response.SUCCESS);",
          " return;", " }",
          " var password = event.ResourceProperties.Password;",
          " var confpassword = event.ResourceProperties.ConfirmPassword;",
          " var responseData = {};",
          " if (password == confpassword) {",
          " responseData = {'passwordcheck': 'Password Valid!'};",
          " response.send(event, context, response.SUCCESS, responseData);",
          " } else {",
          " responseData = {Error: 'Passwords do not match'};",
          " console.log(responseData.Error);",
          " responseData = {'passwordcheck': 'Password Invalid!'};",
          " response.send(event, context, response.FAILED, responseData);",
          " }", "};"
        ]]
      }
    },
    "Handler": "index.handler",
    "Runtime": "nodejs",
    "Timeout": "30",
    "Role": {
      "Fn::GetAtt": [
        "LambdaExecutionRole", "Arn"
      ]
    }
  }
}

3. The Lambda Callout: The Lambda callout is where you pass the variables from the CFT to your Lambda function.  It's important to name these appropriately and to consider what effect case and naming conventions will have in the runtime you're using.  The “ServiceToken” property is the ARN of the Lambda function you just created, and the rest of the properties are the variables you're passing through.

"TestPasswords": {
  "Type": "Custom::LambdaCallout",
  "Properties": {
    "ServiceToken": {
      "Fn::GetAtt": [
        "CheckPasswordsFunction",
        "Arn"
      ]
    },
    "Password": {
      "Ref": "Password"
    },
    "ConfirmPassword": {
      "Ref": "ConfirmPassword"
    }
  }
}

4. The Response: There are two key parts of the response from a custom resource, and this applies to non-Lambda custom resources too.  The first is the “Status.”  If you return a status of “FAILED,” the CFT will short-circuit and roll back.  If you return a status of “SUCCESS,” the CFT will continue to process.  This is important because sometimes you'll want to send SUCCESS even if your Lambda didn't produce the desired result.  In the case of our PassCheck, we wanted to stop the CFT from moving forward to save time; knowing only at the end that the passwords were mismatched would not be very valuable.  The second important piece of the response is the “Data.”  This is how you pass information back to CloudFormation to process the result.  You'll set the “Data” variable in your code as JSON and reference its key/value pairs back inside the CFT, using the “Fn::GetAtt” option with the Lambda callout you created previously and the key of the JSON data you're interested in.

"Outputs": {
  "Results": {
    "Description": "Test Passwords Result",
    "Value": {
      "Fn::GetAtt": ["TestPasswords",
        "passwordcheck"
      ]
    }
  }
}

As far as your Lambda function is concerned, you may or may not need to reference variables sent from the CloudFormation Template. These variables will be in the “event”->”ResourceProperties” dictionary/hash. For example:

NodeJs

var password = event.ResourceProperties.Password

Python

password = event['ResourceProperties']['Password']

And similarly, once your function has completed processing, you might need to send a response back to the CFT.  Thankfully, AWS has created some wrappers to make the response easier.  For Node.js it is called “cfn-response” and is only available when using the “ZipFile” option.  There is a similar package for Python, but you'll need to bundle it with your Lambda function and deploy from S3.  Sending information back from your Lambda is as easy as setting the “Data” variable to properly formatted JSON and sending it back.

...
if (password == confpassword) {
responseData = {'passwordcheck': 'Password Valid!'};
response.send(event, context, response.SUCCESS, responseData);
...

That's it.  Creating a Lambda-backed custom resource can add all kinds of additional functionality and options to your CloudFormation Templates.  Feel free to download the whole CFT here and try it out or use it to learn more, or Contact Us at 2nd Watch for help getting started.

-Coin Graham, Sr Cloud Engineer, 2nd Watch


"Taste the Feeling" of Innovation

Coca-Cola North America Information Technology Leads the Pack

On May 4, 2016, Coca-Cola North America Information Technology and Splunk Inc. (NASDAQ: SPLK), provider of the leading software platform for real-time Operational Intelligence, announced that Coca-Cola North America Information Technology was named to this year's InformationWeek Elite 100, a list of the top business technology innovators in the United States.  Coca-Cola North America Information Technology, a Splunk customer, is being honored for the company's marketing transformation initiative.

The Coca-Cola North America Information Technology division is a leader in migrating to the cloud and leveraging cloud-native technologies.  The division re-architected its digital marketing platform to leverage cloud technology, creating business insights and flexibility and taking advantage of the scale and innovation of the public cloud offered by Amazon Web Services.

“The success you see from our digital marketing transformation is due to our intentional focus on innovation and agility as well as results, our team’s ingenuity and our partnership with top technology companies like Splunk,” said Michelle Routh, chief information officer, Coca-Cola North America. “We recognized a chance for IT to collaborate much more closely with the marketing arm of Coca-Cola North America to bring an unparalleled digital marketing experience to our business and our customers. We have moved technologies to the cloud to scale our campaigns, used big data analytics, beacon and Internet of Things technologies to provide our customers with unique, tailored experiences.”

Coca-Cola North America Information Technology is one of the most innovative customers we see today.  Using Splunk® Enterprise software, the division can analyze data that was previously unavailable to it.  Business insights include trending flavor mixes, usage data, and geographical behavior on its popular Freestyle machines, all of which help improve fulfillment and marketing offers.

We congratulate both the Coca-Cola North America Information Technology division and Splunk on being named to the InformationWeek Elite 100! Read more

-Jeff Aden, EVP Strategic Business Development & Marketing, Co-Founder
