
Fully Coded And Automated CI/CD Pipelines: The Process

The Why

Originally, I thought I’d give a deep dive into the mechanics of some of our automated workflows at 2nd Watch, but I really think it’s best to start at the very beginning. We need to understand “why” we need to automate our delivery. In enterprise organizations, this delivery is usually set up in a very waterfall way. The artifact is handed off to another team to push to the different environments and to QA to test. Sometimes it works, but usually not so much. In the “not so much” case, it’s handed back to DEV, which interrupts their current work.

That back and forth between teams is known as waste in the Lean/Agile world, also known as “throwing it over the wall” or “handoffs.” This is exactly the kind of waste any Agile process intends to eliminate, and it’s what led to the DevOps movement.

Now DevOps is a loaded term, much like “Agile” and “SCRUM.” It has its ultimate meaning, but most companies go part way and then call it a win. The changes that get the biggest positive effects are cultural, but many look at DevOps and see the shiny “automation” as the point of it all. Automation helps, but the core of your automation should be driven by a culture of quality above all. Just keep that in mind as you read through this article, which is specifically about all that yummy automation.

There’s a Process

Baby steps here. Each step builds on the one before it, so a series of things needs to happen before you make it to a fully automated process.

Before that though, we need to look at what a deployment is and what the components of a deployment are.

First and foremost, so long as you have your separate environments, development has no impact on the customer and therefore no impact on the business at large. There is, essentially, no risk while a feature is in development. However, the business assumes ALL the risk when there is a deployment. Once you cross that line, the customer will interact with it and either love it, hate it, or ignore it. From a development standpoint, you work to minimize that risk before you cross the deployment line – different environments, testing, a release process, etc. These are all things that can be automated, but only when that risk has been sufficiently minimized.

Step I: Automated testing

You can’t do CI or CD without testing, and it’s the logical first step. In order to help minimize the deployment risk, you should automate ALL of your testing. This will greatly increase your confidence that the changes introduced will not impact the product in ways you don’t know about BEFORE you cross that risk point at deployment. The closer an error is caught to the time it’s introduced, the better. Automated testing greatly shrinks this gap by giving the implementer faster feedback while providing repeatable results.

Step II: Continuous Integration

Your next step is to automate your integration (and integration tests, right?), which should further increase your confidence in the changes that you and your peers have introduced. The smaller the gap between integrations (just as with testing), the better, as you’ll provide feedback to the implementers faster. This means you can work on any problems while your changes are fresh in your mind. Utilizing multiple build strategies for the same product can help as well – for instance, running integration on a push to your SCM (source control management) as well as nightly builds.

Remember, this is shrinking that risk factor before deployment.

Step III: Continuous Deployment

With Continuous Deployment you take traditional Continuous Delivery a step further by automatically pushing the artifacts created by a Continuous Delivery process into production. This automation of deployments is the final step and is another important process in mitigating that risk for when you push to production. Deploying to each environment and then running that environment’s specific set of tests is your final check before you are able to say with confidence that a change did not introduce a fault. Remember, you can automate the environments as well by using infrastructure as code tooling around virtual technology (i.e. The Cloud).
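
To make that last point concrete, here’s a minimal, illustrative sketch of an environment defined as code using Terraform. The AMI ID, region, and environment values are placeholders rather than values from a real deployment:

Environment-as-Code Sketch (Terraform)
# Illustrative only - the AMI, region, and environment values are placeholders.
variable "environment" {
  description = "Which environment this stack represents (dev, qa, prod, etc.)"
  default     = "dev"
}

provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "app" {
  ami           = "ami-c481fad3"   # placeholder AMI
  instance_type = "t2.micro"

  tags {
    Name        = "app-${var.environment}"
    Environment = "${var.environment}"
  }
}

Because the environment is just a variable, the same configuration can stand up a dev, QA, or prod copy (terraform apply -var environment=qa) and tear it back down again, letting the pipeline treat environments as disposable, repeatable steps rather than hand-built snowflakes.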

Conclusion

Continuous Deployment is the ultimate goal, as a change introduced into the system triggers all of your confidence-building tools to minimize the risk to your customers once it’s been deployed to the production system. Automating it all not only improves quality, but also shortens the feedback loop to the implementer, increasing efficiency as well.

I hope that’s a good introduction! In our next post, we’ll take a more technical look at the tooling and automation we use in one of our workflows.

-Craig Monson, Sr Automation Architect (Primary)

-Lars Cromley, Director of Engineering (Editor)

-Ryan Kennedy, Principal Cloud Automation Architect (Editor)


Five Pitfalls to Avoid When Migrating to the Cloud

Tech leaders are increasingly turning to the cloud for cost savings and convenience, but getting to the cloud is not so easy. Here are the top 5 pitfalls to avoid when migrating to the cloud.

1. Choosing the wrong migration approach

There are 6 migration strategies, and choosing the right one for each application takes a considerable amount of work. Jumping into a cloud migration without the due diligence, analysis, grouping and risk ranking is ill-advised. Organizations need to conduct in-depth application analyses to determine the correct migration approach. Not all applications are cloud ready, and even those that are may take some toying with once they get there. Take the time to really understand HOW your application works, how it will work in the cloud and what needs to be done to migrate it there successfully. 2nd Watch approaches all cloud migrations using our Cloud Factory Model, which includes the following phases – discovery, design and requirement gathering, application analysis, migration design, migration planning and migration(s).

These 6 migration strategies include:

  • Retain – Leaving it as is. It could be a mistake to move the application to the cloud.
  • Rehost, aka “Lift and Shift” – Migrating the application as-is into the cloud.
  • Replatform – Characterized as re-imagining how the application is architected and developed, typically using cloud-native features. Basically, you’re throwing away the old implementation and designing something new, or maybe switching over to a SaaS tool altogether.
  • Retire – Decommissioning the application at the source; there is no migration target.
  • Re-architect/Refactor – Migration of the current application to use the cloud in the most efficient, thorough way possible, incorporating the best features to modernize the application. This is the most complex migration method as it often involves re-writing of code to decouple the application to fully support all the major benefits the cloud provides. The redesign and re-engineering of the application and infrastructure architecture are also key in this type of migration.

From a complexity standpoint, replatform and re-architect/refactor are the most complicated migration approaches. However, it depends on the application and how you are replatforming (for example, if you’re going to SaaS, it may be a very simple transition; if you’re rebuilding your application on Lambda and DynamoDB, not so much).

2. Big Bang Migration

Some organizations are under the impression that they must move everything at once. This couldn’t be further from the truth. The reality is that organizations operate in hybrid models (on-prem and cloud) for a long time because some workloads are very hard to move.

It is key to come up with a migration design and plan which includes a strategic portfolio analysis or Cloud Readiness Assessment that assesses each application’s cloud readiness, identifies dependencies between applications, ranks applications by complexity and importance, and identifies the ideal migration path.

3. Underestimating Work Involved and Integration

Migrating to the cloud is not a walk in the park. You must have the knowledge, skill and a solid migration design to successfully migrate workloads to the cloud. When businesses hear the words “lift and shift,” they are mistakenly under the impression that all one has to do is press a button (migrate) and then it’s “in the cloud.” This is a misconception that needs to be corrected. Underestimating integration is one of the largest factors of failure.

For all of the cheerleading about the benefits of moving to the cloud, deploying to the cloud adds a layer of complexity, especially when organizations are layering cloud solutions on top of legacy systems and software. It is key to ensure that the migration solution chosen can be integrated with your existing systems. Moving workloads to the cloud requires integration, and an investment in it as well. Organizations need to have a solid overall architectural design and determine what’s owned, what’s being accessed and ultimately what’s being leveraged.

Lastly, the code changes required to make the move are also often underestimated. Organizations need to remember it is not just about moving virtual machines. The code may not work the same way running in the cloud, which means the subsequent changes required may be deep and wide.

4. Poor Business Case

Determine the value of a cloud migration before jumping into one. What does this mean? Determine what your company expects to gain after you migrate. Is it cost savings from exiting the data center? Will this create new business opportunities? Faster time to market? Organizations need to quantify the benefits before they move.

I have seen some companies experience buyer’s remorse due to the fact that their business case was not multifaceted. It was myopic – exiting the datacenter. Put more focus on the benefits your organization will receive from the agility and ability to enter new markets faster using cloud technologies. Yes, the CapEx savings are great, but the long-lasting business impacts carry a lot of weight as well because you might find that, once you get to the cloud, you don’t save much on infrastructure costs.

5. Not trusting Project Management

An experienced, well-versed and savvy project manager needs to lead the cloud migration in concert with the CIO. While the project manager oversees and implements the migration plan and leads the migration process and technical teams, the CIO is educating the business decision makers at the same time. This “team” approach does a number of things. First, it allows the CIO to act as the advisor and consultant to the business – helping them select the right kind of services to meet their needs. Second, it leaves project management to a professional. And lastly, by allowing the project manager to manage, the CIO can evaluate and monitor how the business uses the service to make sure it’s providing the best return on investment.

Yvette Schmitter, Sr Manager, Project Portfolio Management, 2nd Watch


Cloud Transformation Through ITIL Service Strategy

For some IT organizations the cloud computing paradigm poses critical existential questions: How does my IT organization stay relevant in a cloud environment? How does IT still provide value to the business? What can be done to improve the business’ perception of IT’s contribution to the company? Without a clear approach to tackling these and other related questions, IT organizations stumble into a partially thought-out cloud computing strategy and miss out on capturing the short- and long-term financial ROI and transformational benefits of a cloud-first strategy.

Several key concepts and principles from ITIL’s Service Strategy lifecycle stage lend themselves to defining and guiding a strategic approach to adopting and implementing a cloud-first strategy. In this article, we’ll highlight and define some of these key principles and outline a tactical approach to implementing a cloud-first strategy.

One of the key concepts leveraged in ITIL’s Service Strategy is the Run-Grow-Transform framework from Gartner.  From an executive management perspective, the IT organization’s contribution to the company’s goals and objectives can be framed along the Run-Grow-Transform model – specifically around how IT can help the company (1) Run-The-Business, (2) Grow-The-Business, and (3) Transform-The-Business.

The CIO’s value is both objectively and subjectively measured by answering:

1 – How can IT reduce the cost of current IT operations, thus improving the bottom line?

2 – How can IT help the business expand and gain greater market share with our current business offerings?

3 – How can IT empower the business to venture out into new opportunities and/or develop new competitive business advantage?

We’ll take a close look at each model area, highlight key characteristics, and give examples of how a cloud-first policy can enable a CIO to contribute to the company’s goals and objectives – not only remaining relevant to the organization but enabling business innovation.

Run-the-Business and Cloud-First Strategy

Run the Business (RTB) is about supporting essential business operations and processes. This usually translates to typical IT services and operations such as email-messaging systems, HR services, Payroll and Financial systems. The core functionality these IT services provide is necessary and essential but not differentiating to the business. These are generally viewed as basic core commodity services, required IT costs for keeping the business operational.

The CIO’s objective is to minimize the cost of RTB activities without any compromise to the quality of service. A cloud-first policy can achieve these outcomes. It can reduce costs by moving low value-add IT activities (sometimes referred to as ‘non-differentiating work’) to a cloud provider that excels at performing the same work with hyper efficiency. Add in the ability of a cloud provider to leverage economies of scale and you have a source of reliable, highly cost-optimized IT services that cannot be matched by any traditional data center or hosting provider (see AWS’s James Hamilton discuss data center architecture at scale). Case studies from GE, Covanta, and Conde Nast bear out the benefit of moving to AWS and enabling their respective CIOs to improve their business’ bottom line.

Grow-the-Business and Cloud First Strategy

Grow the Business (GTB) activities are marked by enabling the business to successfully increase market share and overall revenue in existing markets. If a company doubles its customer base, then the IT organization responds with timely and flexible capacity to support such growth. Generally, an increase in GTB spending should be tied to an increase in business revenue.

Cloud computing providers, such as AWS, are uniquely capable of supporting GTB initiatives. AWS’ rapid elasticity drastically alters the traditional management of IT demand and capacity. A classic case in point is the “Black Friday” phenomenon. If the IT organization does not have sufficient IT resources to accommodate the projected increase in business volume, then the company risks missing out on revenue capture and may experience a negative brand impact. If the IT organization overprovisions its IT resources, then unnecessary costs are incurred and the company’s profits are adversely affected. Other similar business phenomena include “Cyber Monday,” Super Bowl ads, and product launches. Without a highly available and elastic cloud computing environment, IT will struggle to support GTB activities (see the AWS whitepaper “Infrastructure Event Readiness” for a similar perspective).

A cloud’s elasticity addresses both ends of that spectrum by not only ramping up quickly in response to increased business demand, but also scaling down when demand subsides. Additionally, AWS’ pay-for-what-you-use model is a powerful differentiating feature. Some key use cases include Crate & Barrel and Coca-Cola. Through a cloud-first strategy, a CIO is able to respond to GTB initiatives and activities in a cost-optimized manner.
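
As a rough, illustrative sketch of what that elasticity looks like in infrastructure code, the following defines an Auto Scaling group that can grow for a peak event and shrink back afterward. Terraform is used here only for illustration; the AMI, subnet IDs, and sizing numbers are placeholders, and in practice the scaling policies would be triggered by CloudWatch alarms (not shown):

Illustrative Auto Scaling Sketch (Terraform)
# Illustrative only - AMI, subnets, and thresholds are placeholders.
resource "aws_launch_configuration" "web" {
  name_prefix   = "web-"
  image_id      = "ami-c481fad3"   # placeholder AMI
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "web" {
  name                 = "web-asg"
  launch_configuration = "${aws_launch_configuration.web.name}"
  min_size             = 2     # steady-state footprint
  max_size             = 50    # headroom for a "Black Friday" style spike
  desired_capacity     = 2
  vpc_zone_identifier  = ["subnet-11111111", "subnet-22222222"]   # placeholder subnets
}

# Add capacity when demand spikes, remove it when demand subsides.
resource "aws_autoscaling_policy" "scale_out" {
  name                   = "scale-out"
  autoscaling_group_name = "${aws_autoscaling_group.web.name}"
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = 4
  cooldown               = 300
}

resource "aws_autoscaling_policy" "scale_in" {
  name                   = "scale-in"
  autoscaling_group_name = "${aws_autoscaling_group.web.name}"
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = -2
  cooldown               = 300
}

The business only pays for the extra capacity while it is actually running, which is the pay-for-what-you-use model in action.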

Transform-the-Business and Cloud Computing

Transform the Business (TTB) represents opportunities for a company to make high risk but high reward investments. This usually entails moving into a new market segment with a new business or product offering. Innovation is the key success factor in TTB initiatives. Traditionally this is high risk to the business because of the upfront investment required to support new business initiatives. But in order to innovate, IT and business leaders need to experiment, to prototype and test new ideas.

With a cloud-first policy, the IT organization can mitigate the high-risk investment, yet still obtain the high rewards by enabling a ‘fail early, fail fast’ strategy in a cloud environment. Boxever is a case study in fail fast prototyping. Alan Giles, CTO of Boxever, credits AWS with the ability to know within days “if our design and assumptions [are] valid. The time and cost savings of this approach are nearly incalculable, but are definitely significant in terms of time to market, resourcing, and cash flow.” This cloud-based fail-fast approach can be applied to all market-segments, including government agencies. The hidden value in a cloud-based fail fast strategy is that failure is affordable and OK, making it easier to experiment and innovate. As Richard Harshman, Head of ASEAN for Amazon Web Services, puts it, “Don’t be afraid to experiment. The cloud allows you to fail fast and fail cheap. If and when you succeed, it allows you to scale infinitely and go global in minutes”.

So what does a cloud-first strategy look like?

While this is a rudimentary, back-of-the-envelope style outline, it provides a high-level, practical methodology for implementing a cloud-first based policy.

For RTB initiatives: Move undifferentiated shared services and supporting services to the cloud, either through Infrastructure-as-a-Service (IaaS) or Software-as-a-Service (SaaS) based solutions.

For GTB initiatives: Move customer-facing services to the cloud to leverage dynamic supply and demand capacity.

For TTB initiatives: Set up and teardown cloud environments to test and prototype new ideas and business offerings at minimal cost.

In addition to the Run-Grow-Transform framework, the ITIL Service Strategy lifecycle stage provides additional guidance from its Service Portfolio Management, Demand Management, and Financial Management process domains that can be leveraged to guide a cloud-first based strategy. These principles, coupled with other related guidance such as AWS Cloud Adoption Framework, provide a meaningful blueprint for IT organizations to quickly embrace a cloud-first strategy in a structured and methodical manner.

By aggressively embracing a cloud-first strategy, CIOs can demonstrate their business relevance through RTB and GTB initiatives. Through TTB initiatives IT can facilitate business innovation and transformation, yielding greater value to their customers. We are here to help our customers, so if you need help developing a cloud-first strategy, contact us here.

-Vince Lo Faso, Solutions Architect


Migrating Terraform Remote State to a “Backend” in Terraform v.0.9+

(AKA: Where the heck did ‘terraform remote config’ go?!!!)

If you are working with cloud-based architectures or working in a DevOps shop, you’ve no doubt been managing your infrastructure as code. It’s also likely that you are familiar with tools like Amazon CloudFormation and Terraform for defining and building your cloud architecture and infrastructure. For a good comparison of Amazon CloudFormation and Terraform, check out Coin Graham’s blog on the matter: AWS CFT vs. Terraform: Advantages and Disadvantages.

If you are already familiar with Terraform, then you may have encountered a recent change to the way remote state is handled, starting with Terraform v0.9. Continue reading to find out more about migrating Terraform Remote State to a “Backend” in Terraform v.0.9+.

First off… if you are unfamiliar with what remote state is, check out this page.

Remote state is a big ol’ blob of JSON that stores the configuration details and state of the Terraform configuration and infrastructure you have actually deployed. This is pretty dang important if you ever plan on changing your environment (which is “likely”, to put it lightly), and especially important if you want to have more than one person managing/editing/maintaining the infrastructure, or if you have even the most basic requirements around backup and recovery.
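
One practical payoff of shared remote state is that other configurations (and other teammates) can read its outputs. Here’s a minimal sketch using the pre-0.12 terraform_remote_state data source; the bucket, key, AMI, and the subnet_id output are all placeholders for illustration:

Reading Shared Remote State Example (Illustrative)
# Illustrative only - bucket, key, AMI, and output names are placeholders.
data "terraform_remote_state" "network" {
  backend = "s3"
  config {
    bucket = "my-tfstates"
    key    = "network.tfstate"
    region = "us-west-2"
  }
}

resource "aws_instance" "app" {
  ami           = "ami-c481fad3"   # placeholder AMI
  instance_type = "t2.micro"

  # subnet_id must be declared as an output in the "network" configuration
  subnet_id = "${data.terraform_remote_state.network.subnet_id}"
}

Anyone on the team (or any other Terraform project) can consume those outputs without ever touching the other project’s code, which is exactly the kind of collaboration a local state file makes painful.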

Terraform supports almost a dozen backend types (as of this writing) including:

  • Artifactory
  • Azure
  • Consul
  • etcd
  • GCS
  • HTTP
  • Manta
  • S3
  • Swift
  • Terraform Enterprise (AKA: Atlas)

 

Why not just keep the Terraform state in the same git repo I keep the Terraform code in?

You don’t want to store the state file in a code repository because it may contain sensitive information like DB passwords, and because the state changes every time you run Terraform, making it easy to forget to push those changes to your git repo.

So, what happened to terraform remote anyway?

If you’re like me, you probably run the latest version of HashiCorp’s Terraform tool as soon as it is available (we actually have a hook in our team Slack channel that notifies us when a new version is released). With the release of Terraform v.0.9 last month, we were endowed with the usual heaping helping of excellent new features and bug-fixes we’ve come to expect from the folks at HashiCorp, but we were also met with an unexpected change in the way remote state is handled.

Unless you are religious about reading the release notes, you may have missed an important change in v.0.9 around remote state. The release notes don’t specifically call out the removal (not even deprecation, but FULL removal) of the prior method (terraform remote config), but the Upgrade Guide does call out the process of migrating from the legacy method to the new method of managing remote state. More specifically, it provides a link to a guide for migrating from the legacy remote state config to the new backend system. The steps are pretty straightforward, and the new approach is much improved over the prior method for managing remote state. So, while the change is good, a deprecation warning in v.0.8 would have been much appreciated. At least the new system is still backwards compatible with the legacy remote state files (up to version 0.10), making the migration process much less painful.

Prior to v.0.9, you may have been managing your Terraform remote state in an S3 bucket utilizing the terraform remote config command. You could provide arguments like backend and backend-config to configure things like the S3 region, bucket, and key where you wanted to store your remote state. Most often, this looked like a shell script in the root directory of your Terraform project that you ran whenever you wanted to initialize or configure your backend for that project.

Something like…

Terraform Legacy Remote S3 Backend Configuration Example
#!/bin/sh
export AWS_PROFILE=myprofile
terraform remote config \
--backend=s3 \
--backend-config="bucket=my-tfstates" \
--backend-config="key=projectX.tfstate" \
--backend-config="region=us-west-2"

This was a bit clunky but functional. Regardless, it was rather annoying having some configuration elements outside of the normal terraform config (*.tf) files.

Along came Terraform v.0.9

The introduction of Terraform v.0.9 with its newfangled “backends” makes things much more seamless and transparent.  Now we can replicate that same remote state configuration with a backend block inside the Terraform configuration, like so:

Terraform S3 Backend Configuration Example
terraform {
  backend "s3" {
    bucket = "my-tfstates"
    key    = "projectX.tfstate"
    region = "us-west-2"
  }
}
A Migration Example

So, using our examples above, let’s walk through the process of migrating from a legacy “remote config” to a “backend”.  Detailed instructions for the following can be found here.

1. (Prior to upgrading to Terraform v.0.9+) Pull the remote state with pre-v.0.9 Terraform

> terraform remote pull
Local and remote state in sync

2. Backup your terraform.tfstate file

> cp .terraform/terraform.tfstate /path/to/backup

3. Upgrade Terraform to v.0.9+

4. Configure the new backend

terraform {
  backend "s3" {
    bucket = "my-tfstates"
    key    = "projectX.tfstate"
    region = "us-west-2"
  }
}

5. Run Terraform init

> terraform init
Downloading modules (if any)...
 
Initializing the backend...
New backend configuration detected with legacy remote state!
 
Terraform has detected that you're attempting to configure a new backend.
At the same time, legacy remote state configuration was found. Terraform will
first configure the new backend, and then ask if you'd like to migrate
your remote state to the new backend.
 
 
Do you want to copy the legacy remote state from "s3"?
  Terraform can copy the existing state in your legacy remote state
  backend to your newly configured backend. Please answer "yes" or "no".
 
  Enter a value: no
  
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
 
Terraform has been successfully initialized!
 
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
 
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your environment. If you forget, other
commands will detect it and remind you to do so if necessary.

6. Verify the new state is copacetic

> terraform plan
 
...
 
No changes. Infrastructure is up-to-date.
 
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, Terraform
doesn't need to do anything.

7.  Commit and push

In closing…

Managing your infrastructure as code isn’t rocket science, but it also isn’t trivial.  Having a solid understanding of cloud architectures, the Well Architected Framework, and DevOps best practices can greatly impact the success you have.  A lot goes into architecting and engineering solutions in a way that maximizes your business value, application reliability, agility, velocity, and key differentiators.  This can be a daunting task, but it doesn’t have to be!  2nd Watch has the people, processes, and tools to make managing your cloud infrastructure as code a breeze! Contact us today to find out how.

 

— Ryan Kennedy, Principal Cloud Automation Architect, 2nd Watch

 


What we learned from Werner Vogels’s 2016 re:Invent Keynote Presentation

It’s all about The Transformation

At this morning’s AWS re:Invent keynote, AWS shared quite a mountain of information, and a toolbox of new services, all based around helping companies change their businesses and the way they look at technology.  Transformation was the keyword for this presentation, and it was apparent in the tools and tone taken throughout the whole two and a half hours.  The focus was on providing the tools to the “Transformers” (highlighted by Vogels’ Autobot T-shirt) and enabling them to do amazing things for their customers. Vogels’ keynote was less about infrastructure and more about software – how to get it into the hands of your customers, and how the toolbox that AWS continues to expand can help.  It’s not entirely about AWS though… it starts with their customers.

AWS: To Be the Most Customer Centric IT Company on Earth

There’s a large drive from all the teams at AWS to focus on the needs of their customers (that’s you, by the way).  In fact, this couldn’t be more evident than with their new offering, AWS Blox, an open source scheduler for ECS that’ll be co-developed with the community.  This can also be seen in their 5 customer-centric objectives:

  1. Protect the customers at all times.
  2. Listen closely to customers and act.
  3. Give customers choice.
  4. Work backwards from the customer.
  5. Help customers transform.

This led nicely into Jeff Lawson’s (CEO/Chairman, Twilio) presentation, which revolved around software development.  The two things to take away from it were a couple of quotes: 1. “Building software is a mindset, not a skillset,” which speaks volumes about the overarching purpose of software in the first place – software drives products to customers.  And 2. “Companies that win are companies that ship software.”

How can we help you be a Transformer?

There is a plethora of modern-day processes revolving around Agile practices, all concerned with how quickly features can be deployed to your customers.  The big, main point here is that Amazon really wants to take as much of the waste off of their customers’ shoulders as possible and manage it for them.  This is one of the fundamental principles in lean manufacturing and Agile development processes: cut waste so your people can concentrate on what’s important to your customer – providing stellar products and features.

To that end, AWS already provides everything you’ll need as far as infrastructure is concerned.  Need a thousand instances for a load test?  Spin them up, run your test, then tear them down, and only pay for the hour you had them up (a quick sketch of that pattern follows below).  That’s the bread and butter.  Where AWS is moving now is to help that development pipeline and to provide the tools to do it.
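
As a hypothetical sketch of that spin-up/tear-down pattern (Terraform is used here purely for illustration; the AMI and fleet size are placeholders):

Disposable Load-Test Fleet Sketch (Terraform)
# Illustrative only - AMI and fleet size are placeholders.
variable "fleet_size" {
  default = 100
}

resource "aws_instance" "load_worker" {
  count         = "${var.fleet_size}"
  ami           = "ami-c481fad3"   # placeholder AMI
  instance_type = "t2.micro"

  tags {
    Name = "load-worker-${count.index}"
  }
}

terraform apply creates the fleet, the test runs, terraform destroy removes it, and the bill covers only the hours the instances existed.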

First and foremost, they’ve updated their Well Architected Framework (along with all the underlying documentation) to include a 5th pillar:

  1. Security
  2. Reliability
  3. Performance Efficiency
  4. Cost Optimization
  5. Operational Excellence (This is where Automation and CI/CD pipelines come into play.)

Transforming Operational Excellence

Automation is the name of the game here.  The existing tools have gotten some updates, and there are some new ones to add to your armory as well.

AWS CloudFormation has seen a ton of updates this past year, including role-based stack creation, failure recovery, resource schemas and, last but by far not least, YAML support!  Configuration management (in the form of Chef) has gotten a BIG boost with the new AWS OpsWorks for Chef Automate, a fully managed Chef server.  Oh, and managing system-level patching and resource configuration?  They’ve got that covered as well with the Amazon EC2 Systems Manager.  The biggest changes come to help your CI/CD pipeline.  The new AWS CodeBuild will build and test your projects and fills out the pipeline toolset (between CodeCommit and CodeDeploy).  What about insight into your application?  The fantastic-looking AWS X-Ray will allow insight into your applications at a very deep level, with a smart-looking UI to boot.  Another nice-looking tool for managing events from your infrastructure is the AWS Personal Health Dashboard.  This tool will help you manage responses to your events, and it can be tied into Lambda for automation.

Security is number one with AWS, so it’s no surprise that they’re offering two new tools to help protect against the common DDoS attack.  The first, AWS Shield, will help protect against some of the more common DDoS attack vectors.  The best thing about it?  Everyone gets it FOR FREE!  You use AWS, you get AWS Shield.  That simple.  AWS Shield Advanced is for more complex attacks and is a paid service that you can opt in to if you feel the need.

Transforming your Data

Amazon’s cloud offering levels the playing field when it comes to resource procurement.  Small companies can now compete with the big ones since they draw from the same pool and have the same tools available to them (regardless of size).  So what’s your competitive differentiator?  Data.  That’s why another focus of this past year has been on Big Data.

AWS already has a lot going for it with data analytics, from ingestion tools like Kinesis and Snowball to processing with EMR, there just seemed to be one thing missing:  AWS Glue.  AWS Glue pulls together all the components of Modern Data Warehouses into a comprehensive architecture for data analytics.  From data ingestion to data quality, source data preservation to orchestration and job scheduling, it looks like AWS Glue will manage it all.  Also on the processing end, the new AWS Batch tool will manage batch processing at any scale.

Transforming your Application Architecture

Amazon now provides 3 different architectures and payment styles when it comes to application development (or deployment if you look at it that way) – Virtualization, which is already quite robust in their compute ecosystem; Containers, which have an ever maturing product in ECS; and Serverless, which is handled quite well through services like AWS Lambda.  Virtualization didn’t get a particular mention here, but Containerization did.  Blox was already mentioned above, but there was also a “coming soon” drop here as well.  Looks like we’ll be seeing some kind of task placement engine in the near future.

Next up were new offerings around Lambda.  The first, and one that will surely broaden the adoption of serverless architectures, is the inclusion of the C# language into the list of supported languages.  To cut back on possible latency issues, you can now run Lambda functions at CloudFront locations using the new AWS Lambda@Edge.  To help coordinate all the components of your distributed applications, you now have AWS Step Functions.  This tool will allow you to coordinate all your bits and pieces using a visual workflow.

There’s a lot of potential for transforming your business here.

Like always, AWS doesn’t force you to use any particular tool or service, but they have a lot of what you need to develop products and features the right way.  They’ve made some serious strides to pull as much of the wasted, non-customer centric work away from your teams, and give them back that time to push more value to your customers.  Amazon doesn’t yet approach the organizational / process side of the equation, so that will still fall to the customer.  Once you figure it out though, it looks like AWS is positioned, and will continue to position itself, to help you and your teams make that transformation a reality.

-Craig Monson, Sr Automation Architect


Writing CloudFormation Templates in YAML – A First Look

AWS recently released a new “game changing” feature for CloudFormation Templates –  support for YAML.  I’d like to give a first look at utilizing YAML for CloudFormation Templates (CFTs) and discuss how this feature might be incorporated in the architect and engineer’s toolbox for AWS.  If you’re not familiar with YAML you’ll want to take a look at the guides here.

YAML Support

This is something the AWS community has been begging for, for quite a while.  One of the easiest ways to tell that JSON is not sufficient is the numerous projects that exist to support JSON based templates (Troposphere, Sparkleformation, Terraform, etc).  Now with YAML support we’re getting that much closer to that Infrastructure-as-Code feeling we’ve been missing.  Let’s walk through some sample YAML CFT code and highlight where it has a major impact.  The code samples below are borrowed almost entirely from the AWS UserGuide for CloudFormation.

AWSTemplateFormatVersion: "2010-09-09"

Description: A sample template

Parameters:
  FilePath:
    Description: The path of the test file.
    Type: String
    Default: /home/ec2-user/userdata

Resources:
  MyEC2Instance:
    Type: "AWS::EC2::Instance" # 1 Quotes are unnecessary here - will they always be?
    Properties:
      ImageId: ami-c481fad3    # 2 Quotes removed from the example - still works
      InstanceType: t2.nano
      KeyName: 2ndwatch-sample-keypair
      Tags:                    # 3 Here I switch to using single spaces
       - Key: Role             # 4 Tag list item is inline
         Value: Test Instance
       -                       # 5 Next list item is block
         Key: Owner
         Value: 2ndWatch
      BlockDeviceMappings:     # 6 Switch back to double spaces
        -
          DeviceName: /dev/sdm
          Ebs:
            VolumeType: gp2
            VolumeSize: 10
      UserData:
        Fn::Base64: !Sub |     # No more Fn::Join needed
          #!/bin/bash
          echo "Testing Userdata" > ${FilePath}
          chown ec2-user.ec2-user ${FilePath}

A couple of things you notice in this example are how clean the code looks and the comments – both of which help make code descriptive and clear.  In the comments I call out a few considerations with the YAML format.  First, many of the examples AWS provides have quotes around values that don’t need them.  When I removed them (comments #1 and #2), the CFT still worked.  That said, you may want to standardize on quotes/no quotes at the start of your project, or for your entire department/company, for consistency.  Additionally, as you will notice in my second set of comments, I switch from 2-space to 1-space YAML formatting (comments #3 and #6).  This is “legal” but annoying.  Just as with JSON, you’ll need to set some of your own rules for how the formatting is done to ensure consistency.

Taking a look at the Tags section, you’ll see that lists are supported using a hyphen notation.  In the Tags property I’ve displayed two different formats for how a list item may be denoted: (1) a hyphen alone on a line with a “block” underneath (comment #5), or (2) inline, with the hyphen followed by the rest of the item at the same spacing (comment #4).  As before, you’ll want to decide how you want to format lists.  Multiple AWS examples do it in different ways.

Moving on to the UserData property, the next thing you’ll notice is the lack of the Fn::Join function.  This makes the creation of userdata scripts very close to the actual script you would run on the server.  In a previous article I gave Terraform high marks for having similar functionality, and now AWS has brought CFTs up to par.  The new !Sub notation helps clean up the substitution a bit, too (it’s also available in JSON).  Of course, if you miss it, Fn::Join can still be used like this:

Tags:
- Key: Name
  Value:
    Fn::Join:
    - "-"
    - - !Ref AWS::StackName
      - PrivateRouteTable1

This would produce a tag of Name = StackName-PrivateRouteTable1 just like it did previously in JSON, but we would advise against doing this because the old notation is much less flexible and more prone to error than the new “joinless” formatting. Notice that nested lists are created using two hyphens.

Conversion Tools

In another bit of good news, you can utilize online conversion tools to update your JSON CFTs to YAML.  As you might guess, it will take a bit of cleanup to bring it in line with whatever formatting decisions you’ve made, but it gets you most of the way there without a complete rewrite.  Initial tests on simple CFTs ran with no updates required (using http://www.json2yaml.com/).  A second test on a 3000-line CFT converted down to 2300 lines of YAML and also ran without needing any updates (YMMV).  This is a big advantage over tools like Terraform, where all new templates would have to be built from scratch, particularly since a conversion tool could probably be whipped together in short order.

All in all, this is a great update to the CloudFormation service and demonstrates AWS’s commitment to pushing the service forward.

If you have questions or need help getting started, please Contact Us at 2nd Watch.

-Coin Graham, Sr Cloud Consultant, 2nd Watch
