
What we learned from Werner Vogels’s 2016 re:Invent Keynote Presentation

It’s all about The Transformation

At this morning’s AWS re:Invent keynote, AWS shared a mountain of information and a toolbox of new services, all aimed at helping companies change their businesses and the way they look at technology.  Transformation was the keyword for this presentation, and it was apparent in the tools and tone throughout the whole two and a half hours.  The focus was on providing tools to the “Transformers” (highlighted by Vogels’s Autobot T-shirt) and enabling them to do amazing things for their customers.  Vogels’s keynote was less about infrastructure and more about software: how to get it into the hands of your customers, and how the toolbox that AWS continues to expand can help.  It’s not entirely about AWS though… it starts with their customers.

AWS: To Be the Most Customer-Centric IT Company on Earth

There’s a large drive from all the teams at AWS to focus on the needs of their customers (that’s you, by the way).  In fact, this couldn’t be more evident than with their new offering called Blox, an open source scheduler for ECS that will be co-developed with the community.  This can also be seen in their five customer-centric objectives:

  1. Protect the customers at all times.
  2. Listen closely to customers and act.
  3. Give customers choice.
  4. Work backwards from the customer.
  5. Help customers transform.

This led nicely into Jeff Lawson’s (CEO / Chairman, Twilio) presentation, which revolved around software development.  Two quotes sum up the takeaways: 1. “Building software is a mindset, not a skillset,” which speaks volumes about the purpose of software in the first place.  Software drives products to customers.  And 2. “Companies that win are companies that ship software.”

How can we help you be a Transformer?

There is a plethora of modern processes built around Agile practices, all aimed at getting features deployed to your customers faster.  The main point here is that Amazon wants to take as much of the waste off its customers’ shoulders as possible and manage it for them.  This is one of the fundamental principles of lean manufacturing and Agile development: cut waste so your people can concentrate on what’s important to your customer, providing stellar products and features.

To that end, AWS already provides everything you’ll need as far as infrastructure is concerned.  Need a thousand instances for a load test?  Spin them up, run your tests, then tear them down, and only pay for the hour you had them running.  That’s the bread and butter.  Where AWS is moving now is to help with the development pipeline and to provide the tools to do it.

First and foremost, they’ve updated their Well Architected Framework (along with all the underlying documentation) to include a 5th pillar:

  1. Security
  2. Reliability
  3. Performance Efficiency
  4. Cost Optimization
  5. Operational Excellence (This is where Automation and CI/CD pipelines come into play.)

Transforming Operational Excellence

Automation is the name of the game here.  The existing tools have gotten some updates, and there are some new ones to add to your armory as well.

AWS CloudFormation has seen a ton of updates this past year, including role-based stack creation, failure recovery, resource schemas and, last but by far not least, YAML support!  Configuration management (in the form of Chef) got a big boost with the new AWS OpsWorks for Chef Automate, a fully managed Chef server.  Oh, and managing system-level patching and resource configuration?  They’ve got that covered as well with the Amazon EC2 Systems Manager.  The biggest changes come to help your CI/CD pipeline.  The new AWS CodeBuild will build and test your projects and fills out the pipeline toolset (between CodeCommit and CodeDeploy).  What about insight into your application?  The fantastic-looking AWS X-Ray will give you very deep insight into your applications, with a smart-looking UI to boot.  Another tool with a nice UI, the AWS Personal Health Dashboard, handles events from your infrastructure; it will help you manage responses to those events and can be tied into Lambda for automation.

Security is number one with AWS, so it’s no surprise that they’re offering two new tools to help protect against the common DDoS attack.  The first, AWS Shield, will help protect against some of the more common DDoS attack vectors.  The best thing about it?  Everyone gets it FOR FREE!  You use AWS, you get AWS Shield.  That simple.  AWS Shield Advanced is for more complex attacks and is a paid service that you can opt into if you feel the need.

Transforming your Data

Amazon’s cloud offering levels the playing field when it comes to resource procurement.  Small companies can now compete with the big ones since they draw from the same pool and have the same tools available to them (regardless of size).  So what’s your competitive differentiator?  Data.  That’s why another focus of this past year has been on Big Data.

AWS already has a lot going for it with data analytics, from ingestion tools like Kinesis and Snowball to processing with EMR, but there seemed to be one thing missing, and AWS Glue fills that gap.  AWS Glue pulls together all the components of modern data warehouses into a comprehensive architecture for data analytics.  From data ingestion to data quality, source data preservation to orchestration and job scheduling, it looks like AWS Glue will manage it all.  Also on the processing end, the new AWS Batch tool will manage batch processing at any scale.

Transforming your Application Architecture

Amazon now provides three different architectures and payment styles for application development (or deployment, if you look at it that way): virtualization, which is already quite robust in their compute ecosystem; containers, which have an ever-maturing product in ECS; and serverless, which is handled quite well through services like AWS Lambda.  Virtualization didn’t get a particular mention here, but containerization did.  Blox was already mentioned above, but there was also a “coming soon” drop here as well.  It looks like we’ll be seeing some kind of task placement engine in the near future.

Next up were new offerings around Lambda.  The first, and one that will surely broaden the adoption of serverless architectures, is the inclusion of the C# language into the list of supported languages.  To cut back on possible latency issues, you can now run Lambda functions at CloudFront locations using the new AWS Lambda@Edge.  To help coordinate all the components of your distributed applications, you now have AWS Step Functions.  This tool will allow you to coordinate all your bits and pieces using a visual workflow.

There’s a lot of potential for transforming your business here.

As always, AWS doesn’t force you to use any particular tool or service, but they have a lot of what you need to develop products and features the right way.  They’ve made serious strides toward pulling the wasted, non-customer-centric work away from your teams and giving them back that time to push more value to your customers.  Amazon doesn’t yet approach the organizational and process side of the equation, so that will still fall to you.  Once you figure it out, though, AWS is positioned, and will continue to position itself, to help you and your teams make that transformation a reality.

-Craig Monson, Sr Automation Architect


Writing CloudFormation Templates in YAML – A First Look

AWS recently released a new “game changing” feature for CloudFormation Templates: support for YAML.  I’d like to give a first look at utilizing YAML for CloudFormation Templates (CFTs) and discuss how this feature might be incorporated into the architect’s and engineer’s toolbox for AWS.  If you’re not familiar with YAML, you’ll want to take a look at the guides here.

YAML Support

This is something the AWS community has been requesting for quite a while.  One of the easiest ways to tell that JSON is not sufficient is the number of projects that exist to support JSON-based templates (Troposphere, SparkleFormation, Terraform, etc.).  Now, with YAML support, we’re getting that much closer to the Infrastructure-as-Code feeling we’ve been missing.  Let’s walk through some sample YAML CFT code and highlight where it has a major impact.  The code samples below are borrowed almost entirely from the AWS User Guide for CloudFormation.

AWSTemplateFormatVersion: "2010-09-09"

Description: A sample template

Parameters:
  FilePath:
    Description: The path of the userdata file.
    Type: String
    Default: /home/ec2-user/userdata

Resources:
  MyEC2Instance:
    Type: "AWS::EC2::Instance" # 1 Quotes are unnecessary here - will they always be?
    Properties:
      ImageId: ami-c481fad3    # 2 Quotes removed from the example - still works
      InstanceType: t2.nano
      KeyName: 2ndwatch-sample-keypair
      Tags:                    # 3 Here I switch to using single spaces
       - Key: Role             # 4 Tag list item is inline
         Value: Test Instance
       -                       # 5 Next list item is block
         Key: Owner
         Value: 2ndWatch
      BlockDeviceMappings:     # 6 Switch back to double spaces
        -
          DeviceName: /dev/sdm
          Ebs:
            VolumeType: gp2
            VolumeSize: 10
      UserData:
        Fn::Base64: !Sub |     # No more Fn::Join needed
          #!/bin/bash
          echo "Testing Userdata" > ${FilePath}
          chown ec2-user.ec2-user ${FilePath}

A couple of things you’ll notice in this example are how clean the code looks and the comments, both of which help make code descriptive and clear.  In the comments I call out a few considerations with the YAML format.  First, many of the examples AWS provides put quotes around values that don’t need them.  When I removed them (comments #1 and #2), the CFT still worked.  That said, you may want to standardize on quotes or no quotes at the start of your project, or for your entire department or company, for consistency.  Additionally, as you’ll notice in my second set of comments, I switch from 2-space to 1-space YAML indentation (comments #3 and #6).  This is “legal” but annoying.  Just as with JSON, you’ll need to set some of your own rules for how the formatting is done to ensure consistency.

Taking a look at the Tags section, you’ll see that lists are supported using hyphen notation.  In the Tags property I’ve shown the two formats in which a list item may be denoted: 1. a hyphen alone on a line with a “block” underneath (comment #5), or 2. inline, with the rest following the hyphen at the same indentation (comment #4).  As before, you’ll want to decide how you format lists; AWS’s own examples do it in different ways.

Moving on to the UserData property, the next thing you’ll notice is the absence of the Fn::Join function.  This makes the creation of userdata scripts very close to the actual script you would run on the server.  In a previous article I gave Terraform high marks for having similar functionality, and now AWS has brought CFTs up to par.  The new !Sub notation helps clean up the substitution a bit, too (it’s also available in JSON).  Of course, if you miss it, Fn::Join can still be used like this:

Tags:
- Key: Name
  Value:
    Fn::Join:
    - "-"
    - - !Ref AWS::StackName
      - PrivateRouteTable1

This would produce a tag of Name = StackName-PrivateRouteTable1, just as it did in JSON, but we would advise against it: the old notation is much less flexible and more error-prone than the new “joinless” formatting.  Notice that nested lists are created using two hyphens.
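
For comparison, here is the same tag written with the !Sub notation; a single line replaces the whole Fn::Join block:

Tags:
- Key: Name
  Value: !Sub "${AWS::StackName}-PrivateRouteTable1"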

Conversion Tools

In another bit of good news, you can use online conversion tools to update your JSON CFTs to YAML.  As you might guess, it will take a bit of cleanup to bring the output in line with whatever formatting decisions you’ve made, but it gets you most of the way there without a complete rewrite.  Initial tests on simple CFTs ran with no updates required (using http://www.json2yaml.com/).  A second test on a 3,000-line CFT converted down to 2,300 lines of YAML and also ran without needing any updates (YMMV).  This is a big advantage over tools like Terraform, where all new templates would have to be built from scratch, particularly since a conversion tool could be whipped together in short order.
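
If you’d rather keep the conversion in-house, a few lines of Python with the PyYAML library will do roughly what the online tool does.  This is a minimal sketch; the file names are placeholders, and you’ll still want to review the output against your own formatting rules:

import json
import yaml  # pip install pyyaml

with open("template.json") as f:
    template = json.load(f)

# default_flow_style=False emits block-style YAML like the examples above.
# Note that safe_dump sorts keys alphabetically, so expect to reorder the
# top-level sections (Parameters, Resources, etc.) by hand afterwards.
with open("template.yaml", "w") as f:
    yaml.safe_dump(template, f, default_flow_style=False)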

All in all, this is a great update to the CloudFormation service and demonstrates AWS’s commitment to pushing the service forward.

If you have questions or need help getting started, please Contact Us at 2nd Watch.

-Coin Graham, Sr Cloud Consultant, 2nd Watch


A Step-by-Step Guide on Using AWS Lambda-Backed Custom Resources with Amazon CFTs

Amazon CloudFormation Template (CFT) custom resources allow for additional flexibility in building Amazon environments. These can be utilized in a number of ways to enhance automation and make deployment easier. Generally speaking, you’ll want to engage custom resources when you need information that is normally not available to the CloudFormation Template in order to complete its processing. This could be a Security Group in another account, the most up-to-date AMI, or a Spot Price analysis. They’re also useful for adding functionality that doesn’t exist in CFTs yet, like verifying that the database size you’ve chosen is valid, or checking that you have a valid password (our example). You won’t want to use them for anything “one-off,” as a custom resource takes time to develop and process. You’ll also want to avoid long-running processes, since AWS CloudFormation will time out if the internal processing takes too long.

To give you an easy example of how this is set up, we’re going to build an AWS Lambda-backed custom resource that verifies a password is correct by having you type it in twice. If the passwords you type don’t match, the CFT will stop processing and roll back. This is a bit of functionality that’s missing from AWS CFTs and can be very frustrating when you only realize you fat-fingered the password parameter after your environment is deployed. The basic areas we’ll be focusing on are AWS CloudFormation and AWS Lambda. This guide assumes you’re familiar with both of these already, but if you’re not, learn more about AWS Lambda here or AWS CFTs here.

You want to start with the CFT that you’re looking to add the custom resource to and make sure it is functional. It’s always best to start from a place of known good. Adding a Lambda-backed custom resource to a CFT consists of four basic parts:

1. IAM Role for Lambda Execution: This is the role that will be assigned to your Lambda function. You will utilize this role to give the Lambda function permissions to other parts of AWS as necessary. If you don’t need to add any permissions, just create a role that allows Lambda to write out its logs.

"LambdaExecutionRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {
          "Service": ["lambda.amazonaws.com"]
        },
        "Action": ["sts:AssumeRole"]
      }]
    },
    "Policies": [{
      "PolicyName": "lambdalogtocloudwatch",
      "PolicyDocument": {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": ["logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"],
          "Resource": "arn:aws:logs:*:*:*"
        }]
      }
    }]
  }
}

2. The Lambda Function: There are two ways of introducing your Lambda function into your CFT. If it is small, you can embed the function directly in the CFT using the “ZipFile” option under the “Code” property of the “AWS::Lambda::Function” resource. Or you can use the “S3Bucket” option and reference an S3 bucket that has your code already present in zip format. Note that if you use the S3 bucket option, it is the user deploying the CFT who needs permission to read from the bucket, not the Lambda function. Next you’ll set your “Handler,” “Runtime,” “Timeout,” and “Role” (which should reference the ARN of the role you created previously). If you are using the ZipFile option, your handler is the default for the runtime.

"CheckPasswordsFunction": {
  "Type": "AWS::Lambda::Function",
  "Properties": {
    "Code": {
      "ZipFile": {
        "Fn::Join": ["\n", [
          "var response = require('cfn-response');",
          "exports.handler = function(event, context) {",
          " if (event.RequestType == 'Delete') {",
          " response.send(event, context, response.SUCCESS);",
          " return;", " }",
          " var password = event.ResourceProperties.Password;",
          " var confpassword = event.ResourceProperties.ConfirmPassword;",
          " var responseData = {};",
          " if (password == confpassword) {",
          " responseData = {'passwordcheck': 'Password Valid!'};",
          " response.send(event, context, response.SUCCESS, responseData);",
          " } else {",
          " responseData = {Error: 'Passwords do not match'};",
          " console.log(responseData.Error);",
          " responseData = {'passwordcheck': 'Password Invalid!'};",
          " response.send(event, context, response.FAILED, responseData);",
          " }", "};"
        ]]
      }
    },
    "Handler": "index.handler",
    "Runtime": "nodejs",
    "Timeout": "30",
    "Role": {
      "Fn::GetAtt": [
        "LambdaExecutionRole", "Arn"
      ]
    }
  }
}
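
Since the inline Fn::Join form above is hard to read, here is the same function laid out as plain Node.js (a straight transcription of the strings above, not new logic):

var response = require('cfn-response');
exports.handler = function(event, context) {
    // Delete requests are acknowledged immediately so stack deletes succeed
    if (event.RequestType == 'Delete') {
        response.send(event, context, response.SUCCESS);
        return;
    }
    var password = event.ResourceProperties.Password;
    var confpassword = event.ResourceProperties.ConfirmPassword;
    var responseData = {};
    if (password == confpassword) {
        responseData = {'passwordcheck': 'Password Valid!'};
        response.send(event, context, response.SUCCESS, responseData);
    } else {
        responseData = {Error: 'Passwords do not match'};
        console.log(responseData.Error);
        responseData = {'passwordcheck': 'Password Invalid!'};
        response.send(event, context, response.FAILED, responseData);
    }
};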

3. The Lambda Callout: The Lambda callout is where you pass the variables from the CFT to your Lambda function. It’s important to name these appropriately and consider what effect case and naming conventions will have in the runtime you’re using. The “ServiceToken” property is the ARN of the Lambda function you just created, and the rest of the properties are the variables you’re passing through.

"TestPasswords": {
  "Type": "Custom::LambdaCallout",
  "Properties": {
    "ServiceToken": {
      "Fn::GetAtt": [
        "CheckPasswordsFunction",
        "Arn"
      ]
    },
    "Password": {
      "Ref": "Password"
    },
    "ConfirmPassword": {
      "Ref": "ConfirmPassword"
    }
  }
}

4. The Response: There are two key parts of the response from a custom resource, and this applies to non-Lambda custom resources too. The first is the “Status” of the response. If you return a status of “FAILED,” the CFT will short-circuit and roll back. If you return a status of “SUCCESS,” the CFT will continue to process. This is important because sometimes you’ll want to send SUCCESS even if your Lambda didn’t produce the desired result. In the case of our password check, we want to stop the CFT from moving forward to save time; knowing only at the end that the passwords were mismatched would not be very valuable. The second important piece of the response is the “Data.” This is how you pass information back to CloudFormation to process the result. You’ll set the “Data” variable in your code as a JSON object and reference its key/value pairs back inside the CFT. You’ll use the “Fn::GetAtt” option to reference the Lambda callout you created previously and the key of the JSON data you’re interested in.

"Outputs": {
  "Results": {
    "Description": "Test Passwords Result",
    "Value": {
      "Fn::GetAtt": ["TestPasswords",
        "passwordcheck"
      ]
    }
  }
}

As far as your Lambda function is concerned, you may or may not need to reference variables sent from the CloudFormation Template. These variables will be in the “event”->”ResourceProperties” dictionary/hash. For example:

NodeJs

var password = event.ResourceProperties.Password

Python

password = event['ResourceProperties']['Password']

And similarly, once your function has completed its processing, you might need to send a response back to the CFT. Thankfully, AWS has created some wrappers to make the response easier. For Node.js it is called “cfn-response” and is only available when using the “ZipFile” option. There is a similar package for Python, but you’ll need to bundle it with your Lambda and deploy from S3. Sending information back from your Lambda is as easy as setting the “Data” variable to a properly formatted JSON object and sending it back.

...
if (password == confpassword) {
    responseData = {'passwordcheck': 'Password Valid!'};
    response.send(event, context, response.SUCCESS, responseData);
}
...
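
For Python, if you’d rather not bundle a package at all, the response is just an HTTP PUT of a JSON document to the pre-signed S3 URL that CloudFormation passes in the event.  Here is a minimal Python 2.7 sketch of the same password check that hand-rolls what cfn-response does for you (field names follow the custom resource response format):

import json
import urllib2

def handler(event, context):
    # Delete requests must still be answered, or stack deletes will hang
    if event['RequestType'] == 'Delete':
        send(event, context, 'SUCCESS', {})
        return
    password = event['ResourceProperties']['Password']
    confpassword = event['ResourceProperties']['ConfirmPassword']
    if password == confpassword:
        send(event, context, 'SUCCESS', {'passwordcheck': 'Password Valid!'})
    else:
        send(event, context, 'FAILED', {'passwordcheck': 'Password Invalid!'})

def send(event, context, status, data):
    # CloudFormation gives us a pre-signed S3 URL to PUT the result to
    body = json.dumps({
        'Status': status,
        'Reason': 'See CloudWatch Log Stream: ' + context.log_stream_name,
        'PhysicalResourceId': context.log_stream_name,
        'StackId': event['StackId'],
        'RequestId': event['RequestId'],
        'LogicalResourceId': event['LogicalResourceId'],
        'Data': data,
    })
    request = urllib2.Request(event['ResponseURL'], data=body,
                              headers={'Content-Type': ''})
    request.get_method = lambda: 'PUT'
    urllib2.urlopen(request)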

That’s it. Creating a Lambda-backed custom resource can add all kinds of additional functionality and options to your CloudFormation Templates. Feel free to download the whole CFT here and test it out, use it to learn more, or Contact Us at 2nd Watch for help getting started.

-Coin Graham, Sr Cloud Engineer, 2nd Watch


Decision Points: Moving Enterprise Workloads to the Cloud

When you’re ready to move to the cloud, it’s truly a transformational time.  Determining your cloud strategy before moving too quickly is paramount; it is important to make the big, hard decisions first, because you will be in the cloud for many years to come.  This can also be a time to remove years of technical debt.  After all, you want to migrate your workloads, not lift and shift your technical debt along with them.  At the same time, you do not want to fall into “analysis paralysis” with all the decisions to be made.  Ultimately, you can have speed, agility and cross-organizational support while providing the proper governance and guardrails.

Determining your migration strategy ahead of time is important for security, change management, and cost containment.  The promise of the cloud is great.  You might want to allow people to build test and development environments at will.  You have smart and capable people, and they need help to deploy quickly.  Shadow IT results when innovative people are constrained from experimenting.  And often the intention is that it will be temporary; however, temporary quickly becomes permanent, undocumented and non-compliant.  The decision points listed in this article are important, though this is by no means a comprehensive list, as other items will likely reveal themselves during the process.

Decisions for Enterprise Cloud Migration – the Business of the Cloud:

  • Discovery – Can you get an accurate list of the application inventory?  What operating systems are in use?  Are all the applications still relevant, or can some be retired?  Which applications have dependencies on other applications?  It may be hard to get this list together, as some of these applications are likely many years old.  Discovery can be time-consuming, but it will help identify the true scope and cost of moving to the cloud.  In many cases, third-party discovery tools can aid in the effort.
  • Vision and Education – Are the teams infighting and holding territory?  This is often related to how well they understand the cloud.  It can be scary, as all transformations are.  Y2K, client/server and the Internet revolution were scary as well, and we survived.  Plans for education to create awareness and capabilities will help.  There are also many misconceptions about the cloud in top management that will probably need to be addressed.

Strategic Decisions for the Cloud:

  • Which cloud providers are you going to use?  Clearly, Amazon Web Services is the leading cloud service provider.  However, a multi-cloud strategy may be important to the company as well.  How are you going to interconnect the cloud providers?
  • What account strategy will you use?  Will applications get their own accounts?  Or will accounts be aligned by business unit?  There are many different approaches, and an account strategy is hard to undo, so it is important to weigh the pros and cons of each for billing, security and isolation.
  • What will your networking strategy be in the cloud?  Will you use non-overlapping subnets managed with your on-premises IP management?  Will you isolate production and non-production environments in separate block ranges in VPCs?  Or will you allow your migrated applications to be accessed only over the public internet instead of VPN?  It could also be a combination of these strategies.  There are many variables to identify before settling on the best approach.
  • Will you integrate on-premises identity management systems with your cloud infrastructure?  Active Directory is common technology in most enterprises.  Will you extend your current AD architecture?  What changes need to be made to make it optimal for the cloud?

Decisions for Cost, Security and Compliance:

  • How will you tag your cloud assets?  Will your scheme account for billing, security, and compliance?  Getting this right early on will allow automation to enforce compliance and monitor for violations (a small audit sketch follows this list).
  • How will you manage cloud costs?  Will you allow developers to provision their own instances?  What will your Reserved Instance strategy be, and how often does it need to be reviewed?  Costs in the cloud can spin out of control if proper guardrails are not established.  Scheduled power-on and power-off of environments is another important strategy to further reduce costs.
  • What technologies are approved for cloud deployments?  Will your organization create approved images?  How will they be managed and updated?  Maybe your organization has approved base software that must be installed.  How will you maintain this configuration?  Configuration management and image baking are important processes to identify and define.
  • How will the cloud assets be continuously monitored for compliance?  Once a violation is found, how will it be remediated, with automation or manually?  Between AWS Config, CloudTrail and tagging strategies, much of this task can be accomplished with automation.  However, there still need to be individuals who review and update the process.
  • How will you secure your cloud environments?  WAF, anti-virus, IDS/IPS, and firewalls are just part of the overall security solution.  How will you control egress traffic as well?  How will you isolate your applications from each other and control user access?  We all know security is hard and requires constant care.  Finding the right balance between addressing real threats and preserving agility is important.
  • Will you secure data at rest?  Will you use the built-in AWS services for encryption keys, KMS or CloudHSM, or will you bring your own keys?  How will you provide column- or row-based encryption of your databases?  Cloud solutions need to be analyzed against company standards to determine whether you can use built-in cloud encryption or need to roll your own.
  • How will you provision your certificates for data in transit?  AWS provides the Certificate Manager service to provision SSL certificates.  Or will you continue to use your existing provider?  How will you track expiring certificates and update them?  AWS has many features for SSL including integration with their Elastic Load Balancers.
  • How will you manage your big data?  Will you scale up or out?  Are the workloads transient?  There are many options for cost optimization.  Between Spot Instances and automation, incredibly elegant solutions can be created.
  • What are your Disaster Recovery policies?  Do they need to be adjusted for the cloud?  Most likely they do.  How will you deliver your DR solutions?  Again, there are many options for DR in the cloud, from infrastructure as code and automation to replicating critical data and servers between regions.
  • What are your data retention policies?  How will you implement them in the cloud?  How will you ensure that you have met your regulatory compliance?  There are built-in solutions for data life cycles in AWS, but in many cases, it is more complicated than what is available off the shelf.
  • How will you handle OS and application licensing?  Will you use on-demand licenses or bring your own?  There is no one right answer; ROIs need to be calculated in many cases.
  • What is your single sign-on (SSO) solution?  How will it integrate with the cloud?  AWS provides federated authentication across all of its services.
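
As a small illustration of the tag-compliance automation mentioned above, here is a minimal boto3 sketch that reports EC2 instances missing a required set of tags.  The tag keys are placeholders; substitute your own scheme:

import boto3

REQUIRED_TAGS = {'Owner', 'CostCenter', 'Environment'}  # placeholder keys

ec2 = boto3.client('ec2')

# Walk every instance in the region and report any missing required tags.
# (For large fleets, paginate using the NextToken from describe_instances.)
for reservation in ec2.describe_instances()['Reservations']:
    for instance in reservation['Instances']:
        tags = {t['Key'] for t in instance.get('Tags', [])}
        missing = REQUIRED_TAGS - tags
        if missing:
            print('%s is missing tags: %s' % (
                instance['InstanceId'], ', '.join(sorted(missing))))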

This is a long list of questions.  It isn’t intended to scare you away from the cloud, but rather to help you embrace it correctly.  No two enterprises are identical, but most share many of the same challenges.  Starting with this list of questions can help you identify the approaches that make a migration to the much-promised benefits of the cloud successful.

-Ian Willoughby, Principal Architect


AWS Lambda Scheduled Event Function Deep Dive: Part 4

Registering the Lambda Function

Now that we’ve created the Lambda function IAM role, for the sake of simplicity let’s assume the function itself is already written, packaged, and sitting in an S3 bucket that your Lambda function role has access to.  For this example, let’s assume our S3 URL for this backup function is: s3://my-awesome-lambda-functions/ec2_backup_manager.zip

The IAM policy rights required for creating the Lambda function and the Scheduled Event are:

  • lambda:CreateFunction
  • lambda:AddPermission
  • events:PutRule
  • events:PutTargets

Registering the Lambda Function and Scheduled Event via the AWS Lambda API using Python and boto3

Note: Like the previous boto3 example, you must either have your AWS credentials defined in a supported location (e.g. ENV variables, ~/.boto, ~/.aws/configuration, EC2 meta-data) or you must specify credentials when creating your boto3 client (or alternatively ‘session’).  The User/Role associated with the AWS credentials must also have the necessary rights, defined by policy, to perform the required operations against the AWS Lambda API.

A few notes on the create_function arguments

  • The Runtime is the language and version being used by our function (i.e. python2.7)
  • The Role is the ARN of the role we created in the previous exercise
  • The Handler is the function within your code that Lambda calls to begin execution. For our python example the value is: {lambda_function_name}.{handler_function_name}
  • The Code is either a base64-encoded zip file or the S3 bucket and key

An Important Warning about IAM Role Replication Delay

In the previous step we created an IAM role that we reference in the code below when creating our Lambda function.  Since IAM is a region-independent AWS service, it takes some time, usually less than a minute, for create/update/delete actions against the IAM API to replicate to all AWS regions.  This means that when we perform the create_function operation in the script below, we may very well receive an error from the Lambda API stating that the role we specified does not exist.  This is especially true when scripting an operation where create_function happens only milliseconds after create_role.  Since there is no way of querying the Lambda API to see whether the role is available in the region prior to creating the function, the best option is to use exception handling to catch the specific error where the role does not yet exist and wrap that handler in a retry loop with an exponential back-off algorithm (though sleeping for 10-15 seconds will work just fine too).

Let’s pick up in the python script where we previously left off in the last example:

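Here is a sketch of that continuation, using the names from this article.  The placeholder role ARN stands in for the value returned by create_role in the previous script, and the daily schedule expression is an assumption:

import time

import boto3
from botocore.exceptions import ClientError

lambda_client = boto3.client('lambda')
events_client = boto3.client('events')

function_name = 'ec2_backup_manager'
# Placeholder; use the ARN returned by create_role in the previous script
role_arn = 'arn:aws:iam::123456789012:role/ec2_backup_manager'

# Retry create_function with exponential back-off, because the IAM role
# may not have replicated to this region yet (see the warning above).
# A production script would check for the specific role-related error code.
for attempt in range(5):
    try:
        function = lambda_client.create_function(
            FunctionName=function_name,
            Runtime='python2.7',
            Role=role_arn,
            Handler='ec2_backup_manager.handler',
            Code={'S3Bucket': 'my-awesome-lambda-functions',
                  'S3Key': 'ec2_backup_manager.zip'},
            Timeout=300)
        break
    except ClientError:
        time.sleep(2 ** attempt)

# Create the scheduled event rule (assumption: daily backups)
rule = events_client.put_rule(
    Name='ec2-backup-manager-schedule',
    ScheduleExpression='rate(1 day)',
    State='ENABLED')

# Grant CloudWatch Events permission to invoke the function
lambda_client.add_permission(
    FunctionName=function_name,
    StatementId='ec2-backup-manager-scheduled-event',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn'])

# Point the rule at the function
events_client.put_targets(
    Rule='ec2-backup-manager-schedule',
    Targets=[{'Id': 'ec2-backup-manager', 'Arn': function['FunctionArn']}])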

Registering the Lambda Function and Scheduled Event using AWS CloudFormation Template

Note: The S3 bucket MUST exist in the same region you are launching your CFT stack in.  To support multi-region templates, I generally have a Lambda S3 bucket for each region and append a .{REGION_NAME} suffix to the bucket name (e.g. my-awesome-lambda-functions.us-west-2).  Since CloudFormation provides a pseudo-parameter for the region you are launching the stack in (AWS::Region), you can use it to ensure you are referencing the appropriate bucket (see my example below).

The following block of JSON can be used in conjunction with our previous CloudFormation snippet by being added to the template’s “Resource” section to create the Lambda function, CloudWatch Event, and Input:

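A sketch of that JSON follows.  It assumes the Lambda execution role from the next post has the logical name LambdaExecutionRole; the daily schedule and the Input payload are likewise assumptions.  Note the AWS::Region suffix on the bucket name, as described above:

"EC2BackupManagerFunction": {
  "Type": "AWS::Lambda::Function",
  "Properties": {
    "Handler": "ec2_backup_manager.handler",
    "Runtime": "python2.7",
    "Timeout": "300",
    "Role": { "Fn::GetAtt": ["LambdaExecutionRole", "Arn"] },
    "Code": {
      "S3Bucket": { "Fn::Join": ["", ["my-awesome-lambda-functions.", { "Ref": "AWS::Region" }]] },
      "S3Key": "ec2_backup_manager.zip"
    }
  }
},
"EC2BackupManagerScheduledRule": {
  "Type": "AWS::Events::Rule",
  "Properties": {
    "Description": "Scheduled trigger for the EC2 backup manager",
    "ScheduleExpression": "rate(1 day)",
    "State": "ENABLED",
    "Targets": [{
      "Arn": { "Fn::GetAtt": ["EC2BackupManagerFunction", "Arn"] },
      "Id": "EC2BackupManagerFunction",
      "Input": "{\"retention_days\": 7}"
    }]
  }
},
"EC2BackupManagerInvokePermission": {
  "Type": "AWS::Lambda::Permission",
  "Properties": {
    "FunctionName": { "Ref": "EC2BackupManagerFunction" },
    "Action": "lambda:InvokeFunction",
    "Principal": "events.amazonaws.com",
    "SourceArn": { "Fn::GetAtt": ["EC2BackupManagerScheduledRule", "Arn"] }
  }
}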

If you implement your Lambda functions using either of the two examples provided, you should be able to reliably create, test, manage, automate and scale them to whatever extent and on whatever schedule you need.  And the best part is that you will only be charged for the ACTUAL compute time you use, in 100ms increments.

Now go have fun, automate everything you possibly can, and save your organization thousands of dollars in wasted compute costs!  Oh, and conserve a bunch of energy while you’re at it.

-Ryan Kennedy, Sr Cloud Consultant


AWS Lambda Scheduled Event Function Deep Dive: Part 3

Creating the Lambda Function IAM Role

In our last article, we looked at how to set up scheduled events using either the API (Python and boto3) or CloudFormation, including the required Trusted Entity Policy Document and IAM Role.  This role and policy can be created manually using the AWS web console (not recommended), scripted using the IAM API (e.g. Python and boto3), or created with a templating tool (e.g. CloudFormation, HashiCorp’s Terraform).  For this exercise we will cover both the scripted and the templated approaches.

The IAM policy rights required for creating the Lambda function role and policy are:

  • iam:CreateRole
  • iam:PutRolePolicy

Creating the Lambda IAM role via the IAM API using Python and boto3

Note: For the following example you must either have your AWS credentials defined in a supported location (e.g. ENV variables, ~/.boto, ~/.aws/configuration, EC2 meta-data) or you must specify credentials when creating your boto3 client (or alternatively ‘session’).  The User/Role associated with the AWS credentials must also have the necessary rights, defined by policy, to perform the required operations against AWS IAM API.

The following python script will produce our desired IAM role and policy:

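A sketch of that script, written for the ec2_backup_manager function from the previous post.  The CloudWatch Logs statement mirrors the Lambda logging policy shown earlier in this series; the EC2 snapshot rights are an assumption about what a backup function would need:

import json

import boto3

iam = boto3.client('iam')

# Trusted entity policy document allowing Lambda to assume the role
trust_policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Principal': {'Service': 'lambda.amazonaws.com'},
        'Action': 'sts:AssumeRole'
    }]
}

role = iam.create_role(
    RoleName='ec2_backup_manager',
    AssumeRolePolicyDocument=json.dumps(trust_policy))
role_arn = role['Role']['Arn']

# Inline policy: write logs to CloudWatch, plus the EC2 rights a backup
# function plausibly needs (adjust to whatever your function actually does)
policy = {
    'Version': '2012-10-17',
    'Statement': [
        {'Effect': 'Allow',
         'Action': ['logs:CreateLogGroup', 'logs:CreateLogStream',
                    'logs:PutLogEvents'],
         'Resource': 'arn:aws:logs:*:*:*'},
        {'Effect': 'Allow',
         'Action': ['ec2:DescribeInstances', 'ec2:DescribeVolumes',
                    'ec2:DescribeSnapshots', 'ec2:CreateSnapshot',
                    'ec2:DeleteSnapshot', 'ec2:CreateTags'],
         'Resource': '*'}
    ]
}

iam.put_role_policy(
    RoleName='ec2_backup_manager',
    PolicyName='ec2_backup_manager_policy',
    PolicyDocument=json.dumps(policy))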

That will create the necessary Lambda function role and its inline policy.

Creating the Lambda IAM role using AWS CloudFormation Template

The following block of JSON can be added to a CloudFormation template’s “Resource” section to create the Lambda function role and its inline policy:

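A sketch of that block, mirroring the boto3 version above.  The logical name LambdaExecutionRole is what the Part 4 function snippet references, and the EC2 rights remain an assumption:

"LambdaExecutionRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "Service": ["lambda.amazonaws.com"] },
        "Action": ["sts:AssumeRole"]
      }]
    },
    "Policies": [{
      "PolicyName": "ec2_backup_manager_policy",
      "PolicyDocument": {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
          "Resource": "arn:aws:logs:*:*:*"
        }, {
          "Effect": "Allow",
          "Action": ["ec2:DescribeInstances", "ec2:DescribeVolumes", "ec2:DescribeSnapshots", "ec2:CreateSnapshot", "ec2:DeleteSnapshot", "ec2:CreateTags"],
          "Resource": "*"
        }]
      }
    }]
  }
}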

Visit us next week for the final segment of this blog series – Registering the Lambda Function.

-Ryan Kennedy, Sr Cloud Consultant
