In this post, we’ll go over a complete workflow for continuous integration (CI) and continuous delivery (CD) for infrastructure as code (IaC) with just two tools: Terraform and Atlantis.
What is Terraform?
According to the Terraform website:
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.
In practice, this means that Terraform allows you to declare what you want your infrastructure to look like – in any cloud provider – and will automatically determine the changes necessary to make it so. Because of its simple syntax and cross-cloud compatibility, it’s 2nd Watch’s choice for infrastructure as code.
Pain You May Be Experiencing Working With Terraform
When you have multiple collaborators (individuals, teams, etc.) working on a Terraform codebase, some common problems are likely to emerge:
- Enforcing peer review becomes difficult. In any codebase, you’ll want your code peer reviewed to ensure better quality, in accordance with The Second Way of DevOps: Feedback. The role of peer review in IaC codebases is even more important. IaC is a powerful but double-edged tool – we are clearly more productive for using it, but that increased productivity also means that a simple typo could potentially cause a major issue with production infrastructure. To minimize the potential for bad code to be deployed, you should require peer review on all proposed changes to a codebase (e.g. GitHub Pull Requests with at least one reviewer required). Terraform’s open source offering has no facility to enforce this rule.
- Terraform plan output is not easily integrated in code reviews. In all code reviews, you must examine the source code to ensure that your standards are followed, that the code is readable, that it’s reasonably optimized, etc. In this aspect, reviewing Terraform code is like reviewing any other code. However, Terraform code has the unique requirement that you must also examine the effect the code change will have upon your infrastructure (i.e. you must also review the output of a terraform plan command). When you potentially have multiple feature branches in the review process, it becomes critical that you are assured that the terraform plan output is what will be executed when you run terraform apply. If the state of infrastructure changes between a run of terraform plan and a run of terraform apply, the effect of this difference in state could range from inconvenient (the apply fails) to catastrophic (a significant production outage). Terraform itself offers locking capabilities but does not provide an easy way to integrate locking into a peer review process in its open source product.
- Too many sets of privileged credentials. Highly privileged credentials are often required to perform Terraform actions, and the greater the number of principals you have with privileged access, the larger your attack surface becomes. Therefore, from a security standpoint, we’d like to have fewer sets of admin credentials that can potentially be compromised.
What is Atlantis?
Atlantis is an open source tool that allows safe collaboration on Terraform projects by ensuring that proposed changes are reviewed and that the proposed change is the actual change that will be executed on your infrastructure. Atlantis is compatible (at the time of writing) with GitHub and GitLab, so if you’re not using either of these Git hosting systems, you won’t be able to use Atlantis.
How Atlantis Works With Terraform
Atlantis is deployed as a single binary executable with no system-wide dependencies. An operator adds a GitHub or GitLab token for a repository containing Terraform code. The Atlantis installation process then adds hooks to the repository that allow communication with the Atlantis server during the Pull Request process.
You can run Atlantis in a container or a small virtual machine – the only requirement is that the Atlantis instance can communicate with both your version control system (e.g. GitHub) and the infrastructure (e.g. AWS) you’re changing. Once Atlantis is configured for a repository, the typical workflow is:
- A developer creates a feature branch in git, makes some changes, and creates a Pull Request (GitHub) or Merge Request (GitLab).
- The developer enters atlantis plan in a PR comment.
- Via the installed webhooks, Atlantis locally runs terraform plan. If there are no other Pull Requests in progress, Atlantis adds the resulting plan as a comment to the Pull Request.
- If there are other Pull Requests in progress, the command fails because we can’t ensure that the plan will be valid once applied.
- The developer ensures the plan looks good and adds reviewers to the Pull Request.
- Once the PR has been approved, the developer enters atlantis apply in a PR comment. This will trigger Atlantis to run terraform apply and the changes will be deployed to your infrastructure.
- The command will fail if the Pull Request has not been approved.
The following sequence diagram illustrates the sequence of actions described above:
Atlantis sequence diagram
We can see how our pain points in Terraform collaboration are addressed by Atlantis:
- In order to enforce code review, you can launch Atlantis with the --require-approval flag: https://github.com/runatlantis/atlantis#approvals
- In order to ensure that your terraform plan accurately reflects the change to your infrastructure that will be made when you run terraform apply, Atlantis performs locking on a project or workspace basis: https://github.com/runatlantis/atlantis#locking
- In order to avoid creating multiple sets of privileged credentials, you can deploy Atlantis to run on an EC2 instance with a privileged IAM role in its instance profile (e.g. in AWS). In this way, all of your Terraform commands run through a single set of privileged credentials, obviating the need to distribute multiple sets of privileged credentials: https://github.com/runatlantis/atlantis#aws-credentials
You can see that with minimal additional infrastructure you can establish a safe and reliable CI/CD pipeline for your infrastructure as code, enabling you to get more done safely! To find out how you can deploy a CI/CD pipeline in less than 60 days, download our datasheet.
-Josh Kodroff, Associate Cloud Consultant
Picking up where we left off…
In my previous blog I gave a fairly high-level overview of what automated AWS account management could (or rather should) entail. This blog will drill deeper into the processes and give you some real-world code samples of what this looks like.
AWS Organizations and Linked Account Creation:
As mentioned in my last blog, AWS recently announced the general availability of AWS Organizations, which allows you to create linked or nested AWS accounts under a master account and apply policy-based management under the umbrella of the root account. It also allows for hierarchical management (up to five levels deep) of linked accounts via Organizational Units (OUs). Policies can be applied at the global level, the OU level, and the individual account level. It is important to note that conflicting policies always defer to the parent entity’s permission set. That is, an IAM user/role in an account may have permissions to perform some action, but if the account, OU, or global settings at the Organizations level deny those actions, the resulting action for the IAM resource will be denied. In other words, the effective permissions for a resource are the intersection of the resource’s direct permissions assigned in IAM and the permissions allowed by Organizations. This means you can lock linked accounts down to do things like “only manage Route53 DNS resources” or “only manage S3 resources” using Organizations policies. Pretty nice way of segmenting off security and reducing the potential blast radius.
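To make the “only manage S3 resources” example concrete, here is a minimal sketch of what such a Service Control Policy (SCP) could look like. The policy content below is my own illustrative assumption, not taken from the original post; with an SCP like this attached, only S3 actions can ever take effect in the account, no matter what IAM grants inside it.

```python
import json

# Sketch of an SCP restricting a linked account to Amazon S3 actions only.
s3_only_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # SCPs act as a permission boundary: anything not allowed
            # here is denied, regardless of the account's IAM policies.
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*",
        }
    ],
}

print(json.dumps(s3_only_scp, indent=2))
```

You would register and attach a policy like this with aws organizations create-policy --type SERVICE_CONTROL_POLICY and aws organizations attach-policy.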
I am going to pick the lowest common denominator for the following examples: the AWS CLI. Though I rarely use it for actual automation code, I figure most folks are familiar with it, and it has a pretty intuitive syntax.
Step 1: Enable Organizations on your root account
Ensure that your AWS Profile environment variable is set to your desired root account AWS profile that has the necessary permissions to work with AWS Organizations. Alternatively, if you don’t want to use an environment variable, you can either ensure the default AWS profile is the one which has permissions on your root account, or you can specify the --profile argument when typing your AWS CLI commands. I’m going to use the AWS_DEFAULT_PROFILE environment variable in my examples here (output redacted).
> export AWS_DEFAULT_PROFILE=myrootacctadmin
This of course assumes you have a profile set up under your HOME dir in the .aws/credentials file named myrootacctadmin.
Minimally, this will look something like this:
[myrootacctadmin]
aws_access_key_id = AKI?????????????????
aws_secret_access_key = somesecretaccesskey0somesecretaccesskey0
Now that we have our environment set we can get on with running the AWS CLI commands to create our organization.
Let’s be safe and make sure we don’t already have an organization created under our root account:
> aws organizations list-roots
An error occurred (AWSOrganizationsNotInUseException) when calling the ListRoots operation: Your account is not a member of an organization.
As the error message indicates, this account is not currently a part of any organization and will need to be configured to use organizations if we want to use this as our master account and create linked accounts underneath it.
Easy enough, let’s just create our organization…
> aws organizations create-organization
Now that we have created an organization let’s try our list-roots command again to see if we get something different this time…
> aws organizations list-roots
Indeed! Our myrootacctadmin account is listed as the root (i.e. master) of our entire organization. This is exactly what we wanted. Now let’s see what AWS accounts are identified as part of this organization…
> aws organizations list-accounts
"Name": "Satoshi Nakamoto",
As expected, just our root account. It looks kind of lonely there all by itself, so let’s go ahead and create a Linked account underneath it.
Step 2. Create a Linked Account under your Organization
> aws organizations create-account --email email@example.com --account-name brawndo
The actual creation of the account is not instantaneous, and the API responds to the create-account call before the new account creation is complete. While creation is usually quick, unless we ensure it has completed before performing any additional automation against the account, we may receive an error from the API indicating the account is not yet ready. So prior to performing additional configuration on the new account, we need to ensure the State has reached SUCCEEDED. In your automation code you will generally just loop until the State equals SUCCEEDED before moving on to the next step. It is also a good idea to catch failures (e.g. State == "FAILED") and handle them gracefully. The account creation status can be retrieved as follows:
> aws organizations describe-create-account-status --create-account-request-id car-0123456789abcdef0123456789abcdef
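The loop-until-SUCCEEDED logic described above might be sketched like this. The describe_status callable below is a stand-in for the real describe-create-account-status call (shown here with simulated responses, since the actual API requires AWS credentials); the function names and delays are my own assumptions.

```python
import time

def wait_for_account(request_id, describe_status, delay=0.0, max_attempts=50):
    """Poll the create-account status until it reaches a terminal state.

    describe_status is any callable returning a dict with a 'State' key,
    standing in for `aws organizations describe-create-account-status`.
    """
    for _ in range(max_attempts):
        status = describe_status(request_id)
        state = status["State"]
        if state == "SUCCEEDED":
            return status
        if state == "FAILED":
            # Catch failures and handle them gracefully, not silently.
            raise RuntimeError("Account creation failed: %s" % status.get("FailureReason"))
        time.sleep(delay)  # still IN_PROGRESS: wait before the next poll
    raise TimeoutError("Account creation did not complete in time")

# Simulated responses standing in for the real API:
responses = iter([
    {"State": "IN_PROGRESS"},
    {"State": "SUCCEEDED", "AccountId": "111111111111"},
])
result = wait_for_account("car-example", lambda _rid: next(responses))
print(result["State"])  # SUCCEEDED
```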
Congratulations! You’ve just enabled AWS Organizations and created your first linked account!
At this point you should have a couple of emails from AWS in the inbox of the email address used to create the new account. They are standard boilerplate emails: one is a “Welcome to Amazon Web Services” email, and the other tells you that your account is ready and includes some “getting started” links.
Step 3: Reset New Linked Account Root Password
Now that your linked account has been created, you will need to go through the AWS Reset Root Account Password workflow to make your new account accessible from either the AWS Web Console or the AWS APIs. The recommended approach here is to reset the root account password, enable MFA, create an IAM user with Administrator privileges, store the root account secrets in a VERY secure place, and only use them as a last resort for account access.
Here’s a shortened URL that will take you directly to the root account password reset page: http://amzn.pw/45Nxe
Step 4: (Optionally) Create Organizational Units
Let’s go through a couple of examples of Organizational Units.
- OU for only allowing S3 services
- OU for only allowing services in us-west-2 and us-east-1 regions
“What if I want to bring my existing accounts under the umbrella of Organizations?” you ask.
Good news! You can invite existing AWS accounts to join your organization. Using the API you can issue an invitation to an existing account by Account ID, Email, or Organization. For the sake of simplicity, let’s use an Account ID (222222222222) for the following example (again, using the root/master account AWS profile):
> aws organizations invite-account-to-organization --target Id=222222222222,Type=ACCOUNT
"Value": "Satoshi Nakamoto",
A couple of things of note: the handshake Id is what will be required to accept the invitation on the linked account side. Notice the difference between the RequestedTimestamp (epoch 1524610827.55) and the ExpirationTimestamp (epoch 1525906827.55): 1296000 seconds. Divide that by the 86400 seconds in a day and we get 15 days.
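The window arithmetic above checks out; a quick worked version using the two timestamps from the handshake:

```python
# Epoch timestamps from the handshake response shown above.
requested = 1524610827.55   # RequestedTimestamp
expiration = 1525906827.55  # ExpirationTimestamp

window_seconds = expiration - requested  # about 1,296,000 seconds
window_days = window_seconds / 86400     # 86,400 seconds per day
print(round(window_days))  # 15
```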
At this point you have 15 days to issue an acceptance of the invitation (aka the handshake) from the target AWS account. You could simply log in to the AWS Web Console, navigate to Organizations, and accept the invitation, but that’s not what this article is about now, is it? We’re talking automation here! And, as all good DevOpsers know, we utilize security entities that employ PoLP (Principle of Least Privilege) to perform process-specific tasks.
This means we aren’t going to do something ludicrous like adding AWS Access Keys to our root account login (please don’t ever do this). Nor are we going to create an IAM User with Administrator access for this very specific task. You can create either a User or a Role in the target account to accept the handshake, although creating a Role will require you to assume that Role using STS, which might be overkill. On the other hand, you might use a Lambda function to automate the handshake, in which case you most certainly would utilize an IAM Role. Either way, the following IAM Policy Document will provide the User/Role with the required permissions to accept (or decline) the invitation:
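A minimal sketch of such a policy document follows. The exact action list is my assumption based on the Organizations handshake APIs, not copied from the original post, so verify it against your own requirements before use.

```python
import json

# Least-privilege policy sketch for accepting an Organizations invitation
# from the target (invited) account side.
accept_invite_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "organizations:AcceptHandshake",
                "organizations:DeclineHandshake",
                "organizations:ListHandshakesForAccount",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(accept_invite_policy, indent=2))
```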
Using the AWS CLI (leveraging a profile of a User/Role with the aforementioned permissions under the existing target account), you would issue the following command to accept the invitation/handshake:
> aws organizations accept-handshake --handshake-id h-0123456789abcdef0123456789abcdef
"Value": "Satoshi Nakamoto",
The returned JSON struct is the exact same handshake struct returned by the API when we issued the invitation with one important difference. The State property is now reflecting a value of ACCEPTED.
That’s it. You’ve successfully linked an existing account into your Organization under the master billing account.
In the next installment, I will go into depth on the processes involved in automating the Account Initialization, Configuration, and Continuous Compliance.
Thanks for tuning in!
-Ryan Kennedy, Principal Cloud Automation Architect
-Craig Monson, Sr Automation Architect
-Lars Cromley, Director of Engineering
Compliance is a constant challenge today. Keeping our system images in a healthy and trusted state of compliance requires time and effort. There are countless tools and technologies on the market to help customers maintain compliance and state, so where do I start?
Amazon has built a rich set of core technologies within the Amazon Web Services console. Systems Manager is a fantastic operations management platform that can assist you with setting up and maintaining configuration and state management.
One of the first things we must focus on when we build out our core images in the cloud is the configuration of those images. What is the role of the image, what operating system am I going to utilize and what applications and/or core services do I need to enable, configure and maintain? In the datacenter, we call these Gold Images. The same applies in the cloud.
We define these roles for our images and place them in different functional areas – Infrastructure, Web Services, Applications. We may have many core image templates for our enterprise workloads. By building these base images and maintaining them continuously, we set in motion a solid foundation for core security and core compliance of our cloud environment.
AWS Systems Manager looks across my cloud environment and allows me to bring together all the key information about my operating resources in the cloud. It allows me to centralize the gathering of all core baseline information for my resources in one place. In the past, I would have had to look at my AWS CloudWatch information in one area, my AWS CloudTrail information in another, and my configuration information in yet another. Centralizing this information allows you to see the holistic state of your cloud environment baselines in one console.
AWS Systems Manager provides built-in Insights and Dashboards that let you look across your entire cloud environment and see into and act upon your cloud resources. It shows the configuration compliance of all your resources as well as the state management and associations across them. It provides a rich ability to customize configuration and state management for your workloads, applications, and resource types, and to scan and analyze continuously to ensure those configurations and states are maintained. With AWS Systems Manager you can create your own custom compliance types to match your company’s organizational baseline and business requirements. With that in place, you can constantly scan and analyze against these compliance baselines to maintain the desired operational configuration and state at all times.
We can analyze and report on the current state and quickly determine, centrally, whether our cloud services and resources are in or out of compliance. We can create baseline reports on our compliance position at any time, and with this knowledge we can set in motion remediation to return our services and resources to a compliant state and configuration.
With AWS Systems Manager we can scan all resources for patch state, determine which patches are missing, and remediate those patches manually, on a schedule, or automatically to maintain patch management compliance.
AWS Systems Manager also integrates with Chef InSpec, allowing you to leverage InSpec to operate a continuous compliance framework for your cloud resources.
On the road to compliance, it is important to leverage the tools and capabilities of your cloud provider. AWS gives us a rich set of systems management capabilities across configuration, state management, patch management, remediation, and reporting. AWS Systems Manager is provided at no additional cost to AWS customers and will help you along your journey to continuous compliance of your cloud environment, across the AWS Cloud and hybrid cloud. To learn more about using AWS Systems Manager or your systems’ compliance, contact us.
-Peter Meister, Sr Director of Product Management
What does that even mean?
What I am talking about here is the automation of the following:
- AWS Linked Account Creation (the creation of secondary accounts under a single master account)
- Account Initialization and Configuration
- Continuous Compliance
It is commonplace for organizations to manage their AWS assets/resources across a wide range of different AWS accounts. This is nothing new, and we’ve seen some of our customers scale this into the hundreds. This has some pretty obvious implications from an operational, security, and accounting standpoint.
AWS Linked Account Creation
First there is the creation of the linked account itself, which can be a time-consuming and arduous (if only one-time) process. Even if you have a rigid process for this, it is inevitable that human error will introduce some drift or inconsistency at some point. It’s not a matter of if, just a matter of when. There is also the tracking of the root account credentials and everything that goes along with that. Looks like another process that is ripe for some sweet, sweet automation. Until very recently there was no API available for this, but AWS released a beta API to create linked accounts around a year ago, and it has recently gone to general availability. So score one for automation!
Account Initialization and Configuration
Now you’ve got your shiny new linked account, but for every account you manage you have to ensure that all of your base settings and resources are properly set up (e.g. AWS CloudTrail, AWS Config, IAM password policies, SAML federation with your central AD, and so on). Not only set up, but set up in a consistent way so that you don’t have drift between accounts. Ok, so you could put together a nice CloudFormation template (CFT), manage it in Terraform, or possibly just use a homegrown set of scripts (bash + AWS CLI, Python, Ruby, etc.). Those are all a great start, but you still need to be able to audit those resources to ensure they are what they are supposed to be. Also, you need to support the ability to push changes to those resources.
A few examples…
- IT AD Admin: The ADFS servers are updating their XML metadata doc, so we need you to go update the ADFS SAML Federation for our 37 linked accounts.
- IT Security Admin: We need to actively manage our set of IAM Roles that map to ADFS groups and their respective permissions on a regular and ongoing basis. How are we going to quickly and consistently do that across our 37 linked accounts?
- IT Security Admin: Hey, our email address for AWS CloudTrail notifications (SNS subscription) needs to be updated to use a new email address. I need you to get that updated on all of our 37 linked accounts ASAP!
And on and on it goes. Suffice it to say, there is a never-ending need to make modifications across one, several, or all of your linked AWS accounts. You need an approach for handling what would normally be an unwieldy and tedious bit of guaranteed work. The more human intervention required to manage these things, the more likely we are to see inconsistencies, errors, and misses. And we’ve seen enough cautionary tales about failed security practices in the news in the past few years that I don’t need to stress the importance of getting this stuff right. Every time. All the time.
Once you have these things configured, you really need a way to continually audit those resources and settings on an ongoing basis, and ideally be able to automatically respond to drift events. This one is a bit trickier than the others because, while you can use tools like CloudFormation or Terraform to set up your initial settings and configurations, the resources they create can be modified afterwards outside of the tool they were created/configured with in the first place. Tools like AWS CloudTrail and AWS Config provide valuable tracking information for helping audit resources but alone don’t solve this puzzle, especially if you are talking about managing this across a few dozen accounts. Something more robust must be employed to collect that data and do something intelligent with it.
How do I escape this sort of multi-account management nightmare you are describing?!!
I’ll be going into a deeper dive into this in my next blog, but here is a high-level overview of the architecture and accompanying tools and technologies you can put in place to pull it off.
AWS Linked Account Creation
With the somewhat recent release of the Organizations API, this has become a reality. As per the CreateAccount API documentation, you will need to ensure that AWS Organizations is enabled in the master account. But fear not! You probably already have it enabled. Specifically, if you are already running multiple accounts under a master account, then you most certainly do. I won’t bore you with the details, and AWS already has a very nice article detailing the process required to use Organizations and the API to automate account creation. Pretty spiffy!
Account Initialization and Configuration
Once you have created the linked account using the CreateAccount API the next step is to apply any and all org-specific initialization and configuration to the new account to get it all ready for action. This step and the Continuous Compliance step can also be managed by the same tool if that is how you decide to architect it.
The key is that this is where we initialize the account with its base configuration. Whether you do that with custom scripting/code, CloudFormation, Terraform, or some amalgamation of those and/or other tools/services is not of paramount importance. What is important is having a way to track those resources and their state. Make sure you keep that in mind when architecting a solution. One nice thing about CloudFormation is that state tracking is built right into the service itself. You can easily list all resources within a CloudFormation stack, and you can include stack Outputs to track any custom data you may generate or derive during the stack launch.
You could do something similar with Terraform through the use of its state files, but non-enterprise Terraform lacks the API queryability that CloudFormation has built in. Also, it is less transparent to the casual onlooker in the AWS console where resources originated. Of course, once you query the resources you will still need a method for determining and tracking their state. But now we’re getting ahead of ourselves.
This is going to require a service that will allow you to:
- Track the state of resources we care about
- Audit the state of those resources automatically on an ongoing basis
- Report on any configuration drift
- Optionally, automatically remediate drift
Using the AWS CloudTrail and AWS Config services gives us the ability to track changes real-time and tie those changes to a specific user/role. But what about services that are not yet supported by AWS Config? In that case you may want to (as we have done) build a suite of services to handle these tasks. Resources and configurations are registered with a service that tracks their known-desired state. Another service is responsible for querying the current state of those items and raising a flag if there is drift. Potentially another service could report on those flagged out-of-compliance resources/settings. Optionally you could deploy a service that remediates drift in your desired configuration state on all out-of-compliance resources, or possibly just a subset.
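The core of the drift-flagging service described above is a comparison between the registered desired state and the observed state. A minimal sketch, with hypothetical setting names of my own invention:

```python
def find_drift(desired, current):
    """Return {key: (desired_value, observed_value)} for every setting
    whose observed value differs from the registered desired state.
    Settings missing from the observed state count as drift too."""
    drift = {}
    for key, want in desired.items():
        have = current.get(key)
        if have != want:
            drift[key] = (want, have)
    return drift

# Hypothetical registered state vs. what an auditing service observed:
desired_state = {
    "cloudtrail_enabled": True,
    "config_recorder_enabled": True,
    "password_min_length": 14,
}
observed_state = {
    "cloudtrail_enabled": True,
    "config_recorder_enabled": False,  # changed out-of-band
    "password_min_length": 14,
}

flagged = find_drift(desired_state, observed_state)
print(flagged)  # {'config_recorder_enabled': (True, False)}
```

A reporting or remediation service could then consume the flagged dict, either to notify operators or to push the desired values back.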
At 2nd Watch we’ve actually architected and built out our own Managed Cloud specific implementation of Automated Account Creation and Continuous Compliance. If you would rather focus your energy on your business’s core competencies and not on building foundation cloud management tooling, why not come on board and let us empower you to deliver your product and drive shareholder value in the most secure, stable, and cost-effective way possible? We’ve got the tools and the people to make it happen! Contact us to learn more.
–Ryan Kennedy, Principal Cloud Automation Architect, 2nd Watch
-Craig Monson, Sr Automation Architect
-Lars Cromley, Director of Engineering
Why do it?
Alexa gets a lot of use in our house, and it is very apparent to me that the future is not a touch screen or a mouse, but voice. You can learn the basics of creating an Alexa skill by watching videos and such, but actually building one is a great way to understand the ins and outs of the process and what the backend systems (like AWS Lambda) are capable of.
First you need a problem
To get started, you need a problem to solve. Once you have the problem, you’ll need to think about the solution before you write a line of code. What will your skill do? You need to define the requirements. For my skill, I wanted to ask Alexa to “park my cloud” and have her stop all EC2 instances or RDS databases in my environment.
Building a solution one word at a time
Now that I’ve defined the problem and have an idea of the requirements for the solution, it’s time to start building the skill. The first thing you’ll notice is that the Alexa Skill portal is not in the standard AWS console. You need to go to developer.amazon.com/Alexa, create a developer account, and sign in there. Once inside, there is a lot of good information, and there are videos on creating Alexa skills that are worth reviewing. Click the “Create Skill” button to get started. In my example, I’m building a custom skill.
The process for building a skill is broken into major sections: Build, Test, Launch, and Measure. In each one you’ll have a number of things to complete before moving on to the next section. The major areas of each section are broken down on the left-hand side of the console. On the initial dashboard you’re also presented with the “Skill builder checklist” on the right as a visual reminder of what you need to do before moving on.
This is the first area you’ll work on in the Build phase of your Alexa skill. This is setting up how your users will interact with your skill.
Invocation sets up how your users will launch your skill. For simplicity’s sake, this is often just the name of the skill. The common patterns are “Alexa, ask [my skill] [some request]” or “Alexa, launch [my skill].” You’ll want to make sure the invocation for your skill sounds natural to a native speaker.
I think of intents as the “functions” or “methods” for my Alexa skill. There are a number of built-in intents that should always be included (Cancel, Help, Stop) as well as your custom intents that will compose the main functionality of your skill. Here my intent is called “park” since that will have the logic for parking my AWS systems. The name here will only be exposed to your own code, so it isn’t necessarily important what it is.
Utterances define the patterns of how people will use your skill. You’ll want to focus on natural language and normal patterns of speech for native speakers in your target audience. I would recommend doing some research and speaking to a diverse set of people to get a good cross section of utterances for your skill. More is better.
Amazon also provides the option to use slots (variables) in your utterances. This allows your skill to do things that are dynamic in nature. When you create a variable in an utterance you also need to create a slot and give it a slot type. This is like providing a type to a variable in a programming language (Number, String, etc.) and will allow Amazon to understand what to expect when hearing the utterance. In our simple example, we don’t need any slots.
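For illustration only, here is roughly what an intent with a slot could look like in an interaction model. The intent name, slot name, and sample utterances below are invented; the real skill described above uses no slots.

```python
import json

# Hypothetical interaction-model fragment showing a slot (variable).
park_intent = {
    "name": "ParkEnvironment",
    "slots": [
        # The slot type tells Alexa what kind of value to expect,
        # like a type annotation in a programming language.
        {"name": "environment", "type": "AMAZON.SearchQuery"}
    ],
    "samples": [
        "park my {environment} environment",
        "stop everything in {environment}",
    ],
}

print(json.dumps(park_intent, indent=2))
```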
Interfaces allow you to interface your skill with other services to provide audio, display, or video options. These aren’t needed for a simple skill, so you can skip it.
Here’s where you’ll connect your Alexa skill to the endpoint that will handle the logic for your skill. The easiest setup is to use AWS Lambda. There are lots of example Lambda blueprints using different programming languages and doing different things. Use those to get started, because the JSON response formatting can be difficult otherwise. If you don’t have an Alexa skill id here, you’ll need to Save and Build your skill first. Then a skill id will be generated, and you can use it when configuring your Lambda triggers.
AWS Account Lambda
Assuming you already have an AWS account, you’ll want to deploy a new Lambda from a blueprint that looks somewhat similar to what you’re trying to accomplish with your skill (deployed in us-east-1). Even if nothing matches well, pick any one of them, as they have the JSON return formatting set up so you can use it in your code. This will save you a lot of time and effort. Take a look at the information here and here for more about how to set up and deploy Lambda for Alexa skills. You’ll want to configure your Alexa skill as the trigger for the Lambda in the configuration, and here’s where you’ll copy in your skill id from the developer console “Endpoints” area of the Build phase.
While the actual coding of the Lambda isn’t the purpose of the article, I will include a couple of highlights that are worth mentioning. Below, see the part of the code from the AWS template that would block the Lambda from being run by any Alexa skill other than my own. While the chances of this are rare, there’s no reason for my Lambda to be open to everyone. Here’s what that code looks like in Python:
if (event['session']['application']['applicationId'] != "amzn1.ask.skill.000000000000000000000000"):
    raise ValueError("Invalid Application ID")
Quite simply, if the Alexa application id passed in the session doesn’t match my known Alexa skill id, then raise an error. The other piece of advice I’d give about the Lambda is to create different methods for each intent to keep the logic separated and easy to follow. Make sure you remove any response language from your code that came from the original blueprint. If your responses are inconsistent, Amazon will fail your skill (this happened to me multiple times because I borrowed from the “Color Picker” Lambda blueprint and had some generic responses left in the code). Also, you’ll want to handle your Cancel, Help, and Stop requests correctly. Lastly, as is best practice in all code, add copious logging to CloudWatch so you can diagnose issues. Note the ARN of your Lambda function, as you’ll need it when configuring the endpoints in the developer portal.
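Putting the advice above together, here is a hedged sketch of a per-intent dispatch. The handler names and response texts are my own illustration rather than the author’s actual code, it handles IntentRequests only, and the skill id is the same placeholder used above.

```python
def build_response(speech, end_session=True):
    """Minimal Alexa skill response in the shape the Lambda blueprints produce."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": end_session,
        },
    }

def on_park(intent, session):
    # The real logic to stop EC2/RDS resources would live here.
    return build_response("Parking your cloud now.")

def on_help(intent, session):
    return build_response("Ask me to park your cloud.", end_session=False)

def on_stop(intent, session):
    return build_response("Goodbye.")

# One function per intent keeps the logic separated and easy to follow.
INTENT_HANDLERS = {
    "park": on_park,
    "AMAZON.HelpIntent": on_help,
    "AMAZON.StopIntent": on_stop,
    "AMAZON.CancelIntent": on_stop,
}

MY_SKILL_ID = "amzn1.ask.skill.000000000000000000000000"  # placeholder id

def lambda_handler(event, context):
    # Reject requests from any skill other than our own.
    if event["session"]["application"]["applicationId"] != MY_SKILL_ID:
        raise ValueError("Invalid Application ID")
    intent = event["request"]["intent"]
    handler = INTENT_HANDLERS.get(intent["name"], on_help)
    return handler(intent, event["session"])
```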
Once your Lambda is deployed in AWS, you can go back into the developer portal and begin testing the skill. First, put your Lambda function ARN into the endpoint configuration for your skill. Next, click over to the Test phase at the top and choose “Alexa Simulator.” You can try recording your voice with your computer microphone or typing in the request. I recommend you do both to get a sense of how Alexa will interpret what you say and respond. Note that I’ve found the actual Alexa device is better at natural language processing than the test options using a microphone on my laptop. When you run a test, the console will show you the JSON input and output. You can copy the information from the INPUT pane to build a test event for your Lambda function. If you need to do a lot of work on your Lambda, it’s a lot easier to test from there than to flip back and forth. Pay special attention to your utterances. You’ll learn quickly that your proposed utterances weren’t as natural as you thought. Make updates to the utterances and Lambda as needed and keep testing.
Now you wait. Amazon seems to have a number of automated processes that catch glaring issues, but you will likely end up with some back and forth between yourself and an Amazon employee regarding some part of your skill that needs to be updated. It took about a week to get my final approval and my skill posted.
Creating your own simple Alexa skill is a fun and easy way to get some experience creating applications that respond to voice and understand what’s possible on the platform. Good luck!
-Coin Graham, Senior Cloud Consultant