
Migrating Terraform Remote State to a “Backend” in Terraform v.0.9+

(AKA: Where the heck did ‘terraform remote config’ go?!!!)

If you are working with cloud-based architectures or in a DevOps shop, you’ve no doubt been managing your infrastructure as code. It’s also likely that you are familiar with tools like Amazon CloudFormation and Terraform for defining and building your cloud architecture and infrastructure. For a good comparison of Amazon CloudFormation and Terraform, check out Coin Graham’s blog on the matter: AWS CFT vs. Terraform: Advantages and Disadvantages.

If you are already familiar with Terraform, then you may have encountered a recent change to the way remote state is handled, starting with Terraform v0.9. Continue reading to find out more about migrating Terraform Remote State to a “Backend” in Terraform v.0.9+.

First off… if you are unfamiliar with what remote state is, check out this page.

Remote state is a big ol’ blob of JSON that stores the configuration details and state of the infrastructure Terraform has actually deployed. This is pretty dang important if you ever plan on changing your environment (which is “likely”, to put it lightly), and especially important if you want more than one person managing/editing/maintaining the infrastructure, or if you care even a little bit about backup and recovery.
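For a rough sense of what that blob looks like, here is an abridged, illustrative skeleton of a v.0.9-era state file (the values are placeholders, and a real file also records every resource’s attributes and outputs):

Abridged Terraform State File Example
{
  "version": 3,
  "terraform_version": "0.9.0",
  "serial": 1,
  "lineage": "example-lineage-uuid",
  "modules": [
    {
      "path": ["root"],
      "outputs": {},
      "resources": {}
    }
  ]
}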

Terraform supports almost a dozen backend types (as of this writing) including:

  • Artifactory
  • Azure
  • Consul
  • etcd
  • GCS
  • HTTP
  • Manta
  • S3
  • Swift
  • Terraform Enterprise (AKA: Atlas)


Why not just keep the Terraform state in the same git repo I keep the Terraform code in?

You don’t want to store the state file in a code repository because it may contain sensitive information like DB passwords, and because the state changes every time you run Terraform, which makes it easy to forget to push those changes to your git repo.
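A common companion step (a minimal sketch, not something the docs require) is to keep state out of git entirely once remote state is configured, with a .gitignore along these lines:

Example .gitignore Entries for a Terraform Project
# Local state files and backups (the state of record lives in the remote backend)
*.tfstate
*.tfstate.backup

# Terraform's local working directory (cached modules and local backend metadata)
.terraform/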

So, what happened to terraform remote anyway?

If you’re like me, you probably run the latest version of HashiCorp’s Terraform tool as soon as it is available (we actually have a hook in our team Slack channel that notifies us when a new version is released). With the release of Terraform v.0.9 last month, we were endowed with the usual heaping helping of excellent new features and bug-fixes we’ve come to expect from the folks at HashiCorp, but were also met with an unexpected change in the way remote state is handled.

Unless you are religious about reading the release notes, you may have missed an important change in v.0.9 around remote state. The release notes don’t specifically call out the removal (not even a deprecation, but FULL removal) of the prior method (i.e. terraform remote config), but the Upgrade Guide does call out the process of migrating from the legacy method to the new method of managing remote state. More specifically, it provides a link to a guide for migrating from the legacy remote state config to the new backend system. The steps are pretty straightforward, and the new approach is much improved over the prior method for managing remote state. So, while the change is good, a deprecation warning in v.0.8 would have been much appreciated. At least it is still backwards compatible with the legacy remote state files (up to version 0.10), making the migration process much less painful.

Prior to v.0.9, you may have been managing your Terraform remote state in an S3 bucket utilizing the Terraform remote config command. You could provide arguments like: backend and backend-config to configure things like the S3 region, bucket, and key where you wanted to store your remote state. Most often, this looked like a shell script in the root directory of your Terraform directory that you ran whenever you wanted to initialize or configure your backend for that project.

Something like…

Terraform Legacy Remote S3 Backend Configuration Example
export AWS_PROFILE=myprofile
terraform remote config \
--backend=s3 \
--backend-config="bucket=my-tfstates" \
--backend-config="key=projectX.tfstate" \

This was a bit clunky but functional. Regardless, it was rather annoying having some configuration elements outside of the normal terraform config (*.tf) files.

Along came Terraform v.0.9

The introduction of Terraform v.0.9 with its newfangled “Backends” makes things much more seamless and transparent. Now we can replicate that same remote state backend configuration with a backend block in a Terraform configuration like so:

Terraform S3 Backend Configuration Example
terraform {
  backend "s3" {
    bucket = "my-tfstates"
    key    = "projectX.tfstate"
    region = "us-west-2"
A Migration Example

So, using our examples above, let’s walk through the process of migrating from a legacy “remote config” to a “backend”.  Detailed instructions for the following can be found here.

1. (Prior to upgrading to Terraform v.0.9+) Pull remote config with pre v.0.9 Terraform

> terraform remote pull
Local and remote state in sync

2. Backup your terraform.tfstate file

> cp .terraform/terraform.tfstate terraform.tfstate.backup

3. Upgrade Terraform to v.0.9+

4. Configure the new backend

terraform {
  backend "s3" {
    bucket = "my-tfstates"
    key    = "projectX.tfstate"
    region = "us-west-2"

5. Run Terraform init

> terraform init
Downloading modules (if any)...
Initializing the backend...
New backend configuration detected with legacy remote state!
Terraform has detected that you're attempting to configure a new backend.
At the same time, legacy remote state configuration was found. Terraform will
first configure the new backend, and then ask if you'd like to migrate
your remote state to the new backend.
Do you want to copy the legacy remote state from "s3"?
  Terraform can copy the existing state in your legacy remote state
  backend to your newly configured backend. Please answer "yes" or "no".
  Enter a value: no
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your environment. If you forget, other
commands will detect it and remind you to do so if necessary.

6. Verify the new state is copacetic

> terraform plan
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, Terraform
doesn't need to do anything.

7.  Commit and push

In closing…

Managing your infrastructure as code isn’t rocket science, but it also isn’t trivial.  Having a solid understanding of cloud architectures, the Well Architected Framework, and DevOps best practices can greatly impact the success you have.  A lot goes into architecting and engineering solutions in a way that maximizes your business value, application reliability, agility, velocity, and key differentiators.  This can be a daunting task, but it doesn’t have to be!  2nd Watch has the people, processes, and tools to make managing your cloud infrastructure as code a breeze! Contact us today to find out how.


— Ryan Kennedy, Principal Cloud Automation Architect, 2nd Watch



Ransomware Attack Leaves Some Companies WannaCrying Over Technical Debt

The outbreak of a virulent strain of ransomware, alternately known as WannaCry or WannaCrypt, is finally winding down. A form of malware, the WannaCry attack exploited certain vulnerabilities in Microsoft Windows and infected hundreds of thousands of Windows computers worldwide.  As the dust begins to settle, the conversation inevitably turns to what could have been done to prevent it.

The first observation is that most organizations could have been protected simply by following best practices—most notably, the regular installation of known security and critical patches that help to minimize vulnerabilities. WannaCry was not an exotic “zero day” incident. The patch for the underlying vulnerabilities (MS17-010) has been available since March. Companies like 2nd Watch maintain a regular patch schedule to protect their systems from these and similar attacks. It should be noted that due to the prolific nature of this malware and the active attack vectors, 2nd Watch is requiring that all Windows systems be patched by 5/31/2017.

Other best practices include:

  • Maintaining support contracts for out-of-date operating systems
  • Enabling firewalls, in addition to intrusion detection and prevention systems
  • Proactively monitoring and validating traffic going in and out of the network
  • Implementing security mechanisms for other points of entry attackers can use, such as email and websites
  • Deploying application control to prevent suspicious files from executing in addition to behavior monitoring that can thwart unwanted modifications to the system
  • Employing data categorization and network segmentation to mitigate further exposure and damage to data
  • Backing up important data. This is the single most effective way of combating ransomware infection. However, organizations should ensure that backups are appropriately protected or stored off-line so that attackers can’t delete them.


The importance of regularly scheduled patching and keeping systems up-to-date cannot be overemphasized. It may not be sexy, but it is highly effective.

All of these recommendations seem simple enough, but why did the outbreak spread so quickly if the vulnerabilities were known and patches were readily available? It spread because the patches were released for currently supported systems, but the vulnerability has been present in all versions of Windows dating back to Windows XP. For these older systems – no longer supported by Microsoft but still widely used – the patches weren’t there in the first place. One of the highest profile victims, Britain’s National Health Service, discovered that 90 percent of NHS trusts run at least one Windows XP device, an operating system Microsoft first introduced in 2001 and hasn’t supported since 2014. In fact, it was only because of the high-profile nature of this malware that Microsoft took the rare step this week of publishing a patch for Windows XP, Windows Server 2003 and Windows 8.

This brings us to the challenging topic of “technical debt”—the extra cost and effort to continue using older technology. The WannaCry/WannaCrypt outbreak is simply the most recent teachable moment about those costs.

A big benefit of moving to cloud computing is its ability to help rid one’s organization of technical debt. By migrating workloads into the cloud, and even better, by evolving those workloads into modern, cloud-native architectures, the issue of supporting older servers and operating systems is minimized. As Gartner pointed out in the 2017 Gartner Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide, through 2018, the cloud managed service market will remain relatively immature, and more than 75% of fully successful implementations will be delivered by highly skilled, forward-looking, boutique managed service providers with a cloud-native, DevOps-centric service delivery approach, like 2nd Watch.  A free download of the report can be found here.

Partners like 2nd Watch can also help reduce your overall management cost by tailoring solutions to manage your infrastructure in the cloud. Many of the best practices mentioned above – regular patching, resource isolation, traffic monitoring, etc. – can be automated and handled for you so you can focus on your business.

Even more important, companies like 2nd Watch help ensure the ongoing optimization of your workloads, both from a cost and a performance point of view. The life-cycle of optimization and modernization of your cloud environments is perhaps the single greatest mechanism to ensure that you never take on and retain high levels of technical debt.


-John Lawler, Sr Product Manager


2nd Watch Meets Customer Demands and Prepares for Continued Growth and Acceleration with Amazon Aurora

The Product Development team at 2nd Watch is responsible for many technology environments that support our software and solutions—and ultimately, our customers. These environments need to be easily built, maintained, and kept in sync. In 2016, 2nd Watch performed an analysis of the amount of AWS billing data we had collected and the number of payer accounts we had processed over the course of the previous year. Our analysis showed that these measurements had more than tripled from 2015, and projections showed that we would continue to grow at the same rapid pace, with AWS usage and client onboarding increasing daily. Knowing that the storage of data is critical for many systems, our Product Development team undertook an evaluation of the database architecture used to house our company’s billing data—a single instance running the Web edition of SQL Server with the maximum number of EBS volumes attached.

During the evaluation, areas such as performance, scaling, availability, maintenance, and cost were considered and deemed most important for future success. The evaluation revealed that our current billing database architecture could not meet the criteria laid out to keep pace with growth. We considered increasing the VM to the maximum size in its instance family or potentially upgrading to MS SQL Enterprise; in either scenario, the cost of the MS SQL instance doubled. The only option for scaling without substantially increasing our cost was to scale vertically; however, doing so would yield diminishing performance gains. Maintenance of the database had become a full-time job that was increasingly difficult to manage.

Ultimately, we chose the cloud-native solution, Amazon Aurora, for its scalability, low risk, and ease of use. Amazon Aurora is a MySQL-compatible relational database that provides speed and reliability at a lower cost. It offers greater than 99.99% availability and can store up to 64TB of data. Aurora is self-healing and fully managed, which, along with the other key features, made Amazon Aurora an easy choice as we continue to meet the AWS billing usage demands of our customers and prepare for future growth.

The conversion from MS SQL to Amazon Aurora was successfully completed in early 2017 and, with the benefits and features that Amazon Aurora offers, many gains were made in multiple areas. Product Development can now reduce the complexity of database schemas because of the way Aurora stores data. For example, a database with one hundred tables and hundreds of stored procedures was reduced to one table with 10 stored procedures. Gains were made in performance as well. The billing system produces thousands of queries per minute, and Amazon Aurora handles the load with the ability to scale to accommodate the increasing number of queries. Maintenance of the Amazon Aurora system is now largely handled for us. Tasks such as database backups are automated without the complicated task of managing disks. Additionally, data is copied across six replicas in three Availability Zones, which ensures availability and durability.

With Amazon Aurora, every environment is now easily built and set up using Terraform. All infrastructure is set up automatically—from the web tier to the database tier—with Amazon CloudWatch Logs to alert the company when issues occur. Data can easily be imported using automated processes, and even anonymized if there is sensitive data or the environment is used for customer demos. With the conversion of our database architecture from a single MS SQL Server instance to Amazon Aurora, our Product Development team can now focus on accelerating development instead of maintaining its data storage system.
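For readers curious what “built and set up using Terraform” can look like for the database tier, here is a minimal, illustrative sketch in v.0.9-era syntax. It is not 2nd Watch’s actual configuration; the identifiers, password variable, replica count, and instance size are placeholders:

Illustrative Aurora Cluster in Terraform
variable "db_password" {}

resource "aws_rds_cluster" "billing" {
  cluster_identifier      = "billing-aurora"      # placeholder identifier
  database_name           = "billing"             # placeholder database name
  master_username         = "admin"
  master_password         = "${var.db_password}"
  backup_retention_period = 7
  skip_final_snapshot     = true
}

resource "aws_rds_cluster_instance" "billing" {
  count              = 2                                 # one writer plus one reader replica
  identifier         = "billing-aurora-${count.index}"
  cluster_identifier = "${aws_rds_cluster.billing.id}"
  instance_class     = "db.r3.large"                     # placeholder instance size
}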






Budgets: The Simple Way to Encourage Cloud Cost Accountability

Controlling costs is one of the greatest challenges facing IT and Finance managers today. The cloud, by nature, makes it easy to spin up new environments and resources that can cost thousands of dollars each month. And, while there are many ways to help control costs, one of the simplest and most effective methods is to set and manage cloud spend against a budget. While most enterprise budgets are set at the business unit or department level, for cloud spend, mapping that budget down to the workload can establish strong accountability within the organization.

One popular method that workload owners use to manage spend is to track month-over-month cost variances. However, if costs do not drastically increase from one month to the next, this method does very little to control spend; it is usually only when a department is faced with budget issues that workload owners work diligently to reduce costs. When budgets are set for each workload, owners become more aware of how their cloud spend impacts the company financials and tend to manage their costs more carefully.

In this post, we provide four easy steps to help you manage workload spend-to-budget effectively.

Step 1: Group Your Cloud Resources by Workload and Environment

Use a financial management tool such as 2nd Watch CMP Finance Manager to group your cloud resources by workload and its environment (Test, Dev, Prod).  This can easily be accomplished by creating a standard where each workload/environment has its own cloud account, or by using tags to identify the resources associated with each workload. If using tags, use a tag for the workload name such as workload_name: and a tag for the environment such as environment:. More tagging best practices can be found here.
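If you are launching resources with Terraform, the tags can be applied right in the resource definition. A minimal, illustrative sketch (the AMI ID and tag values are placeholders):

Illustrative Workload/Environment Tagging in Terraform
resource "aws_instance" "web" {
  ami           = "ami-12345678"   # placeholder AMI ID
  instance_type = "t2.micro"

  # Tags used to group this resource by workload and environment
  tags {
    workload_name = "projectX"
    environment   = "prod"
  }
}

CMP Finance Manager (or any cost tool that reads tags) can then group costs on those two keys.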

Step 2: Group Your Workloads and Environments by Business Group

Once your resources are grouped by workload/environment, CMP Finance Manager will allow you to organize your workload/environments into business groups. For example:

a. Business Group 1
   i. Workload A
      1. Workload A Dev
      2. Workload A Test
      3. Workload A Prod
   ii. Workload B
      1. Workload B Dev
      2. Workload B Test
      3. Workload B Prod
b. Business Group 2
   i. Workload C
      1. Workload C Dev
      2. Workload C Test
      3. Workload C Prod
   ii. Workload D
      1. Workload D Dev
      2. Workload D Test
      3. Workload D Prod

Step 3: Set Budgets

At this point, you are ready to set up budgets for each of your workloads (each workload/environment as well as the total workload, since they may have different owners). We suggest you set annual budgets aligned to your fiscal year and have the tool you use programmatically recalculate the budget at the end of each month based on the amount remaining in your annual budget.
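The steps above assume a financial management tool like CMP Finance Manager is doing the budget tracking. As a rough illustration of the same idea with native AWS tooling, newer versions of the Terraform AWS provider include an aws_budgets_budget resource; in the sketch below the budget name, amount, start date, and subscriber address are all hypothetical:

Illustrative Workload Budget in Terraform
resource "aws_budgets_budget" "workload_a_prod" {
  name              = "workload-a-prod-monthly"   # hypothetical budget name
  budget_type       = "COST"
  limit_amount      = "1000.0"                    # hypothetical monthly limit
  limit_unit        = "USD"
  time_unit         = "MONTHLY"
  time_period_start = "2017-07-01_00:00"          # hypothetical start of the budget period

  # Email the workload owner when the month-end forecast exceeds the month budget
  # (this corresponds to the first alert recommended in Step 4 below)
  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 100
    threshold_type             = "PERCENTAGE"
    notification_type          = "FORECASTED"
    subscriber_email_addresses = ["owner@example.com"]   # hypothetical owner address
  }
}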

Step 4: Create Alerts

The final step is to create alerts to notify owners and yourself when workloads either have exceeded or are on track to exceed the current month or annual budget amount.  Here are some budget notifications we recommend:

  1. Month-end (ME) forecast exceeds month budget
  2. Month-to-date (MTD) spend exceeds MTD budget
  3. MTD spend exceeds month budget
  4. Daily spend exceeds daily budget
  5. Year-end (YE) forecast exceeds year budget
  6. Year-to-date (YTD) spend exceeds YE budget

Once alerts are set, owners can make timely decisions regarding spend.  The owner can now proactively shift to spot instances, purchase reserved instances, change instance sizes, park the environment when not in use, or even refactor the application to take advantage of cloud native services like AWS Lambda.

Our experience has shown that enterprises that diligently set up and manage spend-to-budget by workload have more control of their costs and ultimately, spend less on their cloud environments without sacrificing user experience.


–Timothy Hill, Senior Product Manager, 2nd Watch


Managing AWS Billing

Without a doubt, AWS has fundamentally changed how modern enterprises deploy IT infrastructure. Their services are flexible, cost effective, scalable, secure, and reliable. And while moving from on-premises data centers to the cloud is, in most cases, the smart move, once you are there, managing your costs becomes much more complex.

On-premises costs are straightforward: enterprises purchase servers and amortize their costs over the expected life. Shared services such as internet access, racks, power, and cooling are proportionally allocated to the cost of each server. AWS, on the other hand, invoices each usage type separately. For example, if you are running a basic EC2 instance, you will be charged not only for the EC2 box usage but also for the data transfer, EBS storage, and associated snapshots. You could end up with as many as 13 line items of cost for a single EC2 instance.

Example: Pricing line items for a single c4.xlarge Linux virtual machine running in the US East Region.

When examining the composition of various workload types, the number of line items to manage will vary. A traditional VM-based workload may have 50 cost line items for every $1,000 of spend, while an agile, cloud-native workload may have as many as 500 per $1,000, and a dynamic workload leveraging spot instances may have upwards of 1,200 per $1,000. This “parts bin” approach to pricing makes the job of cost accounting challenging.

To address this complexity and enable accurate cost accounting of your cloud costs, we recommend creating a business-relevant financial tagging schema that organizes your resources and associated cost line items according to your specific financial accounting structure.

Recommended financial management tags typically include identifiers for the business unit, application or workload, and environment.

The integrity of your tagging data is extremely important to the quality of the information it provides, and it depends directly on the rigor with which you adopt a systematic, disciplined approach to AWS tagging.

Financial Management Tagging – Best Practices

  • Create a framework or standard for your enterprise that outlines required tag names, tag formatting rules, and governance of tags.
  • Tags should be enforced and automated at startup of the resource via CloudFormation templates or other infrastructure-as-code tools, such as Terraform, to ensure cost accounting details are captured from time of launch.
    • NOTE: Tags are point-in-time based. If a resource is launched without being tagged and then tagged sometime in the future, all hours the resource ran prior to being tagged will not be included in tag reports in the AWS console.
  • Manually creating tags and associated values is strongly discouraged, as it leads to mistagged and untagged resources and inaccurate cost accounting.
  • Choose all-uppercase or all-lowercase keys and values to avoid discrepancies with capitalization.
    • NOTE: “Production” and “production” are considered two different tag names or values.
  • Monitor resources with AWS Config Rules and alert on newly created resources that are not tagged.
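As a rough illustration of that last point, AWS Config ships a managed REQUIRED_TAGS rule that flags resources missing specified tag keys, and the Terraform AWS provider exposes it through the aws_config_config_rule resource. A minimal sketch, assuming an AWS Config recorder is already enabled in the account and using example tag keys:

Illustrative Required-Tags Config Rule in Terraform
resource "aws_config_config_rule" "required_financial_tags" {
  name = "required-financial-tags"   # hypothetical rule name

  # AWS-managed rule that marks resources missing the listed tag keys as noncompliant
  source {
    owner             = "AWS"
    source_identifier = "REQUIRED_TAGS"
  }

  # Example tag keys; substitute the keys from your own tagging schema
  input_parameters = <<PARAMS
{
  "tag1Key": "workload_name",
  "tag2Key": "environment",
  "tag3Key": "business_unit"
}
PARAMS
}

Noncompliant resources can then feed whatever alerting mechanism you already use.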

Once your tagging schema is created, automation is in place to tag resources during startup, and alerts are set up to ensure tagging is managed, you can accurately view, track, and report your cost and usage across any of your tagging dimensions.

Financial Management Reporting – Best Practices

  • Using your tagging schema, group your resources by workload.
  • Apply Reserved Instance discounts to the workloads you intended them for.
    • NOTE: 2nd Watch’s CMP Finance Manager tool converts reserved instances into resources so that you can add them to the workload they were intended for.
  • Organize your groups to match your specific multi-level financial reporting structure.
  • Manage shared resources
    • Create groups for shared resources. If you have resources that are shared across multiple workloads, such as a database used by multiple applications or a virtual machine with more than one application running on it, create groups to capture these costs and allocate them proportionally to the applications using them.
  • Manage un-taggable resources
    • Create a group for un-taggable resources. Some AWS resources are not taggable and should be grouped together and their associated costs proportionally allocated to all applications.
  • Manage spend to budget
    • Create budgets and budget alerts for each group to ensure you stay in budget throughout the year.
    • Key alerts
      • Forecasted month end cost exceeds alert threshold
      • MTD cost is over alert threshold
      • Forecasted year end cost exceeds alert threshold
      • YTD cost is over alert threshold
    • Sign up to receive monthly cost and usage reports for integration into your internal cost accounting system.
      • Cost by application, environment, business unit etc.


Even though AWS’ “parts bin” approach to pricing is complicated, following these guidelines will help ensure accurate cost accounting of your cloud spend.


–Timothy Hill, Senior Product Manager, 2nd Watch




How to Upgrade Your Chances of Passing any AWS Certification Exam

My background.

I’m sure you’ve read a thousand blog articles on passing the exams, but most of them focus on WHAT to study.  This is going to be a little different.  I want to focus on the strategies for taking the exam and how to maximize your chances of passing.  How do I know about this?  Because in my past life I was a mathematics teacher in college, middle school, and high school.  I’ve written tests for a living and helped hundreds of students take tests and strategize to maximize their scores.  Further, I’ve taken and passed all 5 (non-beta) AWS exams in addition to the VCP4 (VMware 4 Certification) from my “on-prem” days.  Before I get started, please note that all examples below are made up.

The tests.

A good place to start is understanding the AWS testing strategy.  These are well-designed tests.  Yes, they have some questions that could be worded better, and their practice exams leave much to be desired, but those are small nitpicks.  Why do I say they’re well designed?  The biggest reason is that they utilize long-form, scenario-based questions.  I have yet to encounter a question like “What is the max networking throughput of a t2.micro?”  These types of questions are quite typical in other certification exams.  The VCP exam was a litany of “How many VMs per host?” type questions.  Not only is that type of information generally useless, it’s also usually invalid by the time the test is live.  You’re left memorizing old maximums that don’t apply.  Long-form, scenario-based questions get past the rote memorization of maximums and get you to a real judgement of your understanding of the platform. Further, the questions are interesting and engaging.  In many cases, I find the questions sounding like something I’ve experienced with a client.  Lastly, scenario questions make it harder to cheat.  Memorizing 500 VMs per host is easy.  Memorizing an entire story about a three-tier website and the solution with reduced redundancy S3 is hard.  The harder it is for people to cheat, the more authenticity and respect the certification will retain.

Breaking down the questions.

Understanding the type of questions AWS uses in their certification exams helps us to understand how to prepare for the exams.  The questions generally come in two major and two minor parts:

The Question (which I break into two parts)

1. The scenario.  This is where the story is told regarding the setup of the problem.

2. “The kicker”.  This is the crux of the problem and the key to the right answer.

The Answers (which I also break into two parts)

3. The answer instructions.  This tells you how many of the given answers to choose.

4. The answers.  The potential answers to the questions.  Sometimes more than one is right.

The scenario is the first part of every question.  This usually involves the setup for a problem: a company recently had an outage and wants to improve their resiliency, they want to be more agile, they want to redeploy their app, etc.  These can be a bit wordy, so it’s best to skim this portion of the question.  It’s great to have a good idea of the problem, but this isn’t generally the most critical piece of information.  Skimming through the scenario, you’ll finally come to the kicker.

The kicker is the key piece of the question that defines the problem.  Something like “if architecting for highest performance” or “in order to save the most money” etc.  This line defines the most important aspect of the answers you need to look for.  Next comes the answers.

The answer instructions tell you how many answers to choose out of those available.  This would be VERY important except that the Webassessor software literally won’t let you mess this up.  Just keep in mind that you cannot move on to the next question if you have too many or too few answers selected.

Finally, the actual choices for answers.  Picking the right answers is the whole point, but also important is the formatting of the answers.  One of the things you’ll notice with long form answers is that they commonly follow a pattern (like 1A, 1B, 2A, 2B).  For instance, the answers might follow a pattern like this:

A: Use EBS for saving logs and S3 for long term data access

B: Use EBS for saving logs and Redshift for long term data access

C: Use CloudWatch Logs for saving logs and S3 for long term data access

D: Use CloudWatch Logs for saving logs and Redshift for long term data access

Now, even if you had no idea what the scenario is, if the kicker is “How can the company architect their logging solution for lowest cost?” you can simply choose the answer that gets you the best cost savings (C).  And many of the questions are like this.  You can determine the right answer by looking at the pattern of the answers and comparing that to the kicker.  Further, you can generally eliminate two answers right off the bat.  Note that while this is a very common pattern, there are other patterns the answer might follow.  What’s important is to notice the pattern and use it to eliminate wrong answers.

The power of elimination.

Consider this.  Let’s say that you’re taking a test with 80 questions and the passing score is 70% (AWS does not publish the passing score; this is just a hypothetical example).  In this example, you’d need to know the answers to 56 questions to pass.  Using the process of elimination, you can reduce the number of questions you know are right to 40 and STILL likely pass.
(The math: Subtract 40 correct questions from the original 80 and you’re left with 40 unknown questions.  If you can eliminate 2 of the 4 answers on these questions, you can coin flip the remaining 2 answers.  Statistically, you have an 87% chance at guessing better than 16 out of 40 questions with two answers.  http://www.wolframalpha.com/input/?i=prob+x+%3E+16+if+x+is+binomial+distribution+n%3D40,+p%3D0.5&x=0&y=0)
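Spelled out, with X being the number of correct guesses out of those 40 coin flips (and the 80-question, 70% passing score still hypothetical), the figure comes from the binomial distribution:

P(X > 16) = \sum_{k=17}^{40} \binom{40}{k} \left(\tfrac{1}{2}\right)^{40} \approx 0.87, \qquad X \sim \mathrm{Binomial}\left(40, \tfrac{1}{2}\right)

In other words, coin-flipping 40 two-answer questions gives you roughly an 87% chance of picking up more than the 16 additional correct answers needed to reach 56.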

“But Coin”, you say, “how can I be guaranteed to eliminate 2 of every 4 answers”?  While you likely won’t always be able to eliminate the wrong answers, the trick is to study the kickers.  What I mean is that in understanding each AWS service, you need to be able to articulate “How can I make it more X?”  More cost effective, more performant, more highly available, more resilient, etc.  Each certificate exam will have its set of kickers.  So, when you’re studying for each of the exams, pay special attention to how the service can be customized to meet a goal and which services meet which goals.

S3 for example:

Cheaper: lifecycle policies, reduced redundancy, IA, Glacier

Resilient: versioning, s3 replication, regional replication

Performant: CloudFront (for web), regional S3 endpoints, hashing key names

Secure: VPC endpoints, encryption at rest

Hopefully you can see how a kicker that asked “How can the company make S3 more performant?” would allow you to eliminate one or more answers with “Glacier” as the strategy.  Further, having a deep understanding of these concepts for AWS services will allow you to spot answers that don’t exist.  If a company wants to “design a more affordable BI solution,” which is better:

A. Implement reserved instances for Amazon EMR

B. Utilize spot instances for Redshift Nodes

The answer is A.  You cannot (currently) utilize spot instances for Redshift.  Understanding how each service can be customized will allow you to spot these false answers and eliminate them quickly.

How to study and the practice exam.

You’ve probably seen the same advice everywhere.  Go through a video course with acloud.guru or LinuxAcademy.com and read the whitepapers.  This is certainly effective, but while you’re consuming that information, keep in the back of your mind that you need to understand the kickers for each service as it relates to the exam.  Once you have gone through the study materials, I highly recommend taking the practice exam.  Don’t worry, you’ll likely fail it.  You’re not taking it to pass the test, you’re taking it to get a feel for the formatting and pacing.  These are very important to your success because if you run out of time in the real exam, your chances of passing sink fast.  Unfortunately, the AWS practice exam is not a good representation of how you’ll perform on the actual exam.  Your main goal is to feel comfortable with your timing, work on identifying the kicker, and eliminate wrong answers.  Think of this like interviewing for the job you don’t want just for practice.  If you score well, that’s even better.

Taking the test, and some last tricks.

Finally, here are several good recommendations around the logistics of taking the exam.

1.     You should take the exam during YOUR peak time.  That’s not the same for everyone.  Schedule it for the time of day you’re most alert.  If you’re not a morning person, don’t take the test at 8am.

2.     If you come to a question and you’re lost or confused, immediately mark it for review and skip it.  If you have time at the end, you can review it and try to eliminate some answers and make your best guess.

3.     If you’ve picked an answer for a question, but you’re not sure, mark it for review and move on.  The prevailing wisdom that “your first guess is correct” is not supported by any studies (that I can find), but running out of time because you labored over a question is a losing strategy.  If there’s time at the end, go back and review your answers.

4.     Lastly, remember this is an AWS certification exam, so answers that suggest you use non-AWS solutions should be viewed very skeptically.


Hopefully this will provide you with an alternate view of how to effectively study for and take the AWS exams.  As I mentioned before, these tests are very well done and honestly a lot of fun for me.  Try to enjoy it and good luck.