
How to Upgrade Your Chances of Passing any AWS Certification Exam

My background.

I’m sure you’ve read a thousand blog articles on passing the exams, but most of them focus on WHAT to study.  This is going to be a little different.  I want to focus on strategies for taking the exam and how to maximize your chances of passing.  How do I know about this?  Because in my past life I was a mathematics teacher in college, middle school, and high school.  I’ve written tests for a living and helped hundreds of students take tests and strategize for maximizing scores.  Further, I’ve taken and passed all 5 (non-beta) AWS exams in addition to the VCP4 (VMware 4 certification) from my “on-prem” days.  Before I get started, please note that all examples below are made up.

The tests.

A good place to start is understanding the AWS testing strategy.  These are well-designed tests.  Yes, they have some questions that could be worded better, and their practice exams leave much to be desired, but those are small nitpicks.  Why do I say they’re well designed?  The biggest reason is that they utilize long-form, scenario-based questions.  I have yet to encounter a question like “What is the max networking throughput of a t2.micro?”  These types of questions are quite typical in other certification exams.  The VCP exam was a litany of “How many VMs per host?” type questions.  Not only is that type of information generally useless, it’s also usually invalid by the time the test is live.  You’re left memorizing old maximums that don’t apply.  Long-form, scenario-based questions get past the rote memorization of maximums and test real judgment about your understanding of the platform. Further, the questions are interesting and engaging.  In many cases, I find the questions sounding like something I’ve experienced with a client.  Lastly, scenario questions make it harder to cheat.  Memorizing 500 VMs per host is easy.  Memorizing an entire story about a three-tier website and the solution with reduced redundancy S3 is hard.  The harder it is for people to cheat, the more authenticity and respect the certification will retain.

Breaking down the questions.

Understanding the type of questions AWS uses in their certification exams helps us to understand how to prepare for the exams.  The questions generally come in two major and two minor parts:

The Question (which I break into two parts)

1. The scenario.  This is where the story is told regarding the setup of the problem.

2. “The kicker”.  This is the crux of the problem and the key to the right answer.

The Answers (which I also break into two parts)

3. The answer instructions.  This tells you how many of the given answers to choose.

4. The answers.  The potential answers to the questions.  Sometimes more than one is right.

The scenario is the first part of every question.  This usually involves the setup for a problem.  A company recently had an outage and wants to improve their resiliency, they want to be more agile, they want to redeploy their app, etc.  These can be a bit wordy, so it’s best to skim this portion of the question.  It’s great to have a good idea of the problem, but this isn’t generally the most critical piece of information.  Skimming through the scenario, you’ll finally come to the kicker.

The kicker is the key piece of the question that defines the problem.  Something like “if architecting for highest performance” or “in order to save the most money” etc.  This line defines the most important aspect of the answers you need to look for.  Next comes the answers.

The answer instructions tell you how many of the available answers to choose.  This would be VERY important except that the Webassessor software literally won’t let you mess this up.  Just keep in mind that you cannot move on to the next question if you have too many or too few answers selected.

Finally, the actual choices for answers.  Picking the right answers is the whole point, but the formatting of the answers is also important.  One of the things you’ll notice with long-form answers is that they commonly follow a pattern (like 1A, 1B, 2A, 2B).  For instance, the answers might follow a pattern like this:

A: Use EBS for saving logs and S3 for long term data access

B: Use EBS for saving logs and Redshift for long term data access

C: Use CloudWatch Logs for saving logs and S3 for long term data access

D: Use CloudWatch Logs for saving logs and Redshift for long term data access

Now, even if you have no idea what the scenario is, if the kicker is “How can the company architect their logging solution for lowest cost?” you can simply choose the answer that gets you the best cost savings (C).  And many of the questions are like this.  You can determine the right answer by looking at the pattern of the answers and comparing it to the kicker.  Further, you can generally eliminate two answers right off the bat.  Note that while this is a very common pattern, there are other patterns the answers might follow.  What’s important is to notice the pattern and use it to eliminate wrong answers.

The power of elimination.

Consider this.  Let’s say that you’re taking a test with 80 questions and the passing score is 70% (AWS does not publish the passing score; this is just a hypothetical example).  In this example, you’d need to know the answers to 56 questions to pass.  Using the process of elimination, you can know the answers to only 40 questions and STILL likely pass.
(The math: Subtract the 40 questions you know from the original 80 and you’re left with 40 unknown questions.  If you can eliminate 2 of the 4 answers on each of these questions, you can coin-flip the remaining 2 answers.  Statistically, you have an 87% chance of guessing more than 16 of those 40 questions correctly.  http://www.wolframalpha.com/input/?i=prob+x+%3E+16+if+x+is+binomial+distribution+n%3D40,+p%3D0.5&x=0&y=0)
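If you’d rather check that figure yourself than trust the WolframAlpha link, here’s a quick sketch in Python (standard library only) that computes the same binomial tail probability; the 40/16/0.5 numbers come straight from the hypothetical example above.

    from math import comb

    # P(X > 16) where X ~ Binomial(n=40, p=0.5): the chance of getting
    # more than 16 of the 40 coin-flipped questions right.
    n, p, needed = 40, 0.5, 16
    prob = sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(needed + 1, n + 1))
    print(f"Chance the guesses put you over the line: {prob:.1%}")
    # prints ~86.6%, i.e. the ~87% quoted above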

“But Coin,” you say, “how can I be guaranteed to eliminate 2 of every 4 answers?”  While you likely won’t always be able to eliminate the wrong answers, the trick is to study the kickers.  What I mean is that in understanding each AWS service, you need to be able to articulate “How can I make it more X?”  More cost effective, more performant, more highly available, more resilient, etc.  Each certification exam will have its own set of kickers.  So, when you’re studying for each of the exams, pay special attention to how each service can be customized to meet a goal and which services meet which goals.

S3 for example:

  • Cheaper: lifecycle policies, reduced redundancy, IA, Glacier
  • Resilient: versioning, S3 replication, regional replication
  • Performant: CloudFront (for web), regional S3 endpoints, hashing key names
  • Secure: VPC endpoints, encryption at rest

Hopefully you can see how a kicker asking “How can the company make S3 more performant?” would allow you to eliminate one or more answers with “Glacier” as the strategy.  Further, having a deep understanding of these concepts for AWS services will allow you to spot answers that don’t exist.  If a company wants to “design for a more affordable BI solution,” which is better:

A. Implement reserved instances for Amazon EMR

B. Utilize spot instances for Redshift Nodes

The answer is A.  You cannot (currently) utilize spot instances for Redshift.  Understanding how each service can be customized will allow you to spot these false answers and eliminate them quickly.

How to study and the practice exam.

You’ve probably seen the same advice everywhere.  Go through a video series with acloud.guru or LinuxAcademy.com and read the whitepapers.  This is certainly effective, but while you’re consuming that information, keep in the back of your mind that you need to understand the kickers for each service as it relates to the exam.  Once you have gone through the study materials, I highly recommend taking the practice exam.  Don’t worry, you’ll likely fail it.  You’re not taking it to pass the test; you’re taking it to get a feel for the formatting and pacing.  These are very important to your success because if you run out of time in the real exam, your chances of passing sink fast.  Unfortunately, the AWS practice exam is not a good representation of how you’ll perform on the actual exam.  Your main goals are to feel comfortable with your time and to work on identifying the kicker and eliminating wrong answers.  Think of this like interviewing for a job you don’t want, just for practice.  If you score well, that’s even better.

Taking the test and last tricks.

Finally, here are several good recommendations around the logistics of taking the exam.

1. You should take the exam during YOUR peak time.  That’s not the same for everyone.  Schedule it for the time of day you’re most alert.  If you’re not a morning person, don’t take the test at 8am.

2. If you come to a question and you’re lost or confused, immediately mark it for review and skip it.  If you have time at the end, you can review it, try to eliminate some answers, and make your best guess.

3. If you’ve picked an answer for a question but you’re not sure, mark it for review and move on.  The prevailing wisdom that “your first guess is correct” is not supported by any studies (that I can find), but running out of time because you labored over a question is a losing strategy.  If there’s time at the end, go back and review your answers.

4. Lastly, remember this is an AWS certification exam, so answers that suggest you use non-AWS solutions should be viewed very skeptically.

Conclusion.

Hopefully this has provided you with an alternate view of how to effectively study for and take the AWS exams.  As I mentioned before, these tests are very well done and, honestly, a lot of fun for me.  Try to enjoy it, and good luck.


Cost Accounting for Amazon WorkSpaces

Who would have thought, back in 2014 when AWS launched Amazon WorkSpaces, that it would have such an impact on the virtual desktop market?  Amazon WorkSpaces—AWS’ fully managed, secure desktop computing service—allows enterprises to easily provision cloud-based virtual desktops and provide users access to the documents, applications, and resources they need from any supported device. In three short years, Amazon WorkSpaces has made great strides in reducing the costs related to VDI deployment, support, and software packaging while improving service levels and deployment time for new applications. Amazon WorkSpaces provides the flexibility to work securely from anywhere, anytime, and on any device without the cost and complexity of traditional VDI infrastructure.

However, enterprises have faced a few challenges when deploying Amazon WorkSpaces.  One of the greatest challenges with wholesale deployment of Amazon WorkSpaces has been how to allocate the costs associated with thousands of instances to the various departments using each resource.  In 2016, AWS enabled users to tag each WorkSpace with up to 50 tags.  While this is a step in the right direction, tagging is not included in the launch process. Instead, users have to remember to tag the instance after it is launched. This is where the process tends to break down, leaving thousands of dollars of cloud spend either un-allocated or incorrectly allocated.

To address this drawback, it is important to create and implement two processes. The first step is pretty basic: Develop a process and train all team members responsible for launching new WorkSpaces to tag each workspace after it is launched.  The second step is to set up automation to efficiently audit and provide notifications when resources (specifically Amazon WorkSpaces) are launched without a particular tag or set of tags.  Unfortunately, with Amazon WorkSpaces you aren’t able to use the AWS Config “required-tags” rule to enforce your process policy as Config only supports a limited set of AWS resource types. (NOTE: You can check out the AWS Config Developer Guide for more on using it to enforce tag requirements on Config supported resources.) Instead, you can roll your own tag enforcement solution using AWS Lambda and CloudTrail.
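Before looking at the audit piece, here is what that first, manual step can look like in code. This is a minimal boto3 sketch for tagging an already-launched WorkSpace; the WorkSpace ID and tag values are made-up placeholders.

    import boto3

    workspaces = boto3.client("workspaces", region_name="us-east-1")

    # Tag an already-launched WorkSpace so its cost can be allocated.
    # The ResourceId and tag values below are hypothetical examples.
    workspaces.create_tags(
        ResourceId="ws-abc1234de",  # e.g. from describe_workspaces()
        Tags=[
            {"Key": "Department", "Value": "Finance"},
            {"Key": "CostCenter", "Value": "CC-1047"},
        ],
    )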

The audit process is fairly simple. When you activate AWS CloudTrail logging, AWS delivers all API calls as JSON log files to an S3 bucket.  You can then set up a trigger on that bucket to invoke an AWS Lambda function that scans the logs for specific events, such as the Amazon WorkSpaces “CreateWorkspaces” API call. If it finds an event, it can publish a message to an SNS topic notifying you that the resource may not have the appropriate tags.  You can even set the message up to include the creator information that CloudTrail records for every API call. This way, if you need to know who launched the instance in order to determine how to tag it, you will have that information included.
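Here is a minimal sketch of such a Lambda function, assuming CloudTrail delivers its usual gzipped JSON log files to the bucket that triggers the function, and that the SNS topic ARN arrives via a (hypothetical) environment variable. For simplicity, this version notifies on every WorkSpace launch and leaves the actual tag check to the recipient.

    import gzip
    import json
    import os

    import boto3

    s3 = boto3.client("s3")
    sns = boto3.client("sns")

    # Hypothetical environment variable holding the SNS topic ARN.
    TOPIC_ARN = os.environ["TAG_ALERT_TOPIC_ARN"]

    def handler(event, context):
        # Invoked by the S3 trigger each time CloudTrail drops a log file.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            trail = json.loads(gzip.decompress(body))

            for entry in trail.get("Records", []):
                if entry.get("eventName") != "CreateWorkspaces":
                    continue
                # Alert on the launch, including who made the API call,
                # so the new WorkSpace can be tagged promptly.
                sns.publish(
                    TopicArn=TOPIC_ARN,
                    Subject="New WorkSpace launched - verify tags",
                    Message=json.dumps({
                        "who": entry.get("userIdentity", {}).get("arn"),
                        "when": entry.get("eventTime"),
                        "request": entry.get("requestParameters"),
                    }, default=str),
                )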

Even when you have the tag in place, there is still the issue of how to allocate costs incurred before the resource was tagged.  Because AWS tags are point-in-time, only costs incurred after the tag is in place will be included in any AWS tag report. 2nd Watch’s cloud financial management tool, CMP|FM, is a powerful resource that can provide accurate cost accounting and deep financial insight into Amazon WorkSpaces usage by applying monthly boundaries to all tags.  In other words, any tag applied during the middle of the month will be applied to the entire month’s usage, appropriately accounting for all of your costs associated with Amazon WorkSpaces without the need to manually allocate them to the correct department.

If you are looking to deploy Amazon WorkSpaces across your enterprise, it is important to ensure that you have the systems in place for proper cost accounting.  This includes implementing documented processes for tagging during launch and automation to identify and manage untagged instances, and leveraging powerful tools like 2nd Watch CMP|FM for all your cost allocation needs to ensure accurate cost accounting.

— Timothy Hill, Senior Product Manager, 2nd Watch


How to Choose the Right Hyperscale Managed Service Provider (MSP)

One of the challenges that many businesses struggle to overcome is how to keep up with the massive (and ongoing) changes in technology and implement best practices for managing them.  The Public Cloud—in particular, Hyperscale Cloud providers like AWS—has ushered in a new era of IT technology. This technology changes rapidly and is designed to provide businesses with the building blocks that allow IT organizations to focus on innovation and growth, rather than mess with things that don’t differentiate their business.

A Hyperscale Managed Services Provider (MSP) can help address a very important gap for many businesses that struggle to:

  • Keep up with the frenetic pace of change in Public Cloud
  • Define and use best practices to achieve superior results
  • Manage their infrastructure the most efficient way possible


In most cases, Hyperscale MSPs have the deep expertise, technology, and automated capabilities to deliver high-quality managed services on a hyperscale platform. And because Hyperscale MSPs are solely focused on delivering capabilities on the cloud IaaS and PaaS platforms that today’s enterprises are using, they are well versed in the best practices and standards needed to achieve the right results for their clients.

So, how do you go about selecting the right MSP?  The answer to this question is critical because we believe choosing the right MSP is one of the most important decisions you will make when consuming the public cloud.  It is also important to note that some of the qualifications to look for when selecting a Hyperscale MSP for your business needs are obvious, while others are more elusive.  I’ve included a few suggestions below to keep in mind when evaluating and selecting the right Hyperscale MSP.

Expertise on the Platform of Your Choice

First and foremost, no two public cloud providers are the same.  Each provider implements MSP strategies differently—from infrastructure and redundancy to automation and billing concepts.  Secondly, it isn’t enough for a provider to tell you they have a few applications running on the platform.  When looking to entrust someone with your most valuable assets, expertise is key!  An important KPI for measuring the capabilities of an MSP that many businesses overlook is the provider’s depth and breadth of experience. A qualified Hyperscale MSP will have the right certifications, accreditations, and certified engineer-to-customer ratios.  You may feel good about signing with a large provider because they claim a higher number of certified engineers than the smaller firms, until…you realize their certified engineer-to-customer ratio is out of whack.  Having 200 certified engineers means nothing if you have 5,000+ customers.  At 2nd Watch, we have more certified engineers than we do customers, and we like it that way.

The Focus is on Customer Value

This is an obvious recommendation, but it does have some nuances.  Many MSPs will simply take the “your mess for less” approach to managing your infrastructure.  Our customers tell us that one of the reasons they chose 2nd Watch was our focus on the things that matter to them.  Many MSPs have the technical capabilities to manage Cloud infrastructure, but not all are able to focus on how an enterprise wants to use the Public Cloud.  MSPs with the ability to understand their clients’ needs and goals tailor their approach to work for the enterprise, rather than forcing clients to snap to some preconceived notion of how these things should work or function.  Find an MSP that is willing to make the Public Cloud work the way you want it to, and your overall experience, and the outcome, will be game changing.

Optimize, Optimize, Optimize

Moving to the Public Cloud is just the first step in the journey to realizing business value and transforming IT.  The Cloud is dynamic in nature, so it is important that you don’t rest on a migration alone once you are using it.  New instance types, new services, or simply optimizing what you are running today are great ways to ensure your infrastructure is running at its best.  It is important to make sure your MSP has a strong, ongoing story about optimization and how they will provide it.  At 2nd Watch, we break optimization into 3 categories:  Financial Optimization, Technical Optimization, and Operations Optimization.  It is a good idea to ask your MSP how they handle these three facets of optimization and at what cadence.  Keep in mind that some providers’ pricing structures can act as a disincentive for optimization.  For example, if your MSP’s billing is based on a percentage of your total cloud spend, and they reduce that bill by 30% through optimization efforts, they are now getting paid proportionately less and are likely not motivated to do this type of optimization on a regular basis, as it hurts their revenue.  Alternatively, we have also seen MSPs charge extra for these types of services, so the key is to ask whether optimization is included and get details about the services that would be considered an extra charge.

Full Service

The final qualification to look for in a Hyperscale MSP is whether they are a full-service provider.  Too often, pure-play MSPs are not able to provide a full-service offering under their own umbrella.  The most common reason is that they lack the professional services to assess and migrate workloads or the cloud architects to build out new functionality.

Our enterprise clients tell us that one of their major frustrations is having to work with multiple vendors on a project.  With multiple vendors, it is difficult to keep track of who is accountable for what.  Why would the vendor doing the migration be motivated to make sure the application is optimized for support if they aren’t providing the support?  I have heard horror stories of businesses trying to move to the cloud and becoming frustrated that multiple vendors are involved on the same workload, because the vendors blame each other for missing deadlines or failing to deliver key milestones or technical content.  Your business will be better served by hiring an MSP who can run the full cloud-migration process—from workload assessment and migration to managing and optimizing your cloud infrastructure on an ongoing basis.

In addition to the tips I have listed above, 2nd Watch recommends utilizing Gartner’s Magic Quadrant to help evaluate the various public cloud managed service providers available to you. Gartner positioned 2nd Watch in the Leaders quadrant of the 2017 Gartner Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide for our completeness of vision and ability to execute.  You can download and read the full report here.


-Kris Bliesner, CTO


Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document.


2nd Watch Named "Leader" in Gartner’s New Magic Quadrant for Public Cloud Managed Service Providers Report

2nd Watch is honored to be named a leader in the 2017 Gartner “Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide” report.  We want to thank our customers that have partnered with us throughout the years and our employees who are key to 2nd Watch’s continued success and the success of our customers.

What are the contributing factors to our success?

  • One of the longest track records in AWS consulting services and a very close partnership with AWS. We understand the AWS environment and how to best operate within it, and we have numerous customer case studies, large-scale implementations, and a solid track record of positive customer experiences and strong customer retention. We have in-depth expertise in helping traditional businesses launch and operate digital business offerings.
  • A well-established cloud migration factory. Our professional services help enterprise customers with cloud readiness assessments, security assessments and cloud governance structures. We also assist customers with re-engineering IT processes for the cloud and integrating cloud processes with other established business processes.
  • Our Cloud Management Platform, which provides policy-driven governance capabilities, while still allowing the customer to fully exploit the underlying cloud platform

Gartner positioned 2nd Watch in the Leaders quadrant for its ability to execute and completeness of vision.  We are all-in with AWS Cloud and are committed to the success of our clients as evidenced in our use cases.

Some of the world’s largest enterprises partner with 2nd Watch for our ability to deliver tailored and integrated management solutions that holistically and proactively encompass the operating, financial, and technical requirements for public cloud adoption. In the end, customers gain more leverage from the cloud with a lot less risk.

We look forward to continued success in 2017 and beyond through successful customer implementations and ongoing management. To find out how we can partner with your company, visit us here.

Access Gartner’s “Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide” report, compliments of 2nd Watch.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.


The Most Popular AWS Products of 2016

We know from the past 5 years of Gartner Magic Quadrants that AWS is a leader among IaaS vendors, placing the furthest for ‘completeness of vision’ and ‘ability to execute.’ AWS’ rapid pace of innovation contributes to its position as the leader in the space. The cloud provider releases hundreds of product and service updates every year. So, which of those are the most popular amongst our enterprise clients?

We analyzed data from our customers for the year, from a combined 100,000+ instances running monthly. The most popular AWS products and services, represented by the percentage of 2nd Watch customers utilizing them in 2016, include Amazon’s two core services for compute and storage – EC2 and S3 – and Amazon Data Transfer, each at 100% usage. Other high-ranking products include Simple Queue Service (SQS) for message queuing (84%) and Amazon Relational Database Service or RDS (72%). Usage for these services remains fairly consistent, and we would expect to see these services across most AWS deployments.

There are some relatively new AWS products and services that made the “most-popular” list for 2016 as well. AWS Lambda serverless computing (38%), Amazon WorkSpaces, a secure virtual desktop service (27%), and Kinesis, a real-time streaming data platform (12%), are quickly being adopted by AWS users and rising in popularity.

The fastest-growing services in 2016, based on CAGR, include AWS CloudTrail (48%), Kinesis (30%), Config for resource inventory, configuration history, and change notifications (24%), Elasticsearch Service for real-time search and analytics (22%), Elastic MapReduce, a tool for big data processing and analysis (20%), and Redshift, the data warehouse service alternative to systems from HP, Oracle, and IBM (14%).

The accelerated use of these products demonstrates how quickly new cloud technologies are becoming the standard in today’s evolving market. Enterprises are moving away from legacy systems to cloud platforms for everything from back-end systems to business-critical, consumer-facing assets. We expect growth in each of these categories to continue as large organizations realize the benefits and ease of using these technologies.

Download the 30 Most Popular AWS Products infographic to find out which others are in high-demand.

-Jeff Aden, Co-Founder & EVP Business Development & Marketing


"Taste the Feeling" of Innovation

Coca-Cola North America Information Technology Leads the Pack

On May 4, 2016, Coca-Cola North America Information Technology and Splunk Inc. (NASDAQ: SPLK), provider of the leading software platform for real-time Operational Intelligence, announced that Coca-Cola North America Information Technology was named to this year’s InformationWeek Elite 100, a list of the top business technology innovators in the United States. Coca-Cola North America Information Technology, a Splunk customer, is being honored for the company’s marketing transformation initiative.

The Coca-Cola North America Information Technology division is a leader in migrating to the cloud and leveraging cloud-native technologies.  The division re-architected its digital marketing platform to leverage cloud technology, create business insights and flexibility, and take advantage of the scale and innovation of the public cloud offered by Amazon Web Services.

“The success you see from our digital marketing transformation is due to our intentional focus on innovation and agility as well as results, our team’s ingenuity and our partnership with top technology companies like Splunk,” said Michelle Routh, chief information officer, Coca-Cola North America. “We recognized a chance for IT to collaborate much more closely with the marketing arm of Coca-Cola North America to bring an unparalleled digital marketing experience to our business and our customers. We have moved technologies to the cloud to scale our campaigns, used big data analytics, beacon and Internet of Things technologies to provide our customers with unique, tailored experiences.”

Coca-Cola North America Information Technology is one of the most innovative customers we have seen today. They are able to analyze data that was previously not available to them through the use of Splunk® Enterprise software. Business insights include trending flavor mixes, usage data and geographical behavior on its popular Freestyle machines to help improve fulfillment and marketing offers.

We congratulate both the Coca-Cola North America Information Technology division and Splunk for being named to the InformationWeek Elite 100! Read more

-Jeff Aden, EVP Strategic Business Development & Marketing, Co-Founder
