
Cost Accounting for Amazon WorkSpaces

Who would have thought, back in 2014 when AWS launched Amazon WorkSpaces, that it would have such an impact on the virtual desktop market?  Amazon WorkSpaces—AWS’ fully managed, secure desktop computing service—allows enterprises to easily provision cloud-based virtual desktops and give users access to the documents, applications, and resources they need from any supported device. Over these three short years, Amazon WorkSpaces has made great strides in reducing the costs related to VDI deployment, support, and software packaging while improving service levels and the deployment time of new applications. Amazon WorkSpaces provides the flexibility to securely work from anywhere, anytime, and on any device without the cost and complexity of traditional VDI infrastructure.

However, enterprises have faced a few challenges when deploying Amazon WorkSpaces.  One of the greatest challenges with wholesale deployment of Amazon WorkSpaces has been how to allocate the costs associated with thousands of instances to the various departments that are using each resource.  In 2016, AWS enabled users to tag each workspace with up to 50 tags.  While this is a step in the right direction, tagging is not included in the launch process. Instead, users have to remember to tag the instance after it is launched. This is where the process tends to break down, leaving thousands of dollars of cloud spend either unallocated or incorrectly allocated.

To address this drawback, it is important to create and implement two processes. The first step is pretty basic: Develop a process and train all team members responsible for launching new WorkSpaces to tag each workspace after it is launched.  The second step is to set up automation to efficiently audit and provide notifications when resources (specifically Amazon WorkSpaces) are launched without a particular tag or set of tags.  Unfortunately, with Amazon WorkSpaces you aren’t able to use the AWS Config “required-tags” rule to enforce your process policy as Config only supports a limited set of AWS resource types. (NOTE: You can check out the AWS Config Developer Guide for more on using it to enforce tag requirements on Config supported resources.) Instead, you can roll your own tag enforcement solution using AWS Lambda and CloudTrail.

This process is fairly simple. When you activate AWS CloudTrail logs, AWS will dump all API calls as JSON log files to an S3 bucket.  You can then set up a trigger on that bucket to invoke an AWS Lambda function that scans the logs for specific events, such as the Amazon WorkSpaces “CreateWorkspaces” API call. If it finds an event, it can publish a message to an SNS topic notifying you that the resource does not have the appropriate tag.  You can even set the message up to include the creator identity that AWS records with all new resources. This way, if you need to know who launched the instance in order to determine how to tag it, you will have that information included.
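The core of that Lambda function can be sketched as a pure filtering step over one CloudTrail log file. This is a minimal sketch, not a full solution: the record shapes follow CloudTrail’s JSON format, and the S3 download (gzip-compressed object) and SNS publish that would surround it in a real Lambda are only described in the comments.

```python
def find_workspace_launches(cloudtrail_log: dict) -> list:
    """Return (creator_arn, event_time) pairs for every WorkSpaces launch
    found in one CloudTrail log file, so each can be flagged for tagging.

    In a real Lambda you would first fetch and decompress the log object
    from S3 (boto3 s3.get_object + gzip.decompress + json.loads), then
    publish each hit to an SNS topic with sns.publish.
    """
    launches = []
    for record in cloudtrail_log.get("Records", []):
        # Match the WorkSpaces service and its CreateWorkspaces operation.
        if (record.get("eventSource") == "workspaces.amazonaws.com"
                and record.get("eventName") == "CreateWorkspaces"):
            # userIdentity.arn tells you who launched the workspace.
            creator = record.get("userIdentity", {}).get("arn", "unknown")
            launches.append((creator, record.get("eventTime")))
    return launches
```

Because the function is pure, you can unit test it against sample CloudTrail records without touching AWS, then wire the S3 trigger and SNS notification around it.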

Even when you have the tag in place, there is still the issue of how to allocate costs incurred before the resource was tagged.  Because AWS tags are point-in-time, only costs incurred after the tag is in place will be included in any AWS tag report. 2nd Watch’s cloud financial management tool, CMP|FM, is a powerful resource that can provide accurate cost accounting and deep financial insight into Amazon WorkSpaces usage by applying monthly boundaries to all tags.  In other words, any tag applied during the middle of the month will be applied to the entire month’s usage—appropriately accounting for all of your costs associated with Amazon WorkSpaces—without the need to manually allocate them to the correct department.

If you are looking to deploy Amazon WorkSpaces across your enterprise, it is important to ensure that you have the systems in place for proper cost accounting.  This includes implementing documented processes for tagging during launch and automation to identify and manage untagged instances, and leveraging powerful tools like 2nd Watch CMP|FM for all your cost allocation needs to ensure accurate cost accounting.

— Timothy Hill, Senior Product Manager, 2nd Watch


How to Choose the Right Hyperscale Managed Service Provider (MSP)

One of the challenges that many businesses struggle to overcome is how to keep up with the massive (and on-going) changes in technology and implement best practices for managing them.  The Public Cloud—in particular, Hyperscale Cloud providers like AWS—has ushered in a new era of IT technology. This technology changes rapidly and is designed to provide businesses with the building blocks that allow IT organizations to focus on innovation and growth, rather than mess with things that don’t differentiate their business.

A Hyperscale Managed Services Provider (MSP) can help address a very important gap for many businesses that struggle to:

  • Keep up with the frenetic pace of change in Public Cloud
  • Define and use best practices to achieve superior results
  • Manage their infrastructure the most efficient way possible


In most cases, Hyperscale MSPs have the deep expertise, technology, and automated capabilities to deliver high-quality managed services on a hyperscale platform. And because Hyperscale MSPs are solely focused on delivering capabilities on the cloud IaaS and PaaS that today’s enterprises are using, they are well versed in the best practices and standards to achieve the right results for their clients.

So, how do you go about selecting the right MSP?  The answer to this question is critical because we believe choosing the right MSP is one of the most important decisions you will make when consuming the public cloud.  It is also important to note that some of the qualifications to look for when selecting a Hyperscale MSP for your business needs are obvious, while others are more elusive.  I’ve included a few suggestions below to keep in mind when evaluating and selecting the right Hyperscale MSP.

Expertise on the Platform of Your Choice

First and foremost, no two public cloud providers are the same.  Each provider implements MSP strategies differently—from infrastructure and redundancy, to automation and billing concepts.  Secondly, it isn’t enough for a provider to tell you they have a few applications running on the platform.  When looking to entrust someone with your most valuable assets, expertise is key!  An important KPI for measuring the capabilities of an MSP that many businesses overlook is the provider’s depth and breadth of experience. A qualified Hyperscale MSP will have the right certifications, accreditations, and certified engineer-to-customer ratios.  You may feel good about signing with a large provider because they claim a higher number of certified engineers than the smaller firms, until…you realize their certified engineer-to-customer ratio is out of whack.  Having 200 certified engineers means nothing if you have 5,000+ customers.  At 2nd Watch, we have more certified engineers than we do customers, and we like it that way.

The Focus is on Customer Value

This is an obvious recommendation, but it does have some nuances.  Many MSPs will simply take the “Your mess for less” approach to managing your infrastructure.  Our customers tell us that one of the reasons they chose 2nd Watch was our focus on the things that matter to them.  There are many MSPs that have technical capabilities to manage Cloud infrastructure but not all are able to focus in on how an enterprise wants to use the Public Cloud.  MSPs with the ability to understand their client’s needs and goals tailor their approach to work for the enterprise vs. making them snap to some preconceived notion of how these things should work or function.  Find an MSP that is willing to make the Public Cloud work the way you want it to and your overall experience, and the outcome, will be game changing.

Optimize, Optimize, Optimize

Moving to the Public Cloud is just the first step in the journey to realizing business value and transforming IT.  The Cloud is dynamic in nature, and because of that, it is important that you don’t rest on just a migration once you are using it.  Adopting new instance types, adopting new services, or simply optimizing what you are running today are great ways to ensure your infrastructure is running at its best.  It is important to make sure your MSP has a strong, ongoing story about optimization and how they will provide it.  At 2nd Watch, we break optimization into three categories: Financial Optimization, Technical Optimization, and Operations Optimization.  It is a good idea to ask your MSP how they handle these three facets of optimization and at what cadence.  Keep in mind that some providers’ pricing structures can act as a disincentive for optimization.  For example, if your MSP’s billing structure is based on a percentage of your total cloud spend, and they reduce that bill by 30% through optimization efforts, they are now getting paid proportionately less and are likely not motivated to do this type of optimization on a regular basis, as it hurts their revenue.  Alternatively, we have also seen MSPs charge extra for these types of services, so the key is to make sure you ask if it’s included and get details about the services that would be considered an extra charge.

Full Service

The final qualification to look for in a Hyperscale MSP is whether they are a full-service provider.  Too often, pure-play MSPs are not able to provide a full-service offering under their umbrella.  The most common reason is that they lack professional services to assess and migrate workloads, or cloud architects to build out new functionality.

Our enterprise clients tell us that one of their major frustrations is having to work with multiple vendors on a project.  With multiple vendors, it is difficult to keep track of who is accountable for what.  Why would the vendor performing the migration be motivated to make sure the application is optimized for support if they aren’t providing the support?  I have heard horror stories of businesses trying to move to the cloud and becoming frustrated that multiple vendors are involved on the same workload, because the vendors blame each other for missing deadlines or not delivering key milestones or technical content.  Your business will be better served by hiring an MSP who can run the full cloud-migration process—from workload assessment and migration to managing and optimizing your cloud infrastructure on an ongoing basis.

In addition to the tips I have listed above, 2nd Watch recommends utilizing Gartner’s Magic Quadrant to help evaluate the various public cloud managed service providers available to you. Gartner positioned 2nd Watch in the Leaders quadrant of the 2017 Gartner Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide for our completeness of vision and ability to execute.  You can download and read the full report here.


-Kris Bliesner, CTO



Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document.


2nd Watch Named "Leader" in Gartner's New Magic Quadrant for Public Cloud Managed Service Providers Report

2nd Watch is honored to be named a leader in the 2017 Gartner “Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide” report.  We want to thank our customers that have partnered with us throughout the years and our employees who are key to 2nd Watch’s continued success and the success of our customers.

What are the contributing factors to our success?

  • One of the longest track records in AWS consulting services and a very close partnership with AWS. We understand the AWS environment and how to best operate within it, and we have numerous customer case studies, large-scale implementations, and a solid track record of positive customer experiences and strong customer retention. We have in-depth expertise in helping traditional businesses launch and operate digital business offerings.
  • A well-established cloud migration factory. Our professional services help enterprise customers with cloud readiness assessments, security assessments and cloud governance structures. We also assist customers with re-engineering IT processes for the cloud and integrating cloud processes with other established business processes.
  • Our Cloud Management Platform, which provides policy-driven governance capabilities, while still allowing the customer to fully exploit the underlying cloud platform

Gartner positioned 2nd Watch in the Leaders quadrant for its ability to execute and completeness of vision.  We are all-in with AWS Cloud and are committed to the success of our clients as evidenced in our use cases.

Some of the world’s largest enterprises partner with 2nd Watch for our ability to deliver tailored and integrated management solutions that holistically and proactively encompass the operating, financial, and technical requirements for public cloud adoption. In the end, customers gain more leverage from the cloud with a lot less risk.

We look forward to continued success in 2017 and beyond through successful customer implementations and ongoing management. To find out how we can partner with your company, visit us here.

Access Gartner’s “Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide” report, compliments of 2nd Watch.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.


The Most Popular AWS Products of 2016

We know from the past 5 years of Gartner Magic Quadrants that AWS is a leader among IaaS vendors, placing the furthest for ‘completeness of vision’ and ‘ability to execute.’ AWS’ rapid pace of innovation contributes to its position as the leader in the space. The cloud provider releases hundreds of product and service updates every year. So, which of those are the most popular amongst our enterprise clients?

We analyzed data from our customers for the year, from a combined 100,000+ instances running monthly. The most popular AWS products and services, represented by the percentage of 2nd Watch customers utilizing them in 2016, include Amazon’s two core services for compute and storage – EC2 and S3 – and Amazon Data Transfer, each at 100% usage. Other high-ranking products include Simple Queue Service (SQS) for message queuing (84%) and Amazon Relational Database Service or RDS (72%). Usage for these services remains fairly consistent, and we would expect to see these services across most AWS deployments.

There are some relatively new AWS products and services that made the “most-popular” list for 2016 as well. AWS Lambda serverless computing (38%), Amazon WorkSpaces, a secure virtual desktop service (27%), and Kinesis, a real-time streaming data platform (12%), are quickly being adopted by AWS users and rising in popularity.

The fastest-growing services in 2016, based on CAGR, include AWS CloudTrail (48%), Kinesis (30%), Config for resource inventory, configuration history, and change notifications (24%), Elasticsearch Service for real-time search and analytics (22%), Elastic MapReduce, a tool for big data processing and analysis (20%), and Redshift, the data warehouse service alternative to systems from HP, Oracle and IBM (14%).

The accelerated use of these products demonstrates how quickly new cloud technologies are becoming the standard in today’s evolving market. Enterprises are moving away from legacy systems to cloud platforms for everything from back-end systems to business-critical, consumer-facing assets. We expect growth in each of these categories to continue as large organizations realize the benefits and ease of using these technologies.

Download the 30 Most Popular AWS Products infographic to find out which others are in high-demand.

-Jeff Aden, Co-Founder & EVP Business Development & Marketing


"Taste the Feeling" of Innovation

Coca-Cola North America Information Technology Leads the Pack

On May 4th 2016, Coca-Cola North America Information Technology and Splunk Inc. (NASDAQ: SPLK), provider of the leading software platform for real-time Operational Intelligence, announced that Coca-Cola North America Information Technology was named to this year’s InformationWeek Elite 100, a list of the top business technology innovators in the United States. Coca-Cola North America Information Technology, a Splunk customer, is being honored for the company’s marketing transformation initiative.

Coca-Cola North America Information Technology division is a leader in migrating to the cloud and leveraging cloud native technologies.  The division re-architected its digital marketing platform to leverage cloud technology to create business insights and flexibility and to take advantage of scale and innovations of the public cloud offered by Amazon Web Services.

“The success you see from our digital marketing transformation is due to our intentional focus on innovation and agility as well as results, our team’s ingenuity and our partnership with top technology companies like Splunk,” said Michelle Routh, chief information officer, Coca-Cola North America. “We recognized a chance for IT to collaborate much more closely with the marketing arm of Coca-Cola North America to bring an unparalleled digital marketing experience to our business and our customers. We have moved technologies to the cloud to scale our campaigns, used big data analytics, beacon and Internet of Things technologies to provide our customers with unique, tailored experiences.”

Coca-Cola North America Information Technology is one of the most innovative customers we have seen today. They are able to analyze data that was previously not available to them through the use of Splunk® Enterprise software. Business insights include trending flavor mixes, usage data and geographical behavior on its popular Freestyle machines to help improve fulfillment and marketing offers.

We congratulate both Coca-Cola North America Information Technology division and Splunk for being named in InformationWeek Elite 100! Read more

-Jeff Aden, EVP Strategic Business Development & Marketing, Co-Founder


New AWS NAT Gateway – Things to Consider

Merry Christmas AWS fans!  The new AWS Managed NAT (Network Address Translation) Gateway is here.  While the new NAT Gateway offers lots of obvious advantages, there are a couple of things that you’ll want to consider before you terminate those old NAT EC2s.

    1. Redundancy

The new NAT Gateway has redundancy built in for itself, but that won’t deliver the cross-AZ high availability (HA) that you may have had previously.  In order to achieve full in-region HA, you’ll need at least one NAT Gateway per AZ, with each AZ’s subnets routed to the gateway in that AZ.
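The per-AZ routing rule above can be sketched as a small planning function. This is illustrative only: the subnet and gateway IDs in the test are hypothetical, and in practice you would create the gateways and routes with the EC2 API (e.g., boto3 `create_nat_gateway` and `create_route`).

```python
def plan_nat_routes(private_subnets_by_az: dict, nat_gateway_by_az: dict) -> list:
    """Return one default-route entry per private subnet, always pointing
    at the NAT Gateway in the subnet's own AZ, so an AZ outage doesn't
    take down outbound traffic in the surviving AZs."""
    routes = []
    for az, subnets in sorted(private_subnets_by_az.items()):
        if az not in nat_gateway_by_az:
            # A missing per-AZ gateway means cross-AZ HA is not achieved.
            raise ValueError(f"No NAT Gateway in {az}; in-region HA is broken")
        for subnet in subnets:
            routes.append({
                "subnet": subnet,
                "destination": "0.0.0.0/0",
                "nat_gateway": nat_gateway_by_az[az],
            })
    return routes
```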

    2. Cost

The new NAT Gateway has the normal per-hour cost but additionally has a per-GB data processing cost.  This should be nominal in some cases, but if your app has a lot of outbound traffic, you’ll need to factor that in.
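A quick back-of-the-envelope calculation makes the traffic sensitivity concrete. The rates below are placeholder examples, not quotes; check the current AWS pricing page for your region before relying on any numbers.

```python
HOURS_PER_MONTH = 730  # common approximation for monthly hours

def nat_gateway_monthly_cost(gb_out: float,
                             hourly_rate: float = 0.045,
                             per_gb_rate: float = 0.045) -> float:
    """Monthly NAT Gateway cost: the flat hourly charge for the month
    plus the per-GB data processing charge. Default rates are assumed
    examples only."""
    return HOURS_PER_MONTH * hourly_rate + gb_out * per_gb_rate
```

At these example rates, the gateway itself costs roughly the same whether it is idle or busy, but an app pushing several terabytes a month quickly makes the data processing charge the dominant line item.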

    3. Functionality

The new NAT Gateway trades managed ease-of-use for the unlimited functionality of the NAT EC2 Instance.  The NAT server sometimes doubles as a Bastion/Jump box.  Sometimes it’s where innocuous scripts live or could be a good home for Squid (for extra outbound security).  Needless to say, you’ll need to consider whether existing functionality that lives on the NAT can live somewhere else.

    4. Security

The new NAT Gateway will not have a security group attached.  This is important because the inbound rules on the NAT instance’s security group were a quick way to prevent the private subnets from making requests to the Internet on non-standard ports.  With the move to NAT Gateway, you’ll need to revisit all private-subnet security groups and introduce the outbound rules that used to live on that single legacy NAT security group.
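Those relocated controls end up as egress rules on the private-subnet security groups. As a minimal sketch, assuming an example policy of HTTP/HTTPS-only outbound, the rule dictionaries below follow the `IpPermissions` shape that EC2’s `authorize_security_group_egress` call accepts; the port list is illustrative, not a recommendation.

```python
def build_egress_rules(allowed_ports=(80, 443)) -> list:
    """Build one TCP egress permission per allowed outbound port, in the
    IpPermissions format used by EC2 security group API calls. Ports
    default to an example HTTP/HTTPS-only policy."""
    return [
        {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,  # single-port rule: FromPort == ToPort
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }
        for port in allowed_ports
    ]
```

You could then apply the result to each private-subnet group with boto3’s `authorize_security_group_egress` (after removing the default allow-all egress rule, which would otherwise make these rules moot).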

All in all, the NAT Gateway continues the drive to make AWS simpler and a more managed service.  With the appropriate consideration, this will make your environments more robust and easier to manage. Contact 2nd Watch to learn more.

Coin Graham, Senior Cloud Engineer