We know from the past five years of Gartner Magic Quadrants that AWS is a leader among IaaS vendors, placed furthest for both ‘completeness of vision’ and ‘ability to execute.’ AWS’ rapid pace of innovation contributes to its position as the leader in the space. The cloud provider releases hundreds of product and service updates every year. So, which of those are the most popular amongst our enterprise clients?
We analyzed data from our customers for the year, from a combined 100,000+ instances running monthly. The most popular AWS products and services, represented by the percentage of 2nd Watch customers utilizing them in 2016, include Amazon’s two core services for compute and storage – EC2 and S3 – and Amazon Data Transfer, each at 100% usage. Other high-ranking products include Simple Queue Service (SQS) for message queuing (84%) and Amazon Relational Database Service or RDS (72%). Usage for these services remains fairly consistent, and we would expect to see these services across most AWS deployments.
There are some relatively new AWS products and services that made the “most-popular” list for 2016 as well. AWS Lambda serverless computing (38%), Amazon WorkSpaces, a secure virtual desktop service (27%), and Kinesis, a real-time streaming data platform (12%), are quickly being adopted by AWS users and rising in popularity.
The fastest-growing services in 2016, based on CAGR, include AWS CloudTrail (48%), Kinesis (30%), Config for resource inventory, configuration history, and change notifications (24%), Elasticsearch Service for real-time search and analytics (22%), Elastic MapReduce, a tool for big data processing and analysis (20%), and Redshift, the data warehouse service alternative to systems from HP, Oracle and IBM (14%).
The accelerated use of these products demonstrates how quickly new cloud technologies are becoming the standard in today’s evolving market. Enterprises are moving away from legacy systems to cloud platforms for everything from back-end systems to business-critical, consumer-facing assets. We expect growth in each of these categories to continue as large organizations realize the benefits and ease of using these technologies.
Download the 30 Most Popular AWS Products infographic to find out which others are in high-demand.
-Jeff Aden, Co-Founder & EVP Business Development & Marketing
In an effort to simplify the Reserved Instances (RI) model, AWS announced yesterday a change in the model based on customer feedback and purchasing patterns.
AWS will move from three types of RIs – Heavy, Medium and Light Utilization – to a single type with three payment options. All continue to provide capacity assurance and discounts when compared to On-Demand prices.
The three new payment options give you flexibility to pay for the entire RI upfront, a portion of the RI upfront and a portion over the term, or nothing upfront and the entire RI over the course of the term.
What does this mean for you? These changes will really benefit predictable workloads that are running >30% of the time. In cases where usage is less consistent, it may be better for companies to stick with on-demand rates. We’ve developed some related research on usage trends. Meanwhile, our role as a top AWS partner continues to be simplifying procurement of all AWS products and services.
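To make the ">30% of the time" rule of thumb concrete, here is a minimal break-even sketch. The hourly rates below are hypothetical placeholders, not actual AWS prices; the point is the shape of the comparison, not the numbers.

```python
# Rough break-even sketch comparing On-Demand to a No Upfront RI.
# The rates below are hypothetical placeholders, not current AWS prices.

HOURS_PER_YEAR = 8760

on_demand_rate = 0.10      # $/hour, hypothetical
ri_effective_rate = 0.03   # $/hour averaged over the term, hypothetical

def annual_cost_on_demand(utilization):
    """Cost when you only pay for the hours you actually run."""
    return on_demand_rate * HOURS_PER_YEAR * utilization

def annual_cost_ri():
    """An RI is billed for every hour of the term, running or not."""
    return ri_effective_rate * HOURS_PER_YEAR

def break_even_utilization():
    """Utilization above which the RI is cheaper than On-Demand."""
    return ri_effective_rate / on_demand_rate

print(f"Break-even utilization: {break_even_utilization():.0%}")
# → Break-even utilization: 30%
```

With these placeholder rates, the RI wins once the instance runs more than roughly 30% of the time; a smaller discount pushes the break-even point higher, a larger one pushes it lower.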
Download the AWS Usage infographic
Read more about the new RI model.
Check out our new AWS Scorecard for a look at what we’re seeing companies typically use for their cloud services. Taken from AWS usage trends among 2nd Watch customers for July-October, 2014.
Download the Scorecard
Organizations using Amazon EC2 are typically broken down in the following percentages:
- 38% use Small instances
- 19% use Medium
- 15% use XLarge
- The very large (which include 2XLarge, 4XLarge and 8XLarge), and the very small (Micro) account for only 27% collectively.
Among our customers:
- 94% use Amazon’s Simple Storage Service (S3)
- 66% use Amazon’s Simple Notification Service (SNS) for push messaging
- 41% use Amazon’s Relational Database Service (RDS) to set up, operate, and scale a relational database in the cloud.
Around three-quarters of customers run Linux instances, with the remainder running Windows. However, Windows systems accounted for 31% of all computing hours, and more money is typically spent on Windows instances.
At AWS re:Invent, Amazon introduced its new EC2 Container Service (ECS). Although not available yet, it promises to be a vital part of the future of the AWS ecosystem. ECS is touted to be a high performance, highly scalable service that allows you to run distributed applications (in the form of Docker containers) on a fully managed cluster of EC2 instances. The main benefits of ECS as described by Amazon are: Easy Cluster Management, High Scale Performance, Flexible Scheduling, Extensible & Portable, Resource Efficiency, AWS Integration, and Security. All of these benefits help you easily build, run, and scale Docker containers in the cloud.
Is the concept of containers new to you? Let’s take a step back and talk about virtualization and the benefits of containers in terms of running web applications.
In its simplest, most well-known form, classic computer virtualization is the process of separating the software layer (guest OS) from the hardware layer (physical server). The separation is facilitated by other layers of software (host OS and hypervisor) that act as the go-between. This gives you the ability to run multiple virtual machines on a single piece of physical hardware. This simple explanation is the basis for virtualization technologies including Amazon’s EC2 service.
Now let’s say you want to use the virtual infrastructure to run a web application. In a classic VM you are in charge of installing the OS. EC2 goes one step further than a classic VM, as it provides you the virtual infrastructure with a vanilla OS. With EC2, when you fire up an instance you are given the choice of which operating system to run – Amazon Linux, Red Hat, Windows, etc. From there, the common steps needed to run a web application would be to build the application, install the needed binaries and libraries, and start the appropriate services. With a few changes to firewall rules or Security Groups, your application would be online. Congratulations – you now have your application running!
So how does containerization help? I like to think of containerization as taking virtualization one step further. Having the ability to run applications on individual virtual machines or instances is great but can become bulky and difficult to manage. An application that may be only 10-50 MBs still requires all of the binaries, libraries, and the entire guest operating system to function. This can easily require an additional 10-15 GBs – yes, gigabytes, not megabytes – for the application to run on its own VM. If you want to run several applications, VM resources and administration overhead multiply quickly. Containerization technologies like Docker have gained industry popularity for their ability to build, transport and run distributable applications in these smaller self-contained packages. A container includes just the application and needed dependencies. It runs as a separate isolated process on the host operating system and shares the kernel with other containers. This allows it to be highly portable and much more efficient by allowing multiple containerized applications to run on the same system. The beauty of it is that a Docker container is completely portable, so you can run it anywhere – like on a desktop computer, a physical server, VM, or EC2 instance – effectively facilitating faster deployments for development, QA, and production environments.
With ECS, Amazon aims to simplify managing containers even more by allowing you to run distributed applications on a managed cluster of EC2 instances. By having a managed cluster you can concentrate on your containerized applications rather than on the cluster software or configuration management systems needed to manage the infrastructure. This would be similar to how RDS is a fully managed database service that allows you to concentrate on your data and not the management and administration of the infrastructure that runs it. The lightweight footprint of a container allows the environment to scale up and down quickly with demand, making it a perfect match for the elasticity of EC2. Additionally, AWS provides a set of simple APIs, so you have complete control of the cluster running your containers and the ability to extend and integrate with your current environment.
The initial announcement is definitely intriguing and something to watch closely. The service is currently in preview, but you can sign up for the waitlist here.
-Derek Baltazar – 2nd Watch Senior Cloud Engineer
One of the things “everyone knows” about migrating to the Cloud is that it saves companies money. Now you don’t need all those expensive data centers with the very physical costs associated with them. So companies migrate to the Cloud and are so sure they will see their costs plummet… then they get their bill for Cloud usage and experience sticker shock. Typically, this is when our customers reengage 2nd Watch – they ask us why it costs so much, what they can do to decrease their costs, and of course everyone’s favorite – why didn’t you tell me it would be so much?
First, in order to know why you are spending so much you need to analyze your environment. I’m not going to go into how Amazon bills and walk you through your entire bill in this blog post. That’s something for another day perhaps. What I do want to look into is how to enable you to see what you have in your Cloud.
Step one: tag it! Amazon gives you the ability to tag almost everything in your environment, including ELBs, which were most recently added. I highly recommend that my customers make use of this feature. Personally, whenever I create something manually or programmatically I add tags to identify what it is, why it’s there, and of course who is paying for it. Even in my sandbox environment, it’s a way to tell colleagues “Don’t delete my stuff!” Programmatically, tags can be added through CloudFormation, Elastic Beanstalk, auto scaling, the CLI, as well as third party tools like Puppet and Chef. From a feature perspective, there are very few AWS components that don’t support tags, and more are constantly being added.
That’s all well and good, but how does this help analytics? Tagging is actually the basis for pretty much all analytics, and without it you have to work much harder for far less information. For example, I can tag EC2 instances to indicate applications, projects, or environments. I can then run reports that look for specific tags – how many EC2 instances are associated with Project X and what are the instance types? What are the business applications using my various RDS instances? – and suddenly when you get your bill, you have the ability to determine who is spending money in your organization and work with them on spending it smartly.
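The tag-based reporting idea can be sketched in a few lines of plain Python. The instance records and costs below are made up for illustration; in practice the inventory would come from the EC2 API, but the aggregation logic is the same.

```python
from collections import defaultdict

# Hypothetical inventory: in practice this would come from the EC2 API,
# but a plain list of dicts is enough to show the idea.
instances = [
    {"id": "i-001", "type": "m3.large",  "monthly_cost": 95.0,
     "tags": {"Project": "X", "Environment": "Production"}},
    {"id": "i-002", "type": "m3.medium", "monthly_cost": 48.0,
     "tags": {"Project": "X", "Environment": "Development"}},
    {"id": "i-003", "type": "c3.xlarge", "monthly_cost": 153.0,
     "tags": {"Project": "Y", "Environment": "Production"}},
]

def cost_by_tag(instances, tag_key):
    """Sum monthly cost per value of a tag, e.g. per Project."""
    totals = defaultdict(float)
    for inst in instances:
        value = inst["tags"].get(tag_key, "(untagged)")
        totals[value] += inst["monthly_cost"]
    return dict(totals)

print(cost_by_tag(instances, "Project"))
# → {'X': 143.0, 'Y': 153.0}
```

Grouping by any other tag key – Environment, application, cost center – is the same one-line change, which is exactly why consistent tagging pays off at bill time.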
Let’s take it a step further and talk about automation and intelligent Cloud management. If I tag instances properly I can automate tasks to control my Cloud based on those tags. For example, maybe I’m a nice guy and don’t make my development team work weekends. I can set up a task to shut down any instance with the “Environment = Development” tag every Friday evening and start it again Monday morning. Maybe I want to have an application only online at month end. I can set up another task to schedule when it is online and offline. Tags give us the ability to see what we are paying for and the hooks to control that cost with automation.
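The Friday-evening shutdown task boils down to a tag-based selection step. Here is a minimal sketch of that selection; the instance records are illustrative, and a real scheduled job would pass the resulting IDs to the EC2 stop-instances API call.

```python
# Which instances should a Friday-evening job stop? A minimal selection
# sketch; the instance IDs and tag values are illustrative only.
instances = [
    {"id": "i-101", "tags": {"Environment": "Development"}},
    {"id": "i-102", "tags": {"Environment": "Production"}},
    {"id": "i-103", "tags": {"Environment": "Development"}},
]

def instances_to_stop(instances, tag_key="Environment",
                      tag_value="Development"):
    """Return the IDs of instances whose tag matches the target value."""
    return [i["id"] for i in instances
            if i["tags"].get(tag_key) == tag_value]

print(instances_to_stop(instances))
# → ['i-101', 'i-103']
```

The Monday-morning start job is the mirror image: the same selection, feeding a start call instead of a stop call.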
I would be remiss if I didn’t point out that tags are an important part of using some great 2nd Watch offerings that help manage your AWS spend. Please check out 2W Insight for more information and how to gain control over and visibility into your cloud spend.
-Keith Homewood, Cloud Architect