In an effort to simplify the Reserved Instances (RI) model, AWS announced yesterday a change in the model based on customer feedback and purchasing patterns.
AWS will move from three types of RIs – Heavy, Medium, and Light Utilization – to a single type with three payment options. All continue to provide capacity assurance and discounts compared to On-Demand prices.
The three new payment options – All Upfront, Partial Upfront, and No Upfront – give you the flexibility to pay for the entire RI upfront, pay a portion upfront and the remainder over the term, or pay nothing upfront and spread the entire cost over the course of the term.
What does this mean for you? These changes will most benefit predictable workloads that run more than roughly 30% of the time. Where usage is less consistent, it may be better to stick with On-Demand rates. We’ve developed some related research on usage trends. Meanwhile, our role as a top AWS partner continues to be simplifying procurement of all AWS products and services.
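To make that “>30%” rule of thumb concrete, here is a quick back-of-the-envelope sketch. The rates below are hypothetical, not actual AWS prices; the break-even point is simply the ratio of the effective RI rate (upfront payment amortized over the term, plus any hourly charge) to the On-Demand rate.

```python
HOURS_PER_YEAR = 8760

# Hypothetical rates for illustration only -- check current AWS pricing.
ON_DEMAND_RATE = 0.10     # $/hour when paying On-Demand
RI_EFFECTIVE_RATE = 0.035 # $/hour, all RI payments amortized over the term

def yearly_cost_on_demand(utilization):
    """On-Demand: you only pay for the hours the instance actually runs."""
    return ON_DEMAND_RATE * HOURS_PER_YEAR * utilization

def yearly_cost_ri():
    """An RI is paid for whether or not the instance is running."""
    return RI_EFFECTIVE_RATE * HOURS_PER_YEAR

def break_even_utilization():
    """Utilization above which the RI becomes cheaper than On-Demand."""
    return RI_EFFECTIVE_RATE / ON_DEMAND_RATE

print(f"Break-even utilization: {break_even_utilization():.0%}")
```

With these illustrative numbers the RI wins for any workload running more than about a third of the time, which is why steady, predictable workloads are the natural fit.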
Download the AWS Usage infographic
Read more about the new RI model.
Check out our new AWS Scorecard for a look at the cloud services companies typically use. The data is taken from AWS usage trends among 2nd Watch customers from July through October 2014.
Among organizations using Amazon EC2, instance sizes typically break down as follows:
- 38% use Small instances
- 19% use Medium
- 15% use XLarge
- The very large (which include 2XLarge, 4XLarge and 8XLarge), and the very small (Micro) account for only 27% collectively.
Among our customers:
- 94% use Amazon’s Simple Storage Service (S3)
- 66% use Amazon’s Simple Notification Service (SNS) for push messaging
- 41% use Amazon’s Relational Database Service (RDS) to set up, operate, and scale a relational database in the cloud
Around three-quarters of our customers run Linux instances, with the remainder running Windows. However, Windows instances account for 31% of all computing hours, and more money is typically spent on them.
At AWS re:Invent, Amazon introduced its new EC2 Container Service (ECS). Although not yet available, it promises to be a vital part of the future of the AWS ecosystem. ECS is touted as a high-performance, highly scalable service that allows you to run distributed applications (in the form of Docker containers) on a fully managed cluster of EC2 instances. The main benefits of ECS as described by Amazon are: Easy Cluster Management, High Scale Performance, Flexible Scheduling, Extensible & Portable, Resource Efficiency, AWS Integration, and Security. All of these benefits help you easily build, run, and scale Docker containers in the cloud.
Is the concept of containers new to you? Let’s take a step back and talk about virtualization and the benefits of containers in terms of running web applications.
In its simplest, most well-known form, classic computer virtualization is the process of separating the software layer (guest OS) from the hardware layer (physical server). The separation is facilitated by other layers of software (host OS and hypervisor) that act as the go-between, giving you the ability to run multiple virtual machines on a single piece of physical hardware. This simple model is the basis for virtualization technologies including Amazon’s EC2 service.
Now let’s say you want to use that virtual infrastructure to run a web application. With a classic VM you are in charge of installing the OS. EC2 goes one step further by providing the virtual infrastructure with a vanilla OS already in place: when you fire up an instance you choose which operating system to run – Amazon Linux, Red Hat, Windows, etc. From there, the common steps to run a web application are to build the application, install the needed binaries and libraries, and start the appropriate services. With a few changes to firewall rules or Security Groups, your application is online. Congratulations – your application is running!
So how does containerization help? I like to think of containerization as taking virtualization one step further. Having the ability to run applications on individual virtual machines or instances is great, but it can become bulky and difficult to manage. An application that may be only 10-50 MB still requires all of the binaries, libraries, and the entire guest operating system to function. This can easily add 10-15 GB – yes, gigabytes, not megabytes – just for the application to run on its own VM. If you want to run several applications, VM resources and administration overhead multiply quickly.

Containerization technologies like Docker have gained industry popularity for the ability to build, transport, and run distributable applications in smaller self-contained packages. A container includes just the application and its needed dependencies. It runs as a separate, isolated process on the host operating system and shares the kernel with other containers. This makes it highly portable and much more efficient, since multiple containerized applications can run on the same system. The beauty of it is that a Docker container is completely portable, so you can run it anywhere – on a desktop computer, a physical server, a VM, or an EC2 instance – effectively facilitating faster deployments across development, QA, and production environments.
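As a minimal sketch of “just the application and needed dependencies,” here is a hypothetical Dockerfile for a small Python web app. The base image, file names, and port are assumptions for illustration – the point is that the image layers only what the app requires on top of a shared base, while the kernel itself comes from the host:

```dockerfile
# Start from a shared base image; the kernel comes from the host OS.
FROM ubuntu:14.04

# Install only the binaries and libraries this application needs.
RUN apt-get update && apt-get install -y python python-pip

# Add the application and its declared dependencies (hypothetical files).
COPY app.py requirements.txt /app/
RUN pip install -r /app/requirements.txt

# Document the port the app listens on, and define how to start it.
EXPOSE 8080
CMD ["python", "/app/app.py"]
```

Build the image once and the resulting container runs identically on a laptop, a physical server, a VM, or an EC2 instance.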
With ECS, Amazon aims to simplify managing containers even more by allowing you to run distributed applications on a managed cluster of EC2 instances. With a managed cluster you can concentrate on your containerized applications rather than on the cluster software or configuration management systems needed to run the infrastructure. This is similar to how RDS is a fully managed database service that allows you to concentrate on your data rather than the management and administration of the infrastructure that runs it. The lightweight footprint of a container allows the environment to scale up and down quickly with demand, making it a perfect match for the elasticity of EC2. Additionally, AWS provides a set of simple APIs, so you have complete control of the cluster running your containers and the ability to extend and integrate with your current environment.
The initial announcement is definitely intriguing and something to watch closely. The service is currently in preview, but you can sign up for the waitlist here.
-Derek Baltazar – 2nd Watch Senior Cloud Engineer
Learn how you can shut down your data center and migrate to AWS in only 2 months. Alvaro Echeverri, Senior Cloud Engineer at 2nd Watch, discusses how to move data out of your data center quickly in the final part of our 5-part video series.
Learn how you can shut down your data center and migrate to AWS in only 2 months. Imran Ahmed, Practice Director at 2nd Watch, discusses key factors in ensuring a smooth transition of your data center to the cloud in part 4 of our 5-part video series.
Learn how you can shut down your data center and migrate to AWS in only 2 months. Randall Barnes, Principal Architect at 2nd Watch, discusses best practices in migrating large transactional datasets and scripting or coding recommended to orchestrate a clean cutover in part 3 of our 5-part video series.