
The Most Popular AWS Products of 2016

We know from the past 5 years of Gartner Magic Quadrants that AWS is a leader among IaaS vendors, placing the furthest for ‘completeness of vision’ and ‘ability to execute.’ AWS’ rapid pace of innovation contributes to its position as the leader in the space. The cloud provider releases hundreds of product and service updates every year. So, which of those are the most popular amongst our enterprise clients?

We analyzed data from our customers for the year, from a combined 100,000+ instances running monthly. The most popular AWS products and services, represented by the percentage of 2nd Watch customers utilizing them in 2016, include Amazon’s two core services for compute and storage – EC2 and S3 – and Amazon Data Transfer, each at 100% usage. Other high-ranking products include Simple Queue Service (SQS) for message queuing (84%) and Amazon Relational Database Service or RDS (72%). Usage for these services remains fairly consistent, and we would expect to see these services across most AWS deployments.

There are some relatively new AWS products and services that made the “most-popular” list for 2016 as well. AWS Lambda serverless computing (38%), Amazon WorkSpaces, a secure virtual desktop service (27%), and Kinesis, a real-time streaming data platform (12%), are quickly being adopted by AWS users and rising in popularity.

The fastest-growing services in 2016, based on CAGR, include AWS CloudTrail (48%), Kinesis (30%), Config for resource inventory, configuration history, and change notifications (24%), Elasticsearch Service for real-time search and analytics (22%), Elastic MapReduce, a tool for big data processing and analysis (20%), and Redshift, the data warehouse service alternative to systems from HP, Oracle and IBM (14%).

The accelerated use of these products demonstrates how quickly new cloud technologies are becoming the standard in today’s evolving market. Enterprises are moving away from legacy systems to cloud platforms for everything from back-end systems to business-critical, consumer-facing assets. We expect growth in each of these categories to continue as large organizations realize the benefits and ease of using these technologies.

Download the 30 Most Popular AWS Products infographic to find out which others are in high demand.

-Jeff Aden, Co-Founder & EVP Business Development & Marketing


Database Migration Service for RDS

For database administrators and engineers, migrating a database can be a major headache. It’s such a headache that it actually prohibits some teams from migrating to AWS’ Relational Database Service (RDS), even though doing so would save them time and money in the long run.

Imagine you’re a DBA for Small Business Y and you want to manage your data in RDS, but you have three terabytes of data with a ton of tables, foreign keys and many dependencies. Back in the day, migrating your SQL Server database to RDS might have looked something like this:

  1. Coordinate with Product Leads to find a time when your business can handle up to a 24-hour outage of the source database.
  2. Dump all the existing data into a backup.
  3. Restore the data on an intermediary EC2 SQL Server instance.
  4. Connect to the new RDS instance.
  5. Generate metadata script from the source or intermediary instance.
  6. Execute metadata script on target RDS instance.
  7. Migrate the data to the RDS instance using SQL Server’s Import tool.
  8. Optional: Encounter complications such as import failures, loss of metadata integrity, loss of data, and extremely slow import speeds.
  9. Cry a lot and then sleep for days.

Enter AWS Database Migration Service. This new tool from AWS allows DBAs to complete migrations to RDS, or even to a database instance on EC2, with minimal steps and minimal downtime to the source database. What does that mean for you? No 2AM migrations and no tears.

Migrations with AWS DMS have three simple steps:

  1. Provision a replication instance
  2. Define source and target endpoints
  3. Create one or more tasks to migrate data between source and target
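
The three steps above can be sketched with boto3's DMS client. Everything below is illustrative: the identifiers, hostnames, and credentials are hypothetical placeholders, and only the request payloads are built here; the actual API calls are shown in comments so nothing runs against a real account.

```python
import json

def build_dms_requests():
    """Build example payloads for the three DMS calls:
    create_replication_instance, create_endpoint (source and target),
    and create_replication_task. All names/hosts are made up."""
    replication_instance = {
        "ReplicationInstanceIdentifier": "example-repl-instance",  # hypothetical
        "ReplicationInstanceClass": "dms.t2.medium",
        "AllocatedStorage": 100,
    }
    source_endpoint = {
        "EndpointIdentifier": "example-source",
        "EndpointType": "source",
        "EngineName": "sqlserver",
        "ServerName": "onprem-sql.example.com",  # hypothetical host
        "Port": 1433,
        "Username": "dms_user",
        "Password": "REDACTED",
        "DatabaseName": "mydb",
    }
    target_endpoint = dict(source_endpoint,
        EndpointIdentifier="example-target",
        EndpointType="target",
        ServerName="mydb.abc123.us-east-1.rds.amazonaws.com",  # hypothetical RDS endpoint
    )
    # Selection rule: migrate every table in the dbo schema.
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-dbo",
            "object-locator": {"schema-name": "dbo", "table-name": "%"},
            "rule-action": "include",
        }]
    }
    task = {
        "ReplicationTaskIdentifier": "example-task",
        # full load, then keep replicating changes until you cut over
        "MigrationType": "full-load-and-cdc",
        "TableMappings": json.dumps(table_mappings),
    }
    return replication_instance, source_endpoint, target_endpoint, task

# With real AWS credentials you would feed these payloads to boto3, e.g.:
#   import boto3
#   dms = boto3.client("dms")
#   dms.create_replication_instance(**replication_instance)
#   ...then capture the ARNs each call returns and pass them
#   (SourceEndpointArn, TargetEndpointArn, ReplicationInstanceArn)
#   into dms.create_replication_task(**task).
```

Note the `full-load-and-cdc` migration type: that is what lets the task keep replicating source changes after the initial load, so cutover downtime stays minimal.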

The service automates table-by-table migration into your target database without backups, dump files, or manual administration of an intermediary database server. In addition, you can use the service to continue replicating changes from the source environment until you are ready to cut over. That means your application’s downtime will be just a few minutes instead of several hours or even days.

Another great feature of DMS is that you can watch the progress of each table’s migration in the AWS console. The dashboard allows users to see, in real time, how the migration is going and see exactly where any breakages occur.
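
The same per-table progress is available programmatically via the DMS `DescribeTableStatistics` API. Here is a small sketch that summarizes a response of that shape; the field names follow the DMS API, but the sample records are made up, and in practice you would get them from `boto3.client("dms").describe_table_statistics(ReplicationTaskArn=...)`.

```python
def summarize_table_progress(table_statistics):
    """Tally completed vs. errored tables from a list of
    DescribeTableStatistics-style records."""
    done = [t for t in table_statistics if t["TableState"] == "Table completed"]
    errored = [t for t in table_statistics if "error" in t["TableState"].lower()]
    return {"completed": len(done), "errored": len(errored),
            "total": len(table_statistics)}

# Hypothetical sample records for illustration:
sample = [
    {"SchemaName": "dbo", "TableName": "orders",
     "FullLoadRows": 120000, "TableState": "Table completed"},
    {"SchemaName": "dbo", "TableName": "customers",
     "FullLoadRows": 0, "TableState": "Table error"},
]
print(summarize_table_progress(sample))  # {'completed': 1, 'errored': 1, 'total': 2}
```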

[Screenshots: AWS DMS console showing replication task status and per-table migration progress]

If you are planning to use the AWS Database Migration Service soon, here are a few tips to streamline your process:

  • In your target instance, set up your table structure prior to migrating the data; the service will not automatically create your foreign keys and indexes for you. Generate the metadata with the AWS Schema Conversion Tool (or a simple mysqldump), and configure the migration task’s target table preparation mode to TRUNCATE rather than DROP, so that the tables you created aren’t wiped.
  • If you think you may have extra-large LOBs, use Full LOB Mode to avoid data truncation.
  • Additional best practices for using DMS can be found in AWS’s DMS documentation.
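
For the schema-only export mentioned above, a mysqldump invocation might look like the one assembled below. The host and database names are hypothetical; the key option is `--no-data`, which dumps table definitions, keys, and indexes without any rows.

```python
# Build (but don't run) a schema-only mysqldump command line.
# Hostname and database name are placeholders for illustration.
dump_cmd = [
    "mysqldump",
    "--no-data",      # structure only: tables, keys, indexes, no rows
    "--routines",     # include stored procedures/functions
    "--triggers",     # include triggers
    "-h", "source-db.example.com",
    "-u", "admin",
    "-p",             # prompt for the password interactively
    "mydb",
]
print(" ".join(dump_cmd))
```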

Enjoy!

-Trish Clark, Oracle DBA


A Deeper Look at AWS Aurora

A few months back we published a blog article titled Introducing Amazon Aurora, which described Amazon’s latest RDS database engine offering, Aurora.  AWS Aurora is Amazon’s own internally developed MySQL 5.6 compatible DBMS.

Let’s review what we learned from the last article:

  • MySQL “drop-in” compatibility
  • Roughly 5x performance increase over traditional MySQL
  • Moves from a monolithic to service-oriented approach
  • Dynamically expandable storage (up to 64 TB) with zero downtime or performance degradation
  • Data storage and IO utilization is “only pay for what you use” ($0.10/GB/mo., $0.20/million IO)
  • High performance SSD backed storage
  • Data is automatically replicated across three availability zones, with two copies in each AZ (six copies total)
  • Uses quorum writes (4 of 6) to increase write performance
  • Self-healing instant recovery through new parallel, distributed, and asynchronous redo logs
  • Cache remains warmed across DB restarts by decoupling the cache from the DB process
  • Up to 15 read replicas for scaling reads horizontally
  • Any read replica can be promoted to the DB master nearly instantly
  • Simulation of node, disk, or networking failure for testing/HA

In addition to those, I’d like to point out a few more features of Aurora:

  • Designed for 99.99% availability
  • Automatic recovery from instance and storage failures
  • On-volume instant snapshots
  • Continuous incremental off-volume snapshots to S3
  • Automatic restriping, mirror repair, hot spot management, and encryption
  • Backups introduce zero load on the DB
  • 400x (yes TIMES, NOT PERCENT!) lower read replica lag than MySQL
  • Much improved concurrency handling

That is a pretty impressive list of features/improvements that Aurora buys us over standard MySQL!  Even at the slight increase in Aurora’s RDS run rate, the performance gains more than offset the added run costs in typical use-cases.

So what does that list of features and enhancements translate to in the real world?  What does it ACTUALLY MEAN for the typical AWS customer?  Having worked with a fair number of DBMS platforms over the past 15+ years, I can tell you that this is a pretty significant leap forward.  The small cost increase over MySQL is nominal in light of the performance gains and added benefits.  In fact, the price-to-performance ratio (akin to horsepower-to-weight ratio for any gear heads out there) is advertised as being 4x that of standard MySQL RDS.  This means you will be able to gain similar performance on a significantly smaller instance size.  Combine that with only having to pay for the storage your database is actually consuming (not a pre-allocated chunk) and all of the new features, and choosing Aurora is nearly always going to be your best option.
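
To put a number on that price-to-performance point, here is a back-of-the-envelope sketch. The hourly rates below are assumptions for illustration only (real prices vary by region and over time); the idea is simply that if a smaller Aurora instance matches a larger MySQL instance's throughput, the delta compounds every month.

```python
# Assumed on-demand hourly rates -- hypothetical, for illustration only.
mysql_r3_xlarge_hr = 0.48   # assumed $/hr for a MySQL RDS db.r3.xlarge
aurora_r3_large_hr = 0.29   # assumed $/hr for an Aurora db.r3.large

# Suppose benchmarking shows the smaller Aurora instance keeping pace with
# the larger MySQL instance. Savings over a ~730-hour month:
hours_per_month = 730
savings = (mysql_r3_xlarge_hr - aurora_r3_large_hr) * hours_per_month
print(f"Monthly savings: ${savings:.2f}")
```

On top of instance savings, Aurora bills only for storage actually consumed rather than a pre-allocated volume, which tilts the comparison further.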

You should definitely consider using Aurora to replace any of your MySQL or MySQL-derivative databases (Oracle MySQL, Percona, MariaDB).  It’s designed using modern architectural principles, for the Cloud, and with high scalability and fault-tolerance in mind.  Whether you are currently running, or are considering running, your own MySQL DBMS solution on EC2 instances or are using RDS to manage it, you should strongly consider Aurora.  The only exception to this may be if you are using MySQL on a lower-end RDS, EC2, Docker, etc. instance due to lower performance needs and cost considerations.

Because Aurora has some specific performance requirements, it requires db.r3.large, or faster, instances.  In some cases people choose to run smaller instances for their MySQL as they have lower performance and availability needs than what Aurora provides and would prefer the cost savings.  Also, there is no way to run “Aurora” outside of RDS (as it is a platform and not simply an overhauled DBMS), which could be a consideration for those wanting to run it in a dev/test context on micro instances or containers. However, running a 5.6 compatible version of MySQL would provide application congruency between the RDS Aurora instance and the one-off (e.g. a developer’s MySQL 5.6 DB running in a Docker container).

In addition to being an instantly pluggable MySQL replacement, Aurora can be a great option for replacing high-end, expensive commercial DBMS solutions like Oracle.  Switching to Aurora could provide massive cost savings while delivering a simplified, powerful, and highly scalable solution.  The price-to-performance ratio of Aurora is really in a class by itself, and it provides all of the features and performance today’s critical business applications demand from a relational database.  Aurora gives you on-par, or even better, performance, manageability, reliability, and security for around 1/10th of the cost!

To learn how the experts at 2nd Watch can help you get the most out of your cloud database architecture, contact us today.

-Ryan Kennedy, Senior Cloud Architect


Introducing Amazon Aurora

When you think of AWS, the first thing that comes to mind is scalability, high availability, and performance. AWS is now bringing the same features to a relational database by offering Aurora through Amazon RDS. As announced at AWS re:Invent 2014, Amazon Aurora is a fully managed relational database engine that is feature compatible with MySQL 5.6. Though it is currently in preview, it promises to give you the performance and availability of high-end commercial databases with built-in scalability and without the administrative overhead.

To create Aurora, AWS has taken a service-oriented approach, making individual layers such as storage, logging and caching scale out while keeping the SQL and transaction layers using the core MySQL engine.

So, let’s talk about storage. Aurora can automatically add storage volume in 10 GB chunks up to 64 TB. This eliminates the need to plan for the data growth upfront and manually intervene when storage needs to be added. This feature is described by AWS to happen seamlessly without any downtime or performance hit.
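
The growth pattern described above can be sketched as a simple allocation function: storage is provisioned in whole 10 GB chunks, capped at 64 TB. This is an illustration of the stated behavior, not AWS's internal logic.

```python
import math

CHUNK_GB = 10
MAX_GB = 64 * 1024  # 64 TB ceiling

def allocated_storage_gb(data_gb):
    """Storage Aurora would have grown to for a given data size:
    rounded up to whole 10 GB chunks, capped at 64 TB."""
    chunks = math.ceil(data_gb / CHUNK_GB)
    return min(chunks * CHUNK_GB, MAX_GB)

print(allocated_storage_gb(3))     # 10   -> a tiny database still gets one chunk
print(allocated_storage_gb(3072))  # 3080 -> a 3 TB database
```

Since you pay only for consumed storage, this removes both the up-front capacity planning and the manual volume resizing a traditional deployment requires.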

Aurora’s storage volume is automatically replicated across three AWS Availability Zones (AZs), with two copies of the data in each AZ.  Aurora uses quorum writes and reads across these copies. It can handle the loss of two copies of data without affecting database write availability and up to three copies without affecting read availability. These SSD-powered, multi-tenant 10 GB storage chunks allow for concurrent access and reduce hot spots, making the volume fault tolerant. It is also self-healing, as it continuously scans data blocks for errors and repairs them automatically.
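
The fault-tolerance arithmetic falls out of the quorum sizes: with six copies, a write quorum of four, and a read quorum of three, the volume survives two lost copies (a whole AZ) for writes and three for reads. A minimal sketch of that rule:

```python
TOTAL_COPIES = 6   # two copies in each of three Availability Zones
WRITE_QUORUM = 4   # writes must reach 4 of 6 copies
READ_QUORUM = 3    # reads must reach 3 of 6 copies

def still_writable(failed_copies):
    return TOTAL_COPIES - failed_copies >= WRITE_QUORUM

def still_readable(failed_copies):
    return TOTAL_COPIES - failed_copies >= READ_QUORUM

# Losing two copies (e.g., an entire AZ) leaves writes available;
# losing a third copy still leaves reads available.
print(still_writable(2), still_writable(3))  # True False
print(still_readable(3), still_readable(4))  # True False
```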

Unlike a traditional database, which has to replay the redo log since the last checkpoint (a single-threaded process in MySQL), Aurora’s underlying storage replays redo logs on demand in a parallel, distributed, and asynchronous fashion. This allows you to recover from a crash almost instantaneously.

Aurora database can have up to 15 Aurora replicas. Since master and replica share the same underlying storage, you can failover with no data loss. Also, there’s very little load on the master since there’s no log replay, resulting in minimal replica lag of approximately 10 to 20 milliseconds. The cache lives outside of the database process which allows it to survive during a database restart. As a result, operations can be resumed much faster as there is no need to warm the cache. The backup is automatic, incremental and continuous. This enables point-in-time recovery up to the last five minutes.

A newly added feature of Aurora is its ability to simulate failure of node, disk, or networking components using SQL commands. This allows you to test the high availability and scaling features of your application.
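
As a rough illustration, Aurora’s fault-injection queries take the following general shape. This is a simplified sketch from memory of AWS’s documentation; consult the current Aurora docs for the exact syntax and available options before running any of these against a real cluster.

```sql
-- Crash a component of the Aurora instance (INSTANCE, DISPATCHER, or NODE)
ALTER SYSTEM CRASH INSTANCE;

-- Simulate a disk failure on a percentage of the storage volume for a window
ALTER SYSTEM SIMULATE 25 PERCENT DISK FAILURE FOR INTERVAL 1 MINUTE;

-- Simulate degraded networking for a window of time
ALTER SYSTEM SIMULATE 25 PERCENT NETWORK FAILURE FOR INTERVAL 1 MINUTE;
```

Because the failures are simulated for a bounded interval, you can rehearse your application’s failover behavior without doing any lasting damage.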

According to the Aurora FAQ, it is five times faster than a stock version of MySQL. Using Sysbench, “Aurora delivers over 500,000 selects/sec and 100,000 updates/sec running the same benchmark on the same hardware.”

While Aurora may cost slightly more per instance than RDS for MySQL (based on US-EAST-1 pricing), the features may justify it, and you only pay for the storage consumed at a rate of $0.10/GB per month and for IOs at a rate of $0.20 per million requests.

It’s exciting to see the challenges of scaling the relational database being addressed by AWS. To learn more, you can sign up for a preview at http://aws.amazon.com/rds/aurora/preview/.

2nd Watch specializes in helping our customers with legacy investments in Oracle achieve successful migrations to Aurora.  Our Oracle-certified experts understand the specialized features, high availability and performance capabilities that proprietary components such as RAC, ASM and Data Guard provide and are skilled in delivering low-risk migration roadmaps.  We also offer schema and PL/SQL migration guidance, client tool validation, bulk data upload and ETL integration services. To learn more, contact us.

-Ali Kugshia, Sr. Cloud Engineer


2nd Watch AWS Scorecard

Check out our new AWS Scorecard for a look at what we’re seeing companies typically use for their cloud services. Taken from AWS usage trends among 2nd Watch customers for July-October, 2014.

[Infographic: 2nd Watch AWS Scorecard]

Scorecard Highlights:

Organizations using Amazon EC2 are typically broken down in the following percentages:

  • 38% use Small instances
  • 19% use Medium
  • 15% use XLarge
  • The very large (2XLarge, 4XLarge and 8XLarge) and the very small (Micro) account for only 27% collectively.


Among our customers:

  • 94% use Amazon’s Simple Storage Service (S3)
  • 66% use Amazon’s Simple Notification Service (SNS) for push messaging
  • 41% use Amazon’s Relational Database Service (RDS) to set up, operate, and scale a relational database in the cloud.


Around three-quarters of customers run Linux instances, with the remainder using Windows. However, Windows systems accounted for 31% of all computing hours, and more money is typically spent on Windows instances.

