The Product Development team at 2nd Watch is responsible for many technology environments that support our software and solutions—and ultimately, our customers. These environments need to be easily built, maintained, and kept in sync. In 2016, 2nd Watch performed an analysis of the amount of AWS billing data we had collected and the number of payer accounts we had processed over the course of the previous year. Our analysis showed that these measurements had more than tripled from 2015, and projections showed that we would continue to grow at the same rapid pace, with AWS usage and client onboarding increasing daily. Knowing that the storage of data is critical for many systems, our Product Development team undertook an evaluation of the database architecture used to house our company's billing data—a single instance running SQL Server Web edition with the maximum number of EBS volumes attached.
During the evaluation, performance, scaling, availability, maintenance, and cost were considered and deemed most important for future success. The evaluation revealed that our billing database architecture could not meet the criteria laid out to keep pace with growth. We considered increasing capacity by scaling the VM up to the largest instance size in its family, or potentially upgrading to MS SQL Enterprise. In either scenario, the cost of the MS SQL instance doubled. The only option for scaling without substantially increasing our cost was to scale vertically; however, doing so would yield diminishing performance gains. Maintenance of the database had become a full-time job that was increasingly difficult to manage.
Ultimately, we chose a cloud-native solution, Amazon Aurora, for its scalability, low risk, and ease of use. Amazon Aurora is a MySQL-compatible relational database that provides speed and reliability at a lower cost. It offers greater than 99.99% availability and can store up to 64TB of data. Aurora is self-healing and fully managed, which, along with its other key features, made it an easy choice as we continue to meet the AWS billing usage demands of our customers and prepare for future growth.
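Because Aurora speaks the MySQL wire protocol, existing MySQL client code and drivers work against it unchanged; only the endpoint hostname differs. A minimal sketch, with a hypothetical cluster endpoint and credentials (not a real cluster):

```python
# Sketch: an Aurora cluster is addressed exactly like any MySQL server,
# so a standard MySQL connection URL is all a client needs. The endpoint,
# user, and database names below are illustrative placeholders.

def aurora_mysql_url(user: str, password: str, endpoint: str,
                     db: str, port: int = 3306) -> str:
    """Build a standard MySQL connection URL for an Aurora cluster endpoint."""
    return f"mysql://{user}:{password}@{endpoint}:{port}/{db}"

url = aurora_mysql_url(
    "billing_ro", "secret",
    "billing.cluster-abc123.us-west-2.rds.amazonaws.com",  # hypothetical endpoint
    "billing",
)
```

Any MySQL-aware library (PyMySQL, SQLAlchemy, etc.) could consume a URL like this, which is what makes a migration from another engine's client stack a matter of repointing rather than rewriting.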
The conversion from MS SQL to Amazon Aurora was successfully completed in early 2017 and, with the benefits and features that Amazon Aurora offers, gains were made in multiple areas. Product Development can now reduce the complexity of database schemas because of the way Aurora stores data. For example, a database with one hundred tables and hundreds of stored procedures was reduced to one table with ten stored procedures. Gains were made in performance as well. The billing system produces thousands of queries per minute, and Amazon Aurora handles the load with the ability to scale to accommodate the increasing number of queries. Maintenance of the Amazon Aurora system is now almost entirely hands-off. Tasks such as database backups are automated without the complicated task of managing disks. Additionally, data is copied across six replicas in three Availability Zones, which ensures availability and durability.
With Amazon Aurora, every environment is now easily built and set up using Terraform. All infrastructure, from the web tier to the database tier, is set up automatically, with Amazon CloudWatch alerting the company when issues occur. Data can easily be imported using automated processes and even anonymized if there is sensitive data or the environment is used for customer demos. With the conversion of our database architecture from a single MS SQL Server instance to Amazon Aurora, our Product Development team can now focus on accelerating development instead of maintaining its data storage system.
Earlier this month, AWS announced that it was integrating its Route 53 DNS service with its CloudWatch management tools. That’s a fantastic development for folks managing EC2 architectures who want monitoring and alerting to accompany their DNS management.
If you’re unfamiliar with DNS, an over-simplified analogy is a phonebook. Just as a phonebook maps a person's or business's name to a phone number, DNS manages the translation of Internet domain names (e.g. website address: www.2ndwatch.com) to IP addresses (22.214.171.124). There’s really a lot more to DNS than that, but it is outside the scope of this article, and there are some very good books on the subject if you’re interested in learning more. For those unfamiliar with Route 53, it is Amazon’s in-house DNS service designed with high availability, scalability, automation, and ease of use in mind. It can be a great tool for businesses with agile cloud services and basic (or even large-scale) DNS needs. Route 53 lets you build and easily manage your public DNS records as well as your private internal VPC DNS records if you so choose. In addition to the web-based AWS Management Console, Route 53 has its own API, so you can fully manage your DNS zones and records programmatically by integrating with your existing and future applications and tools. Here is a link to the Route 53 developer tools. There are also a number of free tools that others have written to leverage the Route 53 API. One of the more useful ones I’ve seen is cli53, a tool written in Python using the Boto library. Here’s a good article on it and a link to the GitHub site where you can download the latest version.
Route 53 is typically operated through the AWS Management Console. Within the management console navigation you can see two main sections: “Hosted Zones” and “Health Checks”. DNS records managed with Route 53 are organized into the “Hosted Zones” section, while the “Health Checks” section houses simple checks that monitor the health of endpoints used in a dynamic routing policy. A hosted zone is simply a DNS zone where you can store the DNS records for a particular domain name. Upon creation of a hosted zone, Route 53 assigns four AWS nameservers as the zone’s “Delegation Set” and the zone apex NS records. At that point you can create, delete, or edit records and begin managing your zone. To bring the zone into the real world, simply update the nameservers registered for your domain at your registrar to the four nameservers in the zone’s Delegation Set.
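For the programmatically inclined, the API call behind zone creation is CreateHostedZone, which takes the domain name plus a unique caller reference that makes the request idempotent across retries. A minimal sketch of the request parameters, built in plain Python so it runs without AWS credentials (the actual boto3 call is shown commented out):

```python
import uuid

def create_zone_params(domain: str, comment: str = "") -> dict:
    """Build the parameters for a Route 53 CreateHostedZone request.

    CallerReference must be unique per request -- Route 53 uses it to
    de-duplicate retried zone-creation calls.
    """
    return {
        "Name": domain,                       # trailing dot is optional
        "CallerReference": str(uuid.uuid4()),
        "HostedZoneConfig": {"Comment": comment, "PrivateZone": False},
    }

params = create_zone_params("example.com", "primary public zone")

# With AWS credentials configured, the live call would look like:
#   import boto3
#   resp = boto3.client("route53").create_hosted_zone(**params)
#   nameservers = resp["DelegationSet"]["NameServers"]  # the four assigned nameservers
```

The four nameservers returned in the response's Delegation Set are the ones you would then register with your domain registrar.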
If you’ve already got a web site or a zone you are hosting outside of AWS, you can easily import your existing DNS zone(s) and records into Route 53. You can do this either manually (using the AWS Management Console, which works fine if you only have a handful of records) or by using a command-line tool like cli53. Another option is to use Python and the Boto library to access the Route 53 API directly. If the scripting and automation pieces are out of your wheelhouse and you have a bunch of records you need to migrate, don’t worry: 2nd Watch has experienced engineers who can assist you with just this sort of thing! Once you have your zones and records imported into Route 53, all that’s left to do is update your master nameservers with your domain registrar. The master nameservers are the ones assigned as the “Delegation Set” when you create your zone in Route 53. These will now be the authoritative nameservers for your zones.
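To give a feel for what an import script does, here is a deliberately minimal illustration of turning one simple BIND-style zone-file line into the kind of record structure the Route 53 API expects. This is not a full zone-file parser; real zone files have $ORIGIN/$TTL directives, comments, and multi-line records, all of which tools like cli53 handle for you:

```python
def parse_record(line: str) -> dict:
    """Parse one simple BIND-style record line of the form:
        name TTL class type value
    Returns a dict shaped like a Route 53 resource record entry.
    Minimal illustration only -- no directives, comments, or
    multi-line records are handled.
    """
    name, ttl, rclass, rtype, value = line.split(None, 4)
    return {"Name": name, "TTL": int(ttl), "Type": rtype, "Value": value}

record = parse_record("www.example.com. 300 IN A 192.0.2.10")
```

Records parsed this way would then be grouped into ChangeBatch requests against the API's ChangeResourceRecordSets call, which is essentially what the command-line importers do in bulk.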
Route 53 supports the standard DNS record types: A, AAAA, CNAME, MX, NS, PTR, SOA, SPF, SRV, and TXT. In addition, Route 53 includes a special type of resource record called an “alias,” which is an extension to standard DNS functionality. An alias record is similar to a CNAME but with some important differences. Alias records map to other AWS resources like CloudFront distributions, ELB load balancers, S3 buckets with static web-hosting enabled, or another Route 53 resource record in the same hosted zone. They differ from CNAME records in that they are not visible to resolvers; resolvers see only the resulting A record and IP address of the target. An alias record can also be created at the zone apex, which standard CNAME records don’t support. This has the advantage of completely masking the somewhat cryptic DNS names associated with CloudFront, S3, ELB, and other resources. It allows you to disguise the fact that you’re utilizing a specific AWS service or resource if you so desire.
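To make the alias concept concrete, here is roughly what a ChangeBatch for an alias A record at the zone apex looks like (the structure passed to the API's ChangeResourceRecordSets call). The ELB hosted-zone ID and DNS name below are hypothetical placeholders:

```python
def alias_change_batch(apex: str, elb_zone_id: str, elb_dns: str) -> dict:
    """Build a Route 53 ChangeBatch that points a zone apex at an ELB
    via an alias A record -- something a plain CNAME cannot do, since
    CNAMEs are not allowed at the apex."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": apex,
                "Type": "A",  # resolvers see an ordinary A record
                "AliasTarget": {
                    "HostedZoneId": elb_zone_id,  # the ELB's hosted zone ID, not yours
                    "DNSName": elb_dns,
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    }

# Placeholder values for illustration only
batch = alias_change_batch(
    "example.com.",
    "Z0000000EXAMPLE",
    "my-elb-1234567890.us-east-1.elb.amazonaws.com.",
)
```

Note that an AliasTarget carries no TTL and no literal value of its own; Route 53 resolves the target on your behalf and answers with plain A records.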
One of the more useful features in Route 53 is the support for policy based routing. You can configure it to answer DNS queries with specific IPs from a group based on the following policies:
- Latency-based: routing traffic to the region with the lowest latency in relation to the client
- Failover: if an IP in the group fails its health check, Route 53 will no longer answer queries with that IP
- Weighted: using specific ratios to direct more or less traffic at certain IPs in the group
- Simple: standard round-robin DNS, which distributes traffic roughly evenly across all IPs in the group
*NOTE: It is important to keep TTL in mind when using routing policies with health checks, as a lower TTL makes your applications more responsive to dynamic changes when a failure is detected. This is especially important if you are using the Failover policy as a traffic switch for HA purposes.
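Under the Weighted policy, each record in the group is returned with probability weight divided by the total of all weights. A quick back-of-the-envelope calculation of expected traffic share (illustrative only; real observed traffic also depends on resolver caching and TTLs):

```python
def traffic_share(weights: dict) -> dict:
    """Expected fraction of DNS answers each record receives under
    Route 53 weighted routing: weight / sum of all weights."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# e.g. shift a quarter of traffic to a canary endpoint
share = traffic_share({"prod": 3, "canary": 1})  # prod 0.75, canary 0.25
```

Gradually adjusting the weights in a pair of records like this is a common, low-risk way to roll traffic onto new infrastructure.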
As mentioned at the beginning of this article, Amazon recently announced that it has added Route 53 support to CloudWatch. This means you can use CloudWatch to monitor your Route 53 zones, check health, set threshold alarms and trigger events based on health data returned. Any Route 53 health check you configure gets turned into a CloudWatch metric, so it’s constantly available in the CloudWatch management console and viewable in a graph view as well as the raw metrics.
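Concretely, each health check surfaces in CloudWatch as a HealthCheckStatus metric (1 = healthy, 0 = unhealthy) in the AWS/Route53 namespace, keyed by the HealthCheckId dimension. A sketch of the parameters you would pass to CloudWatch's PutMetricAlarm to page when a check goes unhealthy; the health check ID and SNS topic ARN below are hypothetical placeholders:

```python
def health_alarm_params(check_id: str, sns_topic_arn: str) -> dict:
    """Build CloudWatch PutMetricAlarm parameters that fire when a
    Route 53 health check reports unhealthy for a full minute."""
    return {
        "AlarmName": "route53-unhealthy-" + check_id,
        "Namespace": "AWS/Route53",
        "MetricName": "HealthCheckStatus",  # 1 = healthy, 0 = unhealthy
        "Dimensions": [{"Name": "HealthCheckId", "Value": check_id}],
        "Statistic": "Minimum",             # any unhealthy sample in the period trips it
        "Period": 60,
        "EvaluationPeriods": 1,
        "Threshold": 1,
        "ComparisonOperator": "LessThanThreshold",
        "AlarmActions": [sns_topic_arn],
    }

# Placeholder IDs for illustration only
params = health_alarm_params(
    "11111111-2222-3333-4444-555555555555",
    "arn:aws:sns:us-east-1:123456789012:ops-alerts",
)

# With credentials configured this would become:
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(**params)
```

Using Minimum as the statistic means a single unhealthy data point within the period is enough to trip the alarm, which suits a fail-fast HA setup.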
If you’re running your web site off of an EC2 server farm or are planning on making a foray into AWS you should definitely look into both Route 53 and CloudWatch. This combination not only helps with initial DNS configuration, but the CloudWatch integration now makes it especially useful for monitoring and acting upon events in an automated fashion. Check it out.
-Ryan Kennedy, Senior Cloud Engineer