When you think of AWS, the first things that come to mind are scalability, high availability, and performance. AWS is now bringing those same qualities to the relational database by offering Aurora through Amazon RDS. As announced at AWS re:Invent 2014, Amazon Aurora is a fully managed relational database engine that is feature compatible with MySQL 5.6. Though it is currently in preview, it promises the performance and availability of high-end commercial databases, with built-in scalability and without the administrative overhead.
To create Aurora, AWS took a service-oriented approach: individual layers such as storage, logging, and caching scale out independently, while the SQL and transaction layers continue to use the core MySQL engine.
So, let’s talk about storage. Aurora automatically grows its storage volume in 10 GB increments, up to 64 TB. This eliminates the need to plan for data growth up front or to intervene manually when storage needs to be added. AWS describes this as happening seamlessly, with no downtime or performance hit.
Aurora’s storage volume is automatically replicated across three AWS Availability Zones (AZs), with two copies of the data in each AZ. Aurora uses quorum writes and reads across these six copies (a write quorum of four of six and a read quorum of three of six), so it can lose two copies without affecting write availability and up to three copies without affecting read availability. Because the volume is built from SSD-backed, multi-tenant 10 GB chunks, it supports concurrent access, reduces hot spots, and is fault tolerant. It is also self-healing: it continuously scans data blocks for errors and repairs them automatically.
Unlike a traditional database, which must replay the redo log since the last checkpoint (a single-threaded process in MySQL), Aurora’s underlying storage replays redo records on demand, in parallel, in a distributed and asynchronous fashion. This allows the database to recover from a crash almost instantaneously.
An Aurora database can have up to 15 Aurora Replicas. Since the master and its replicas share the same underlying storage, you can fail over with no data loss. There is also very little additional load on the master, since there is no log replay, resulting in minimal replica lag of roughly 10 to 20 milliseconds. The cache lives outside of the database process, which allows it to survive a database restart; operations can resume much faster because there is no need to warm the cache. Backups are automatic, incremental, and continuous, enabling point-in-time recovery to within the last five minutes.
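To make this concrete, here is a minimal boto3 sketch of adding a replica to an existing cluster and forcing a failover to it. The cluster and instance identifiers are hypothetical, and this uses the standard RDS API rather than anything Aurora-specific:

```python
import boto3

# Assumes an existing Aurora cluster; all identifiers here are hypothetical.
rds = boto3.client("rds", region_name="us-east-1")

# Add an Aurora Replica by creating a new instance in the existing cluster.
# The replica attaches to the cluster's shared storage volume, so no bulk
# data copy is required before it can serve reads.
rds.create_db_instance(
    DBInstanceIdentifier="my-aurora-replica-1",
    DBInstanceClass="db.r3.large",
    Engine="aurora",
    DBClusterIdentifier="my-aurora-cluster",
)

# Force a failover, promoting the chosen replica to be the new primary.
rds.failover_db_cluster(
    DBClusterIdentifier="my-aurora-cluster",
    TargetDBInstanceIdentifier="my-aurora-replica-1",
)
```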
A newly added feature of Aurora is the ability to simulate the failure of a node, disk, or networking component using SQL commands. This allows you to test the high availability and scaling features of your application.
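As a rough sketch (assuming a MySQL client library such as PyMySQL and hypothetical connection details), a fault-injection test might look like the following; confirm the exact statement syntax against the Aurora documentation:

```python
import pymysql

# Hypothetical cluster endpoint and credentials.
conn = pymysql.connect(
    host="my-aurora-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",
    user="admin",
    password="********",
)

with conn.cursor() as cur:
    # Simulate a partial disk failure on the underlying storage for one minute
    # and watch how the application and the self-healing storage respond.
    cur.execute("ALTER SYSTEM SIMULATE 25 PERCENT DISK FAILURE FOR INTERVAL 1 MINUTE")

# Other fault-injection statements (e.g., ALTER SYSTEM CRASH INSTANCE) exist;
# check the Aurora documentation for the full, current syntax.
```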
According to the Aurora FAQ, it is five times faster than a stock version of MySQL: using Sysbench, “Aurora delivers over 500,000 selects/sec and 100,000 updates/sec running the same benchmark on the same hardware.”
While Aurora may cost slightly more per instance than RDS for MySQL (based on US-EAST-1 pricing), the features may justify it, and you pay only for the storage consumed, at $0.10 per GB-month, and for I/O at $0.20 per million requests. For example, a database using 500 GB of storage and 100 million I/O requests in a month would add roughly $50 + $20 to the instance cost.
It’s exciting to see the challenges of scaling the relational database being addressed by AWS. To learn more, you can sign up for a preview at http://aws.amazon.com/rds/aurora/preview/.
2nd Watch specializes in helping our customers with legacy investments in Oracle achieve successful migrations to Aurora. Our Oracle-certified experts understand the specialized features, high availability and performance capabilities that proprietary components such as RAC, ASM and Data Guard provide and are skilled in delivering low-risk migration roadmaps. We also offer schema and PL/SQL migration guidance, client tool validation, bulk data upload and ETL integration services. To learn more, contact us.
– Ali Kugshia, Sr. Cloud Engineer
As firms progress through the transition from traditional IT to the AWS Cloud, there is often a moment of fear and anxiety related to managing cost. The integration and planning group has done an excellent job of selecting the right consulting partner. Contracts have been negotiated by legal and procurement. Capital funding has been allocated to cover the cost of application migrations. Designs are accepted and the project manager has laid out the critical path to success. Then at the last hour, just before production launch, the finance team chimes in – “How are we going to account for each application’s monthly usage?”
So much planning and preparation goes into integration because we’ve been through this process with each new application. But moving to the public cloud presents a new challenge, one that’s easily tackled with a well-developed model for managing cost in a usage-based environment.
AWS allows us to deploy IaaS (Infrastructure as a Service), and that infrastructure is shared across all of our applications in the cloud. With the proper implementation of AWS Resource Tags, cloud resources can be associated with unique applications, departments, environments, locations and any other category for which cost-reporting is essential.
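As a simple illustration (the resource IDs and tag values below are hypothetical), tags can be applied programmatically so every resource carries the categories finance cares about:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Apply a standard set of cost-allocation tags to a set of resources.
ec2.create_tags(
    Resources=["i-0123456789abcdef0", "vol-0123456789abcdef0"],
    Tags=[
        {"Key": "Application", "Value": "order-portal"},
        {"Key": "Environment", "Value": "production"},
        {"Key": "Department", "Value": "ecommerce"},
        {"Key": "CostCenter", "Value": "1234"},
    ],
)
```

Note that tag keys must also be activated as cost allocation tags in the billing console before they show up in your cost reports.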
Firms must have the right dialogue with their cloud technology partner during the design process. Here’s an outline of the five phases of the 2nd Watch AWS Tagging Methodology, which has been used to help companies plan for cloud-based financial management:
Phase 1: Ask Critical Questions – Begin by asking Critical Questions that you want answered about utilization, spending and resource management. Consider ongoing projects, production applications, and development environments. Critical Questions can include: Which AWS resources are affecting my overall monthly bill? What is the running cost of my high availability features like warm standby, pilot light or geo-redundancy? How do architectural changes or application upgrades change my monthly usage?
Phase 2: Develop a Tagging Strategy – The Cloud Architect will interpret these questions and develop a tagging strategy to meet your needs. That strategy becomes a component of the Detailed Design and is later implemented during the build project. During this stage it’s important to consider how standards will be enforced within the organization. Configuration drift occurs when other groups don’t use the standardized AWS Resource Tags or the naming convention defined for them; later, when it’s time for reporting, this creates problems for accounting and finance.
Phase 3: Determine Which AWS Resources Are In Scope – Solicit feedback from your internal accounting department and application owners. Create a list of AWS Resources and applications that need to be accounted for. Refer frequently to AWS online documentation because the list of taggable resource types is updated often.
Phase 4: Define How Chargebacks and Showbacks Will Be Used – Determine who will receive usage-based reports for each application, environment or location. Some firms have adopted a Chargeback model in which the accounting team bills the internal groups who have contributed to the month’s AWS usage. Others have used these reports for Showback only, where the usage & cost data is used for planning, forecasting and event correlation. 2W Insight offers a robust reporting engine to allow 2nd Watch customers the ability to create, schedule and send reports to stakeholders.
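For teams building their own showback reports rather than using a tool like 2W Insight, a quick sketch like the one below can total spend per application from a billing export. The column names are assumptions; cost allocation tags typically appear as columns prefixed with “user:”, and the exact cost column varies by report type:

```python
import csv
from collections import defaultdict

# Sum monthly cost per application from a detailed billing export.
totals = defaultdict(float)

with open("billing-report.csv", newline="") as f:
    for row in csv.DictReader(f):
        app = row.get("user:Application") or "untagged"
        cost = row.get("Cost") or "0"
        totals[app] += float(cost)

for app, cost in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{app:30s} ${cost:,.2f}")
```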
Phase 5: Make Regular Adjustments for Optimization – Talk to your Cloud Architect about automation to eliminate configuration drift. Incorporate your AWS tagging standards into your CloudFormation templates. Regularly review tag keys and values to identify non-standard use, and solicit feedback from your accounting team to ensure the reports are useful and accurate.
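A lightweight audit along these lines (the required tag keys here are examples, not a prescribed standard) can surface drift before it reaches the monthly bill:

```python
import boto3

REQUIRED_KEYS = {"Application", "Environment", "Department", "CostCenter"}

ec2 = boto3.client("ec2", region_name="us-east-1")
paginator = ec2.get_paginator("describe_instances")

# Report instances that are missing any of the required tag keys.
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            keys = {t["Key"] for t in instance.get("Tags", [])}
            missing = REQUIRED_KEYS - keys
            if missing:
                print(instance["InstanceId"], "missing tags:", ", ".join(sorted(missing)))
```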
Working with an AWS Premier Consulting Partner is critical to designing for best practices like cost management. Challenge your partner and ask for real-world examples of AWS Resource Tagging strategies and cost reports. Planning to manage costs in the cloud is not a step that should be skipped. It’s critical to incorporate your financial reporting objectives into the technical design early, so that they can become established, standardized processes for each new application in the cloud.
For more information, please reach out to Zachary Bonugli email@example.com.
– Zachary Bonugli, Global Account Manager
With the New Year come the resolutions. When the clock struck midnight on January 1, 2015, many people turned the page on 2014 and made a promise of self-improvement. Often it’s eating healthier or going to the gym more regularly. This year, I thought I’d put a spin on the typical New Year’s resolution and make it about AWS.
How could you improve your AWS environment? Without getting too overzealous, let’s focus on the fundamental AWS network infrastructure, specifically the AWS Virtual Private Cloud (VPC). An AWS VPC is a logically isolated, user-controlled piece of the AWS Cloud where you can launch and use other AWS resources. You can think of it as your own slice of AWS network infrastructure that you can fully customize and tailor to your needs. So let’s talk about VPCs and how you can improve on yours.
- Make sure you’re using VPCs! The simple act of implementing a VPC can put you way ahead of the game. VPCs provide a ton of customization options, from defining your own VPC size via IP addressing, to controlling subnets, route tables, and gateways for network flow between your resources, to defining fine-grained security using security groups and network ACLs. With VPCs you can control things that simply can’t be done with EC2-Classic.
- Are you using multiple Availability Zones (AZs)? An AZ is a distinct, isolated location engineered to be insulated from failures in other AZs. Make sure you take advantage of multiple AZs within your VPC. Too often, instances are launched into a VPC with no rhyme or reason. It is great practice to use the low-latency links and engineered isolation of AZs to facilitate high availability or disaster recovery scenarios.
- Are you using VPC security groups? “Of course I am.” Are you using network ACLs? “I know they are available, but I don’t use them.” Are you using AWS Identity and Access Management (IAM) to secure access to your VPCs? “Huh, what’s an IAM?!” Don’t fret, most environments don’t take advantage of all the tools available for securing a VPC, but now is the time to reevaluate your VPC and see whether you can, or even should, use these security options. Security groups are ingress and egress firewall rules you place on individual AWS resources in your VPC and are one of the fundamental building blocks of an environment. Now may be a good time to audit your security groups to make sure you’re following the principle of least privilege, meaning no access or rules that aren’t absolutely needed (see the audit sketch after this list). Network ACLs work at the subnet level and may be useful in some cases. In larger environments, IAM may be a good idea if you want more control over how resources interact with your VPC. In any case, there is never a bad time to reevaluate the security of your environment, particularly your VPC.
- Clean up your VPC! One of the most common issues in AWS environments is resources that are no longer being used. Now may be a good time to audit your VPC, take note of what instances you have out there, and make sure you don’t have resources racking up unnecessary charges. It’s a good idea to account for all instances, leftover EBS volumes, and even old AMIs sitting in your account; extra EIPs, security groups, and subnets can be cleaned up as well (the sketch after this list flags a few of these). One great tool to use is AWS Trusted Advisor. Per the AWS service page, “Trusted Advisor inspects your AWS environment and finds opportunities to save money, improve system performance and reliability, or help close security gaps.”
- Bring your VPC home. AWS, being a public cloud provider, allows you to create VPCs that are isolated from everything, including your on-premises LAN or datacenter. Because of this isolation, all network activity between the user and their VPC travels over the internet. One of the great things about VPCs is the many connectivity options they provide, so now is the time to reevaluate how you use VPCs in conjunction with your local LAN environment. Maybe it is time to set up a VPN and turn your environment into a hybrid of cloud and physical infrastructure, allowing all communication to pass over a private network. You can take it one step further with AWS Direct Connect, a service that allows you to establish private connectivity between AWS and your datacenter, office, or colocation environment. This can help reduce your network costs, increase bandwidth throughput, and provide a more consistent overall network experience.
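To put a couple of these resolutions into practice, here is a rough boto3 audit sketch that flags wide-open security group rules, unattached EBS volumes, and unassociated Elastic IPs. Treat it as a starting point rather than a complete audit:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Flag security group rules open to the entire internet (0.0.0.0/0).
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in sg["IpPermissions"]:
        for ip_range in perm.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print("Wide-open ingress:", sg["GroupId"], "port", perm.get("FromPort"))

# List EBS volumes that are not attached to any instance.
for vol in ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]:
    print("Unattached volume:", vol["VolumeId"], vol["Size"], "GiB")

# List Elastic IPs that are allocated but not associated with anything.
for addr in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in addr and "InstanceId" not in addr:
        print("Unassociated EIP:", addr.get("PublicIp"))
```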
These are just a few things you can do when reevaluating your AWS VPC for the New Year. By following these guidelines you can gain efficiencies in your environment you didn’t have before and can rest assured your environment is in the best shape possible for all your new AWS goals of 2015.
– Derek Baltazar, Senior Cloud Engineer
If you missed this breakout session at AWS re:Invent 2014, don’t miss the recording below, where Yamaha and 2nd Watch explain how they migrated Yamaha’s data center to Amazon Web Services.
When Yamaha Corporation needed to reduce infrastructure cost, AWS was the solution. The following video talks about how Yamaha and 2nd Watch migrated mission-critical applications, configured Availability Zones for data replication, configured disaster recovery for Oracle E-Business Suite, and designed file system backups for Yamaha’s environment on AWS.
Information on AWS re:Invent
AWS re:Invent is a learning conference that offers 3 days of technical content so attendees can dive deeper into the AWS cloud computing platform. The event is ideal for developers, architects, and technical decision makers – as well as AWS partners, press, and analysts interested in cloud computing. The majority of the conference content is focused on technical deep dives for existing AWS customers, but there is also content covering new service announcements, overviews of existing services, and content for executive decision makers.
The next AWS re:Invent will be held October 5-9, 2015 at the Venetian Hotel in Las Vegas, NV. For more information click here: AWS re:Invent 2015
In an effort to simplify the Reserved Instances (RI) model, AWS announced yesterday a change in the model based on customer feedback and purchasing patterns.
AWS will move from three types of RIs – Heavy, Medium, and Light Utilization – to a single type with three payment options. All continue to provide capacity assurance and discounts compared to On-Demand prices.
The three new payment options give you flexibility to pay for the entire RI upfront, a portion of the RI upfront and a portion over the term, or nothing upfront and the entire RI over the course of the term.
What does this mean for you? These changes will really benefit predictable workloads that run more than 30% of the time. In cases where usage is less consistent, it may be better for companies to stick with On-Demand rates. We’ve developed some related research on usage trends. Meanwhile, our role as a top AWS partner continues to be simplifying procurement of all AWS products and services.
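To see why utilization is the deciding factor, here is an illustrative calculation with made-up rates (not actual AWS prices): an RI is billed for every hour of the term whether the instance runs or not, so it wins once the instance runs more than the ratio of the RI’s effective hourly rate to the On-Demand rate.

```python
# Illustrative only: the rates below are made up, not actual AWS prices.
HOURS_PER_MONTH = 730

on_demand_rate = 0.28     # $/hour if you pay On-Demand
ri_effective_rate = 0.09  # $/hour for an RI, with any upfront payment
                          # amortized over the term

for utilization in (0.25, 0.50, 0.75, 1.00):
    on_demand = on_demand_rate * HOURS_PER_MONTH * utilization
    reserved = ri_effective_rate * HOURS_PER_MONTH  # owed whether running or not
    winner = "RI" if reserved < on_demand else "On-Demand"
    print(f"{utilization:4.0%} utilization: On-Demand ${on_demand:7.2f} "
          f"vs RI ${reserved:7.2f} -> {winner}")

# The RI pays off once utilization exceeds the ratio of the two rates.
print(f"Break-even at roughly {ri_effective_rate / on_demand_rate:.0%} utilization")
```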
Download the AWS Usage infographic
Read more about the new RI model.
Check out our new AWS Scorecard for a look at what companies typically use for their cloud services, taken from AWS usage trends among 2nd Watch customers for July–October 2014.
Download the Scorecard
Organizations using Amazon EC2 typically break down into the following instance-size percentages:
- 38% use Small instances
- 19% use Medium
- 15% use XLarge
- The very large instances (2XLarge, 4XLarge, and 8XLarge) and the very small (Micro) account for only 27% collectively.
Among our customers:
- 94% use Amazon’s Simple Storage Service (S3)
- 66% use Amazon’s Simple Notification Service (SNS) for push messaging
- 41% use Amazon’s Relational Database Service (RDS) to set up, operate, and scale a relational database in the cloud.
Around three-quarters of customers run Linux instances, with the remainder using Windows. However, Windows systems account for 31% of all computing hours, and more money is typically spent on Windows instances.