
Optimizing AWS Costs with Trusted Advisor

Moving to the cloud is one of the best decisions you can make for your business. The low startup costs, instant elasticity, and near-endless scalability have lured many organizations from traditional datacenters to the cloud. Although cloud startup costs are extremely low, the growing use of resources within an AWS account can slowly increase the cost of operating in the cloud over time.

One service AWS provides to help with watching the costs of an AWS environment is AWS Trusted Advisor. AWS touts the service as “your customized cloud expert” that helps you monitor resources according to best practices. Trusted Advisor runs in the background of your AWS account and gathers information on cost optimization, security, fault tolerance, and performance. You can review its findings proactively in the support console or set it up to notify you via a weekly email.

The Cost Optimization checks Trusted Advisor provides are Amazon EC2 Reserved Instance Optimization, Low Utilization Amazon EC2 Instances, Idle Load Balancers, Underutilized Amazon EBS Volumes, Unassociated Elastic IP Addresses, and Amazon RDS Idle DB Instances. For each check, Trusted Advisor reports one of four possible statuses: “No problems detected,” “Investigation recommended,” “Action recommended,” or “Not available.” Each status gives insight into how effectively you are running your account based on the best-practice rules the service applies. In the example below, AWS Trusted Advisor points out $1,892 of potential savings for this account.

[Screenshot: Trusted Advisor cost optimization summary]

Each of these notifications contributes to the total potential monthly savings. Here is one “Investigation recommended” notification from the same account. It says “3 of 4 DB instances appear to be Idle. Monthly savings of up to $101 are available by minimizing idle DB Instances.”

[Screenshot: RDS Idle DB Instances notification]

Clicking the drop-down button reveals more:

[Screenshot: Amazon RDS idle DB instance details]

The full display tells you exactly which resource in your account triggered the alert and even estimates the monthly savings if you were to make changes to that resource. In this case the three RDS instances are running in Oregon and Ireland. This particular check bases its alert on “Days since last connection,” which is extremely helpful: if there have been no connections to the database in 14+ days, there’s a good chance it isn’t being used at all. One of the best things about Trusted Advisor is that it breaks the overview down by service type and gives just enough information to be simple and useful. We didn’t have to log in to RDS and navigate to the Oregon or Ireland regions to find this information; it was all gathered by Trusted Advisor and presented in an easy-to-read format. Remember, not all of the information provided needs immediate attention, but it’s nice to have it readily available. Another great feature is that each notification can be downloaded as a Microsoft Excel spreadsheet, giving you even more control over the data the service provides.
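If you prefer to pull the same data programmatically rather than exporting it from the console, the check results are also exposed through the AWS Support API (which, like Trusted Advisor itself, requires Business or Enterprise support). Here is a minimal sketch in Python with boto3; filtering on the cost_optimizing category is just one way to slice the results.

```python
import boto3

# The AWS Support API (which exposes Trusted Advisor data) requires a
# Business or Enterprise support plan and lives in us-east-1.
support = boto3.client("support", region_name="us-east-1")

# List all Trusted Advisor checks and keep only the cost optimization ones.
checks = support.describe_trusted_advisor_checks(language="en")["checks"]
cost_checks = [c for c in checks if c["category"] == "cost_optimizing"]

for check in cost_checks:
    result = support.describe_trusted_advisor_check_result(
        checkId=check["id"], language="en"
    )["result"]
    status = result["status"]  # ok | warning | error | not_available
    flagged = result["resourcesSummary"]["resourcesFlagged"]
    print(f"{check['name']}: status={status}, flagged resources={flagged}")
```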

Armed with Trusted Advisor, you can keep a closer eye on your AWS resources and gain insight into optimizing costs on a regular basis. Trusted Advisor covers the major AWS services but is only available to accounts with Business or Enterprise-level support. Overall, it’s a very useful service for watching account costs and keeping an eye on possible red flags. It definitely doesn’t take the place of diligent implementation and monitoring of resources by a cloud engineer, but it can help with the process.

– Derek Baltazar, Senior Cloud Engineer


AWS Lowers Cloud Computing Costs AGAIN

Last week, AWS announced their 42nd price reduction since 2008. This significant cut impacts many of their most popular services, including EC2, S3, EMR, RDS and ElastiCache. The savings range from 10% to 65%, depending on the service you use. As you can see from the example below, this customer scenario results in savings of almost $150,000 annually, which represents a 36% savings on these services!

This major move not only helps existing AWS users but makes the value proposition of shifting from on-premises infrastructure to the AWS cloud even greater. If you are not on AWS now, contact us to learn how we can help you take advantage of this new pricing and everything AWS has to offer.

As an AWS Premier Consulting Partner, our mission is to get you migrated to and running efficiently in the Amazon Web Services (AWS) cloud. The journey to get into the AWS cloud can be complicated, but we’ll guide you along the way and take it from there, so you can concentrate on running your business rather than your IT infrastructure.

2nd Watch provides:

  • Fast and flawless enterprise-grade cloud migration
  • Cloud IT Operations Management that goes beyond basic infrastructure management
  • Cloud cost/usage tracking and analytics that help you control and allocate costs across your enterprise

[Chart: customer scenario showing annual savings from the AWS price reduction]


Increasing Your Cloud Footprint

The jump to the cloud can be a scary proposition. For an enterprise with systems deeply embedded in traditional infrastructure like back-office computer rooms and datacenters, the move to the cloud can be daunting. The thought of having all of your data in someone else’s hands can make some IT admins cringe. However, once you start looking into cloud technologies, you start seeing some great benefits, especially with providers like Amazon Web Services (AWS). The cloud can be cost-effective, elastic, scalable, flexible, and secure. That same IT admin cringing at the thought of their data in someone else’s hands may finally realize that AWS is a bit more secure than a computer rack sitting under an employee’s desk in a remote office. Once the decision is finally made to “try out” the cloud, the planning phase can begin.

Most of the time the biggest question is, “How do we start with the cloud?” The answer is to use a phased approach. By picking applications and workloads that are less mission-critical, you can try the newest cloud technologies with less risk. When deciding which workloads to move, ask yourself the following questions: Is there a business need for moving this workload to the cloud? Is the technology a natural fit for the cloud? What impact will this have on the business? If all those questions are suitably answered, your workloads will be successful in the cloud.

One great place to start is with archiving and backups. These workloads are important, but the data you’re dealing with is likely just a copy of data you already have, so moving it is considerably less risky. The easiest way to start with archives and backups is to try out S3 and Glacier. Many of the backup utilities you may already be using, like Symantec NetBackup and Veeam Backup & Replication, have cloud versions that can back up directly to AWS. This allows you to start using the cloud without changing much of your existing backup processes. By moving less critical workloads, you take the first steps in increasing your cloud footprint.
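Backup tools like those above handle the upload for you, but the underlying operation is just an S3 upload. As a rough illustration, here is a minimal Python/boto3 sketch; the bucket name and file path are hypothetical placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Upload a compressed backup archive to a (hypothetical) backup bucket.
s3.upload_file(
    Filename="/backups/nightly-backup.tar.gz",       # local backup archive
    Bucket="example-backup-bucket",                   # placeholder bucket name
    Key="backups/nightly-backup.tar.gz",
)
```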

Now that you have moved your backups to AWS using S3 and Glacier, what’s next? The next logical step is to try some of the other services AWS offers. Another workload that can often be moved to the cloud is Disaster Recovery (DR). DR is an area that lets you use more AWS services, such as VPC, EC2, EBS, RDS, Route 53 and ELB. DR is a perfect way to increase your cloud footprint because it allows you to reconstruct your current environment, which you should already be very familiar with, in the cloud. A Pilot Light DR solution is one type of DR solution commonly seen on AWS. In the Pilot Light scenario, the DR site has minimal systems and resources, with the core elements already configured to enable rapid recovery once a disaster happens. To build a Pilot Light DR solution you would create the AWS network infrastructure (VPC), deploy the core AWS building blocks needed for the minimal Pilot Light configuration (EC2, EBS, RDS, and ELBs), and determine the process for recovery (Route 53). When it is time for recovery, all the other components can be quickly provisioned to give you a fully working environment. By moving DR to the cloud you’ve increased your cloud footprint even more and are on your way to cloud domination!
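To make the recovery step a little more concrete, here is a hedged Python/boto3 sketch of what “turning on” a Pilot Light environment might look like: launch application servers from pre-built AMIs in the DR region, then point DNS at a load balancer created ahead of time. Every ID, name, and hostname below is a hypothetical placeholder.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")   # assumed DR region
route53 = boto3.client("route53")

# 1. Launch application servers from AMIs kept up to date in the DR region.
reservation = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",                  # placeholder AMI ID
    InstanceType="m3.large",
    MinCount=2,
    MaxCount=2,
    SubnetId="subnet-0123456789abcdef0",              # subnet in the pre-built DR VPC
)
instance_ids = [i["InstanceId"] for i in reservation["Instances"]]
ec2.get_waiter("instance_running").wait(InstanceIds=instance_ids)

# 2. Point DNS at the recovered environment (an ELB created ahead of time).
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABC",                    # placeholder hosted zone ID
    ChangeBatch={
        "Comment": "Fail over to the DR environment",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "CNAME",
                "TTL": 60,
                "ResourceRecords": [
                    {"Value": "dr-elb-123456.us-west-2.elb.amazonaws.com"}
                ],
            },
        }],
    },
)
```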

The next logical step is to move Test and Dev environments into the cloud. Here you can get creative with the way you use AWS technologies. When building systems on AWS, make sure to follow the architecting best practices: designing for failure means nothing will fail, decouple your components, take advantage of elasticity, build security into every layer, think parallel, and don’t fear constraints! Start with a proof of concept (POC) in the development environment, and use AWS reference architectures to aid in the learning and planning process. Next, deploy your legacy application in the new environment and migrate data. The POC is not complete until you validate that it works and performs to your expectations. Once you get to this point, you can reevaluate the build and optimize it to the exact specifications needed. Finally, you’re one step closer to deploying actual production workloads to the cloud!

Production workloads are obviously the most important, but with the phased approach you’ve taken to increase your cloud footprint, it’s not that far of a jump from the other workloads you already have running in AWS. Some of the important things to remember to be successful with AWS include being aware of the rapid pace of the technology (this includes improved services and price drops), remembering that security is your responsibility as well as Amazon’s, and understanding that there isn’t a one-size-fits-all solution. Lastly, all workloads you implement in the cloud should still have the same stringent security and comprehensive monitoring you would apply to any of your on-premises systems.

Overall, a phased approach is a great way to start using AWS. Start with simple services and traditional workloads that are a natural fit for AWS (e.g. backups and archiving). Next, explore other AWS services by building out environments that are familiar to you (e.g. DR). Finally, experiment with POCs and the entire gamut of AWS to benefit from more efficient production operations. Like many new technologies, cloud adoption takes time. By increasing your cloud footprint over time you can set expectations for cloud technologies in your enterprise and make the cloud a more comfortable proposition for all.

-Derek Baltazar, Senior Cloud Engineer


Enabling Growth and Watching the Price

One of the main differentiators between traditional on-premises data centers and cloud computing with AWS is the speed at which businesses can scale their environment. So often in enterprise environments, IT and the business struggle to have adequate capacity when they need it. Facilities run out of power and cooling, vendors cannot provide systems fast enough or the same type of system is no longer available, and business needs sometimes come without warning. AWS scales out to meet these demands in every area.

Compute capacity can be expanded, often automatically with Auto Scaling groups, which add server instances as demand dictates. With Auto Scaling groups, increased demand on the environment causes more systems to come online. Even without Auto Scaling, systems can be cloned with Amazon Machine Images (AMIs) and started to meet capacity, expand to a new region or geography, or even be shared with a business partner to move collaboration forward.
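For example, an Auto Scaling group can be defined with a couple of API calls. The Python/boto3 sketch below uses placeholder AMI, subnet, and group names; in practice you would also attach scaling policies tied to CloudWatch alarms so the group grows and shrinks on its own.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# A launch configuration describes the AMI and instance type to clone from.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc",                 # placeholder name
    ImageId="ami-0123456789abcdef0",                  # placeholder AMI ID
    InstanceType="m3.medium",
)

# The Auto Scaling group keeps the fleet between the min and max bounds,
# adding or retiring instances as demand dictates.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-abcdef0123456789a",
)
```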

Beyond compute capacity, storage capacity is also just a few mouse clicks (or less) away from business needs. With Amazon S3, storage capacity is allocated dynamically as it is used; customers do not need to do anything more than add content, which is far easier than adding disk arrays! Elastic Block Store (EBS) volumes can be added as quickly as compute instances, attached to live instances, or replicated across an environment as capacity demands.
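As a quick illustration, creating and attaching an EBS volume to a running instance takes only a couple of API calls. This Python/boto3 sketch uses placeholder IDs and an assumed Availability Zone.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a new 100 GiB volume in the same AZ as the target instance.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",     # must match the instance's AZ
    Size=100,                          # GiB
    VolumeType="gp2",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach the volume to a running instance as a new block device.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)
```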

Growth is great, and we’ve written a great deal before about how to take advantage of the elastic nature of AWS, but what about the second part of the title? Price! It’s no secret that as customers use more AWS resources, the price increases. The more you use, the more you pay; simple. The differentiator comes into play with that same elastic nature: when demand drops, resources can be released and costs saved. Auto Scaling can retire instances as easily as it adds them, storage can be removed when no longer needed, and as you become more proficient in AWS, bills can actually shrink. (Of course, 2nd Watch Managed Services can also help with that proficiency!) With traditional data centers, once resources are purchased, you pay the price (often a large one). With the cloud, resources can be purchased as needed, at just a fraction of the price.

IT wins and business wins – enterprise level computing at its best!

-Keith Homewood, Cloud Architect


AWS S3-Glacier Lifecycle Management

Not long ago, 2nd Watch published an article on Amazon Glacier, in which Caleb provides a great primer on the capabilities of Glacier and its cost benefits. Now that he’s taken the time to explain what it is, let’s talk about possible use cases for Glacier and how to avoid some of the pitfalls. As Amazon says, “Amazon Glacier is optimized for data that is infrequently accessed and for which retrieval times of several hours are suitable.” Backups immediately come to mind, but most AWS customers handle those with EBS snapshots, which can restore in minutes, while a Glacier recall can take hours. Rather than looking at the obvious, consider these use cases for Glacier archival storage: compliance (regulatory or internal process), conversion of paper archives, and application retirement.

Compliance often forces organizations to retain records and backups for years; customers often mention a seven-year retention policy based on regulatory compliance. In seven years, a traditional (on-premises) server can be replaced at least once, operating systems are upgraded several times, applications are upgraded or modified, and backup hardware and software have changed. Add to that all the media that would need to be replaced or upgraded, and you have every IT department’s nightmare: needing to either maintain old tape hardware or convert all the old backup tapes to the new hardware format (and hope too many haven’t degraded over the years). Glacier removes the need to worry about the hardware and the media, and the storage fees (currently 1¢ per GB/month in US-East) are tiny compared to the cost of media and storage on-premises. Upload your backup file(s) to S3, set up a lifecycle policy, and you have greatly simplified your archival process while maintaining regulatory compliance.

So how do customers create these lifecycle policies so their data automatically moves to Glacier? From the AWS Management Console, once you have an S3 bucket there is a property called ‘Lifecycle’ that can manage the migration to Glacier (and deletion as well, if desired). Add a rule (or rules) to the S3 bucket that migrates files based on a filename prefix, how long since their creation date, or how long from an effective date (perhaps one day from the current date for things you want to move directly to Glacier). For the example above, customers might take backup files, move them to S3, have them transition to Glacier after 30 days, and delete them after seven years.

[Screenshot: S3 lifecycle rule configuration]
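The same rule can be created programmatically. Here is a minimal Python/boto3 sketch that transitions objects under a hypothetical “backups/” prefix to Glacier after 30 days and expires them after roughly seven years; the bucket name is a placeholder.

```python
import boto3

s3 = boto3.client("s3")

# Transition objects under the "backups/" prefix to Glacier after 30 days
# and delete them after 7 years (roughly 2,555 days).
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",               # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire-backups",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},
        }]
    },
)
```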

Before we go too far and set up lifecycles, however, one major point should be highlighted: Amazon charges customers based on GB/month stored in Glacier plus a one-time fee for each file moved from S3 to Glacier. Moving a terabyte of data from S3 to Glacier could cost little more than $10/month in storage fees; however, if that data is made up of 1 KB log files, the one-time fee for that migration can be more than $50,000! While this is an extreme example, consider data management before archiving. If at all possible, compress the files into a single archive (zip/tar/rar), upload those compressed files to S3, and then archive to Glacier.
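As a rough sketch of that advice, the Python example below bundles a directory of small log files into a single compressed archive before uploading it to S3, so the lifecycle transition to Glacier is paid once per archive rather than once per file. The paths and bucket name are hypothetical.

```python
import tarfile
import boto3

# Bundle many small log files into one compressed archive so Glacier's
# per-object transition fee is paid once instead of once per file.
archive_name = "/tmp/logs-archive.tar.gz"
with tarfile.open(archive_name, "w:gz") as tar:
    tar.add("/var/log/app/", arcname="app-logs")      # placeholder log directory

# Upload the single archive to S3, where a lifecycle rule moves it to Glacier.
s3 = boto3.client("s3")
s3.upload_file(archive_name, "example-backup-bucket", "archives/logs-archive.tar.gz")
```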

-Keith Homewood, Cloud Architect
