
Optimizing AWS Costs with Trusted Advisor

Moving to the cloud is one of the best decisions you can make for your business. Low startup costs, instant elasticity, and near-endless scalability have lured many organizations from traditional datacenters to the cloud. Although those startup costs are extremely low, the burgeoning use of resources within an AWS account can slowly increase the cost of operating in the cloud over time.

One service AWS provides to help you watch the costs of an AWS environment is AWS Trusted Advisor.  AWS touts the service as “your customized cloud expert,” helping you monitor resources according to best practices.  Trusted Advisor runs in the background of your AWS account and gathers information on cost optimization, security, fault tolerance, and performance.  It can be accessed on demand through the Support console or can be set up to notify you via a weekly email.

The types of Trusted Advisor notifications available for Cost Optimization are Amazon EC2 Reserved Instance Optimization, Low Utilization Amazon EC2 Instances, Idle Load Balancers, Underutilized Amazon EBS Volumes, Unassociated Elastic IP Addresses, and RDS Idle DB Instances.  For each of these checks, Trusted Advisor reports one of four possible statuses: “No problems detected,” “Investigation recommended,” “Action recommended,” or “Not available.”  Each status gives insight into how effectively you are running your account based on the best-practice algorithm the service uses.  In the example below, AWS Trusted Advisor points out $1,892 of potential savings for this account.

[Image: Trusted Advisor Cost Optimization summary showing $1,892 in potential monthly savings]

Each of these notifications contributes to the total potential monthly savings.  Here is one “Investigation recommended” notification from the same account: “3 of 4 DB instances appear to be Idle. Monthly savings of up to $101 are available by minimizing idle DB Instances.”

[Image: the collapsed RDS Idle DB Instances notification]

Clicking the drop-down button reveals more:

[Image: expanded Amazon RDS Idle DB Instances detail]

The full display tells you exactly which resource in your account is causing the alert and even gives you the estimated monthly savings if you were to make changes to that resource.  In this case the three RDS instances are running in Oregon and Ireland.  This particular check bases the alert on “Days since last connection,” which is extremely helpful: if there have been no connections to the database in 14+ days, there’s a good chance it isn’t being used at all.  One of the best things about Trusted Advisor is that it breaks the overview down by service type and gives just enough information to be simple and useful.  We didn’t have to log in to RDS and navigate to the Oregon or Ireland regions to find this information; it was all gathered by Trusted Advisor and presented in an easy-to-read format.  Remember, not all of the information provided needs immediate attention, but it’s nice to have it readily available.  Another great feature is that each notification can be downloaded as a Microsoft Excel spreadsheet, giving you even more control over the data the service provides.

Armed with Trusted Advisor you can keep a closer eye on your AWS resources and gain insight into optimizing costs on a regular basis.  Trusted Advisor covers the major AWS services but is only available to accounts with Business or Enterprise-level support.  Overall, it’s a very useful service for watching account costs and keeping an eye on possible red flags in an account.  It definitely doesn’t take the place of diligent implementation and monitoring of resources by a cloud engineer, but it can help with the process.

– Derek Baltazar, Senior Cloud Engineer


AWS Lowers Cloud Computing Costs AGAIN

Last week, AWS announced its 42nd price reduction since 2008. This significant cut impacts many of the most popular services, including EC2, S3, EMR, RDS, and ElastiCache, with savings ranging from 10% to 65% depending on the service you use.  As you can see from the example below, this customer scenario results in savings of almost $150,000 annually, a 36% reduction on these services!

This major move not only helps existing AWS users but makes the value proposition of shifting from on-premises infrastructure to the AWS cloud even greater.  If you are not on AWS now, contact us to learn how we can help you take advantage of this new pricing and everything AWS has to offer.

As an AWS Premier Consulting Partner, our mission is to get you migrated to and running efficiently in the Amazon Web Services (AWS) cloud. The journey to get into the AWS cloud can be complicated, but we’ll guide you along the way and take it from there, so you can concentrate on running your business rather than your IT infrastructure.

2nd Watch provides:

  • Fast and Flawless enterprise-grade cloud migration
  • Cloud IT Operations Management that goes beyond basic infrastructure management
  • Cloud cost/usage tracking and analytics that help you control and allocate costs across your enterprise

[Image: example customer savings from the AWS price reduction]


Increasing Your Cloud Footprint

The jump to the cloud can be a scary proposition.  For an enterprise with systems deeply embedded in traditional infrastructure, such as back-office computer rooms and datacenters, the move to the cloud can be daunting. The thought of having all of your data in someone else’s hands can make some IT admins cringe.  However, once you start looking into cloud technologies you start seeing some of the great benefits, especially with providers like Amazon Web Services (AWS).  The cloud can be cost-effective, elastic, scalable, flexible, and secure.  That same IT admin cringing at the thought of their data in someone else’s hands may finally realize that AWS is a bit more secure than a computer rack sitting under an employee’s desk in a remote office.  Once the decision is finally made to “try out” the cloud, the planning phase can begin.

Most of the time the biggest question is, “How do we start with the cloud?”  The answer is to use a phased approach.  By picking applications and workloads that are less mission-critical, you can try the newest cloud technologies with less risk.  When deciding which workloads to move, ask yourself the following questions: Is there a business need for moving this workload to the cloud?  Is the technology a natural fit for the cloud?  What impact will this have on the business? If all of those questions have satisfactory answers, the workload is a good candidate for the cloud.

One great place to start is with archiving and backups.  These workloads are important, but the data you’re dealing with is likely just a copy of data you already have, so moving it is considerably less risky.  The easiest way to start with archives and backups is to try out S3 and Glacier.  Many of the backup utilities you may already be using, such as Symantec NetBackup and Veeam Backup & Replication, have cloud versions that can back up directly to AWS, allowing you to start using the cloud without changing much of your embedded backup processes.  By moving these less critical workloads you are taking the first steps in increasing your cloud footprint.
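As a concrete example, archived backups don’t need to sit in S3’s standard storage class forever: a lifecycle rule can automatically move objects to Glacier after a set number of days.  Below is a minimal sketch of such a rule, expressed here as a CloudFormation-style resource for concreteness (the bucket’s logical name, the rule ID, and the 30-day window are purely illustrative, and the same rule can just as easily be configured through the S3 console):

{
  "Resources" : {
    "BackupBucket" : {
      "Type" : "AWS::S3::Bucket",
      "Properties" : {
        "LifecycleConfiguration" : {
          "Rules" : [
            {
              "Id" : "ArchiveBackupsToGlacier",
              "Status" : "Enabled",
              "Transitions" : [
                { "StorageClass" : "GLACIER", "TransitionInDays" : 30 }
              ]
            }
          ]
        }
      }
    }
  }
}

Once objects age past the transition window they are archived to Glacier automatically, with no change to the backup software writing into the bucket.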

Now that you have moved your backups to AWS using S3 and Glacier, what’s next?  The next logical step would be to try some of the other services AWS offers.  Another workload that can often be moved to the cloud is Disaster Recovery.   DR is an area that will let you use more AWS services, like VPC, EC2, EBS, RDS, Route53, and ELBs.  DR is a perfect way to increase your cloud footprint because it allows you to reconstruct your current environment, which you should already be very familiar with, in the cloud.  A Pilot Light DR solution is one type of DR solution commonly seen in AWS.  In the Pilot Light scenario the DR site has minimal systems and resources, with the core elements already configured to enable rapid recovery once a disaster happens.  To build a Pilot Light DR solution you would create the AWS network infrastructure (VPC), deploy the core AWS building blocks needed for the minimal Pilot Light configuration (EC2, EBS, RDS, and ELBs), and determine the process for recovery (Route53).  When it is time to recover, all of the other components can be quickly provisioned to give you a fully working environment. By moving DR to the cloud you’ve increased your cloud footprint even more and are on your way to cloud domination!
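To make that concrete, here is a rough sketch (nothing more) of what declaring the network plus a single always-on core instance might look like in a CloudFormation template; every logical name, CIDR range, AMI ID, key pair, and instance type below is a placeholder:

{
  "Description" : "Pilot Light DR skeleton: minimal network plus one always-on core component (illustrative only)",
  "Resources" : {
    "DrVpc" : {
      "Type" : "AWS::EC2::VPC",
      "Properties" : { "CidrBlock" : "10.10.0.0/16" }
    },
    "DrSubnet" : {
      "Type" : "AWS::EC2::Subnet",
      "Properties" : {
        "VpcId" : { "Ref" : "DrVpc" },
        "CidrBlock" : "10.10.1.0/24"
      }
    },
    "PilotLightDb" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "SubnetId" : { "Ref" : "DrSubnet" },
        "InstanceType" : "t2.micro",
        "ImageId" : "ami-xxxxxxxx",
        "KeyName" : "dr-keypair"
      }
    }
  }
}

In a real Pilot Light design the always-on piece is typically a replicating database, with the web and application tiers provisioned only when a disaster is declared.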

After DR, a good next step is to move test and development environments into the cloud. Here you can get creative with the way you use the AWS technologies.  When building systems on AWS, make sure to follow the architecting best practices: design for failure (so that nothing fails), decouple your components, take advantage of elasticity, build security into every layer, think parallel, and don’t fear constraints! Start with a proof-of-concept (POC) in the development environment, and use AWS reference architectures to aid in the learning and planning process.  Next, deploy your legacy application in the new environment and migrate your data.  The POC is not complete until you validate that it works and performance meets your expectations.  Once you get to this point, you can re-evaluate the build and optimize it to the exact specifications needed. Finally, you’re one step closer to deploying actual production workloads to the cloud!

Production workloads are obviously the most important, but with the phased approach you’ve taken to increase your cloud footprint, it’s not that far a jump from the other workloads you now have running in AWS.   To be successful with AWS, remember that the technology moves at a rapid pace (including improved services and price drops), that security is your responsibility as well as Amazon’s, and that there isn’t a one-size-fits-all solution.  Lastly, all workloads you implement in the cloud should still have the same stringent security and comprehensive monitoring you would apply to any of your on-premises systems.

Overall, a phased approach is a great way to start using AWS.  Start with simple services and traditional workloads that are a natural fit for AWS (e.g., backups and archiving).  Next, explore other AWS services by building out environments that are familiar to you (e.g., DR). Finally, experiment with POCs and the entire gamut of AWS services to benefit from more efficient production operations.  Like many new technologies, the cloud takes time to adopt. By increasing your cloud footprint over time you can set expectations for cloud technologies in your enterprise and make the move a more comfortable proposition for all.

-Derek Baltazar, Senior Cloud Engineer


How secure is secure?

There have been numerous articles, blogs, and whitepapers about the security of the cloud as a business solution.  Amazon Web Services has a site devoted to extolling its security virtues, and several sites devote themselves entirely to the ins and outs of AWS security.  So rather than try to tell you about each and every security feature of AWS and convince you how secure the environment can be, my goal is to share a real-world example of how security can be improved by moving from on-premises datacenters to AWS.

Many AWS implementations are used for hosting web applications, most of which are Internet accessible.  Obviously, if your environment is for internal use only you can lock down security even further, but for the purposes of this exercise we’re assuming Internet-facing web applications.  The inherent risk with any Internet-accessible application, of course, is that the same accessibility gives hackers and malicious users a path into your environment, along with honest users whose machines are infected with malware, viruses, or Trojans.

As with on-premises and colocation-based web farms, AWS offers the standard security practice of isolating customers from one another, so that if one customer experiences a security breach, all other customers remain secure.  And of course, AWS Security Groups function like traditional firewalls, allowing traffic only through permitted ports to and from specific destinations and sources.  AWS moves ahead of traditional datacenters with Security Groups and Network ACLs by offering more flexibility to respond to attacks.  Consider the case of a web farm with components suspected of being compromised: AWS Security Groups can be created in seconds to isolate the suspected components from the rest of the network.  In a traditional datacenter environment, isolating those components may require complex network changes to move them onto separate networks so the infection cannot spread, traffic that AWS already blocks by default unless a Security Group rule explicitly allows it.
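For example, a “quarantine” Security Group that permits nothing inbound except SSH from a single administrator address can be declared in seconds and applied to the suspect instances.  Here is a minimal sketch, expressed as a CloudFormation-style resource for concreteness (the VPC ID and the admin CIDR are placeholders):

{
  "Resources" : {
    "QuarantineSecurityGroup" : {
      "Type" : "AWS::EC2::SecurityGroup",
      "Properties" : {
        "GroupDescription" : "Isolate suspected-compromised instances",
        "VpcId" : "vpc-xxxxxxxx",
        "SecurityGroupIngress" : [
          {
            "IpProtocol" : "tcp",
            "FromPort" : 22,
            "ToPort" : 22,
            "CidrIp" : "203.0.113.10/32"
          }
        ]
      }
    }
  }
}

Moving the suspect instances into this group (and out of their normal groups) cuts them off from everything except the administrator who needs to investigate.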

AWS often talks about scalability, the ability to grow and shrink the environment to meet demand.  That capability extends to security features as well!  Need another firewall? Just add another Security Group; there’s no need to install another device.  Adding another subnet, VPN, or firewall can be done in minutes with no action required from on-premises staff.  No more waiting while network cables are moved, hardware is installed, or devices are physically reconfigured when you need security changes.

Finally, no matter how secure the environment, no security plan is complete without a remediation plan.  AWS has tools that provide remediation with little to no downtime.  A standard practice for AWS environments is to take regular snapshots of EC2 instances (servers).  These snapshots can be used to re-create a compromised or non-functional component in minutes, rather than going through the lengthy restore process for a traditional server.  Additionally, 2nd Watch recommends taking an initial image of each component so that in the event of a failure there is a fallback point to a known-good configuration.

So how secure is secure?  With the ability to respond faster, scale as necessary, and recover in minutes, the Amazon cloud is pretty darn secure!  And of course, this is only the tip of the iceberg for AWS cloud security; more will follow for the rest of December here on our blog, and please check out the official link above for Amazon’s Security Center and whitepapers.

-Keith Homewood, Cloud Architect


An Introduction to CloudFormation

One of the most powerful services in the AWS portfolio is CloudFormation. It provides the ability to programmatically construct and manage a group of AWS resources in a predictable way.  With CloudFormation, provisioning an AWS environment does not have to be done through individual CLI commands or by clicking through the console; it can be driven by a JSON (JavaScript Object Notation) formatted text file, called a CloudFormation template, that builds anywhere from a few AWS resources to an entire environment automatically.  CloudFormation works with many AWS resource types, from network infrastructure (VPCs, subnets, routing tables, gateways, and network ACLs), to compute (EC2 and Auto Scaling), to database (RDS and ElastiCache), to storage (S3) components.  You can see the full list here.

The general JSON structure looks like the following:

[Image: the general structure of a CloudFormation template]
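In text form, that general structure, with the body of each section left empty, is roughly:

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "A short description of what the template does",
  "Parameters" : {
  },
  "Mappings" : {
  },
  "Resources" : {
  },
  "Outputs" : {
  }
}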

A template has a total of six main sections: AWSTemplateFormatVersion, Description, Parameters, Mappings, Resources, and Outputs.   Of these six sections, only “Resources” is required; however, it is always a good idea to include others such as Description or Parameters. Each AWS resource type also has numerous properties that extend the functionality of that particular resource.

Breaking Down a CloudFormation Template

Here is a simple CloudFormation template provided by AWS.  It creates a single EC2 instance:

[Image: sample CloudFormation template that creates a single EC2 instance]
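Reconstructed as text from the walkthrough that follows (the exact AWS sample may differ slightly in names and formatting), the template looks roughly like this:

{
  "Description" : "Create an EC2 instance running the Amazon Linux 32 bit AMI.",
  "Parameters" : {
    "KeyPair" : {
      "Description" : "The EC2 Key Pair to allow SSH access to the instance",
      "Type" : "String"
    }
  },
  "Resources" : {
    "Ec2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "KeyName" : { "Ref" : "KeyPair" },
        "ImageId" : "ami-3b355a52"
      }
    }
  },
  "Outputs" : {
    "InstanceId" : {
      "Description" : "The InstanceId of the newly created EC2 instance",
      "Value" : { "Ref" : "Ec2Instance" }
    }
  }
}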

This template uses the Description, Parameters, Resources, and Outputs template sections.  The Description section is just a short description of what the template does; in this case it says the template will “Create an EC2 instance running the Amazon Linux 32 bit AMI.”  The next section, Parameters, allows a string value called KeyPair to be passed to the stack at launch time.  During stack launch from the console you would see the following dialogue box, where you specify all of the editable parameters for that particular launch of the template; in this case there is only one parameter, named KeyPair:

[Image: the Specify Parameters dialogue showing the KeyPair parameter]

Notice how the KeyPair parameter is available for you to enter a string, along with the description of what you should type in the box: “The EC2 Key Pair to allow SSH access to the instance.”  This would be an existing key pair in the us-east-1 region that you would use to access the instance once it’s launched.

Next, in the Resources section, “Ec2Instance” is defined as the name of the resource and given the AWS resource type “AWS::EC2::Instance”.  The resource type defines the kind of AWS resource the template will deploy at launch and lets you configure properties for that particular resource.  In this example only KeyName and ImageId are used; for the resource type “AWS::EC2::Instance“ there are several additional properties you can use in CloudFormation, and you can see the full list here.  Digging deeper, we see the KeyName value is a reference to the KeyPair parameter defined in the Parameters section of the template, which allows the instance the template creates to use the key pair we specified at launch.  The ImageId, ami-3b355a52, is an Amazon Linux 32 bit AMI in the us-east-1 region, which is why we have to specify a key pair that exists in that region.

Finally, the Outputs section allows you to return values to the console describing the specific resources that were created. In this example the only output defined is “InstanceId”, which is given both a description, “The InstanceId of the newly created EC2 instance”, and a value, { “Ref” : “Ec2Instance” }, a reference to the resource that was created.  As you can see in the picture below, the stack launched successfully and instance i-5362512b was created.

[Image: stack output showing the InstanceId of the newly created instance]

The Outputs section is especially useful for complex templates because it allows you to summarize in one location all of the pertinent information for your deployed stack.  For example, if you deployed dozens of machines in a complex SharePoint farm, you could use the Outputs section to show just the public-facing endpoint, helping you quickly identify the relevant information needed to get into the environment.
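A sketch of what that output might look like, assuming the farm’s load balancer is defined elsewhere in the template under the hypothetical logical name WebLoadBalancer:

{
  "Outputs" : {
    "SharePointEndpoint" : {
      "Description" : "Public-facing endpoint for the SharePoint farm",
      "Value" : { "Fn::GetAtt" : [ "WebLoadBalancer", "DNSName" ] }
    }
  }
}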

CloudFormation for Disaster Recovery

The fact that CloudFormation templates construct an AWS environment in a consistent and repeatable fashion makes them the perfect tool for Disaster Recovery (DR).  By configuring a CloudFormation template to contain all of your production resources, you can deploy the same set of resources in another AWS Availability Zone or another region entirely.  Thus, if one set of resources becomes unavailable in a disaster scenario, a quick launch of the CloudFormation template initializes a whole new stack of production-ready components.  Built an environment manually through the console and still want to take advantage of CloudFormation for DR? You can use the CloudFormer tool, which helps you construct a CloudFormation template from existing AWS resources; you can find more information here.  No matter how you construct your CloudFormation template, the final result is the same: a complete copy of your AWS environment in the form of a JSON-formatted document that can be deployed over and over.

Benefits of CloudFormation

The previous example is a very simple illustration of a CloudFormation template on AWS.

Here are some highlights:

  1. With a CloudFormation template you can create identical copies of your resources repeatedly, replacing complex deployment tasks that might otherwise take several hundred clicks in the console.
  2. All CloudFormation templates are simple JSON-structured files, so you can easily share them and work with them using your current source control processes and favorite editing tools.
  3. CloudFormation templates can start simple and grow over time, allowing even the most complex environments to be repeatedly deployed.  This makes them a great tool for DR.
  4. CloudFormation lets you customize the AWS resources it deploys through Parameters that are editable at launch time. For example, if you are deploying an Auto Scaling group of EC2 instances within a VPC, a Parameter can let the creator select which instance size is used for the stack (see the sketch after this list).
  5. It can be argued, but the best part about CloudFormation is that it’s free!
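As a sketch of that fourth point, an instance-size parameter might look like the following; the parameter name, default, and allowed values are only examples:

{
  "Parameters" : {
    "WebInstanceType" : {
      "Description" : "EC2 instance type for the web tier Auto Scaling group",
      "Type" : "String",
      "Default" : "m1.small",
      "AllowedValues" : [ "t1.micro", "m1.small", "m1.medium", "m1.large" ]
    }
  }
}

The launch configuration would then reference the value with { "Ref" : "WebInstanceType" }, so the same template can drive anything from a small test stack to a full-size production deployment.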

-Derek Baltazar, Senior Cloud Engineer
