
How to Shut Down Your Data Center in 2 Months: Part 2

2nd Watch has helped hundreds of companies shut down their data centers and migrate to AWS, taking advantage of the cost savings, elasticity and on-demand benefits of the cloud. Many of those companies required tight timeframes of only 1-3 months!

Learn how you can shut down your data center and migrate to AWS in only 2 months. Chris Nolan, Director of Engineering at 2nd Watch, discusses the type of technologies enterprises are using today to help make migrations go faster and what an enterprise should have ready prior to engaging in a migration to ensure it goes quickly and smoothly.

Check back here on our blog or on our YouTube channel for the next video in this 5-part series.


Cloud Analytics – What's in Your Cloud?

One of the things “everyone knows” about migrating to the Cloud is that it saves companies money. Now you don’t need all those expensive data centers with the very physical costs associated with them. So companies migrate to the Cloud and are so sure they will see their costs plummet… then they get their bill for Cloud usage and experience sticker shock. Typically, this is when our customers reengage 2nd Watch – they ask us why it costs so much, what they can do to decrease their costs, and of course everyone’s favorite – why didn’t you tell me it would be so much?

First, in order to know why you are spending so much, you need to analyze your environment. I’m not going to go into how Amazon bills and walk you through your entire bill in this blog post – perhaps that’s a topic for another day. What I do want to look into is how to see what you have in your Cloud.

Step one: tag it! Amazon gives you the ability to tag almost everything in your environment, including ELBs, which were added most recently. I highly recommend that my customers make use of this feature. Personally, whenever I create something, manually or programmatically, I add tags to identify what it is, why it’s there, and of course who is paying for it. Even in my sandbox environment, it’s a way to tell colleagues, “Don’t delete my stuff!” Programmatically, tags can be added through CloudFormation, Elastic Beanstalk, Auto Scaling, and the CLI, as well as third-party tools like Puppet and Chef. From a feature perspective, there are very few AWS components that don’t support tags, and more are constantly being added.
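As an illustration, here is roughly what tagging looks like when done through CloudFormation: tags are declared right on the resource in the template. The resource name, AMI ID, and tag values below are placeholders, not a real template.

```json
"WebServer": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "ImageId": "ami-12345678",
    "InstanceType": "t2.micro",
    "Tags": [
      {"Key": "Name",        "Value": "web-server-01"},
      {"Key": "Project",     "Value": "ProjectX"},
      {"Key": "Environment", "Value": "Development"},
      {"Key": "Owner",       "Value": "keith.homewood"}
    ]
  }
}
```

Every instance launched from the template carries those tags from the moment it exists, which is exactly what makes the reporting and automation discussed below possible.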

That’s all well and good, but how does this help analytics? Tagging is actually the basis for pretty much all analytics, and without it you have to work much harder for far less information. For example, I can tag EC2 instances to indicate applications, projects, or environments. I can then run reports that look for specific tags – how many EC2 instances are associated with Project X, and what are the instance types? What are the business applications using my various RDS instances? – and suddenly, when you get your bill, you have the ability to determine who is spending money in your organization and work with them on spending it smartly.
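To make that concrete, here is a minimal, self-contained sketch of a tag-based report. The instance records are made-up stand-ins for what the EC2 API would return; in practice you would build this list from a describe-instances call.

```python
# Illustrative tag-based reporting: given instance metadata of the kind the
# EC2 API returns (IDs, instance types, and tags), answer "how many EC2
# instances are associated with Project X, and what are the instance types?"
# The sample data below is made up for demonstration.
instances = [
    {"id": "i-0a1", "type": "m3.xlarge", "tags": {"Project": "X", "Environment": "Production"}},
    {"id": "i-0b2", "type": "t2.micro",  "tags": {"Project": "X", "Environment": "Development"}},
    {"id": "i-0c3", "type": "m3.large",  "tags": {"Project": "Y", "Environment": "Production"}},
]

def instances_for_project(instances, project):
    """Return (instance ID, instance type) pairs for a given Project tag."""
    return [(i["id"], i["type"]) for i in instances
            if i["tags"].get("Project") == project]

print(instances_for_project(instances, "X"))
```

The same filter-by-tag pattern works for any tag key – environment, cost center, application – which is why consistent tagging up front pays off when the bill arrives.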

Let’s take it a step further and talk about automation and intelligent Cloud management. If I tag instances properly, I can automate tasks to control my Cloud based on those tags. For example, maybe I’m a nice guy and don’t make my development team work weekends. I can set up a task to shut down any instance with an “Environment = Development” tag every Friday evening and start it again Monday morning. Maybe I want to have an application online only at month end. I can set up another task to schedule when it is online and offline. Tags give us the ability to see what we are paying for and the hooks to control that cost with automation.
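The selection step of that Friday-evening task might look like the following sketch. The instance list is hard-coded here for illustration; in a real scheduled job it would come from the EC2 API, and the resulting IDs would be passed to a stop-instances call.

```python
# A sketch of tag-driven scheduling logic: select which instances to stop
# for the weekend based on an "Environment" tag. Sample data only.
instances = [
    {"id": "i-0aa", "tags": {"Environment": "Development"}},
    {"id": "i-0bb", "tags": {"Environment": "Production"}},
    {"id": "i-0cc", "tags": {}},  # untagged instances are left alone
]

def ids_to_stop(instances, environment="Development"):
    """Return the IDs of instances whose Environment tag matches."""
    return [i["id"] for i in instances
            if i["tags"].get("Environment") == environment]

print(ids_to_stop(instances))  # the Friday-evening shutdown list
```

Note that untagged instances fall through untouched – a good argument for making tags mandatory, since anything without one escapes your automation.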

I would be remiss if I didn’t point out that tags are an important part of using some great 2nd Watch offerings that help manage your AWS spend. Please check out 2W Insight for more information and how to gain control over and visibility into your cloud spend.

-Keith Homewood, Cloud Architect


How to Shut Down Your Data Center in 2 Months: Part 1

2nd Watch has helped hundreds of companies shut down their data centers and migrate to AWS, taking advantage of the cost savings, elasticity and on-demand benefits of the cloud. Many of those companies required tight timeframes of only 1-3 months!

Learn how you can shut down your data center and migrate to AWS in only 2 months. Jason Foster, Practice Director South Region at 2nd Watch, discusses the obstacles and challenges associated with planning for application dependencies and techniques for managing logistics and the cadence of your project timeline.

Check back here on our blog or on our YouTube channel for the next video in this 5-part series.


Optimizing your AWS environment using Trusted Advisor (Part 2)

AWS provides an often-overlooked tool, available to accounts with “Business” or “Enterprise” level support, called Trusted Advisor (TA), which analyzes your current AWS resources for ways to improve your environment in the following categories:

  • Cost Optimization
  • Security
  • Performance
  • Fault Tolerance

It rigorously scours your AWS resources for inefficiencies, waste, potential capacity issues, best practices, security holes and much, much more, and it provides a very straightforward, easy-to-use interface for viewing the identified issues.

Trusted Advisor will do everything from detecting EC2 instances that are under-utilized (e.g. using an m3.xlarge for a low-traffic NAT instance), to detecting S3 buckets that are good candidates for fronting with a CloudFront distribution, to identifying Security Groups with wide-open access to one or more ports, and everything in between.

In Amazon’s own words:

“AWS Trusted Advisor inspects your AWS environment and makes recommendations for saving money, improving system performance and reliability, or closing security gaps. Since 2013, customers have viewed over 1.7 million best-practice recommendations in AWS Trusted Advisor in the categories of cost optimization, performance improvement, security, and fault tolerance, and they have realized over $300 million in estimated cost reductions. Currently, Trusted Advisor provides 37 checks; the most popular ones are Low Utilization Amazon EC2 Instances, Amazon EC2 Reserved Instances Optimization, AWS CloudTrail Logging, Amazon EBS Snapshots, and two security group configuration checks.”

This week (7/23/2014), AWS announced the release of the new Trusted Advisor Console.

Two new features of the TA console I found particularly noteworthy and useful are the Action Links and Access Management.

Action Links allow you to click a hyperlink next to an issue in the TA Console that redirects you to the appropriate place to take action on the issue. Pretty slick… it saves you time jumping between browser tabs or navigating to the correct console and menus. Action Links also take the guesswork out of hunting down the correct place if you aren’t that familiar with the AWS Console.

Access Management allows you to use AWS IAM (Identity and Access Management) credentials to control access to specific categories and checks within Trusted Advisor. This gives you the ability to have granular access control over which people in your organization can view and act on specific checks.
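As a rough sketch, a policy along the following lines would grant a user read-only access to Trusted Advisor data through the Support API. The exact action names for controlling individual console categories and checks should be confirmed against the IAM documentation; `support:DescribeTrustedAdvisor*` covers the describe calls used later in this post.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["support:DescribeTrustedAdvisor*"],
      "Resource": "*"
    }
  ]
}
```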

In addition to the console, Trusted Advisor also supports API access. And this wouldn’t be my AWS blog post without some kind of coding example using Python and the boto library. The following example code will print out a nicely formatted list of all the Trusted Advisor categories and each of the checks underneath them in alphabetical order.

#!/usr/bin/python
# List every Trusted Advisor category and its checks, alphabetically.
# Requires Business or Enterprise level support on the account.
from boto import connect_support

conn = connect_support()
# Fetch all checks (with English descriptions), sorted by category
ta_checks = sorted(conn.describe_trusted_advisor_checks('en')['checks'],
                   key=lambda check: check['category'])
for cat in sorted(set(x['category'] for x in ta_checks)):
    # Print the category as an underlined header
    print "\n%s\n%s" % (cat, '-' * len(cat))
    for check in sorted(ta_checks, key=lambda check: check['name']):
        if check['category'] == cat:
            print "  %s" % check['name']

Here is the resulting output (notice all 37 checks are accounted for):

cost_optimizing
---------------
Amazon EC2 Reserved Instances Optimization
Amazon RDS Idle DB Instances
Amazon Route 53 Latency Resource Record Sets
Idle Load Balancers
Low Utilization Amazon EC2 Instances
Unassociated Elastic IP Addresses
Underutilized Amazon EBS Volumes

fault_tolerance
---------------
Amazon EBS Snapshots
Amazon EC2 Availability Zone Balance
Amazon RDS Backups
Amazon RDS Multi-AZ
Amazon Route 53 Deleted Health Checks
Amazon Route 53 Failover Resource Record Sets
Amazon Route 53 High TTL Resource Record Sets
Amazon Route 53 Name Server Delegations
Amazon S3 Bucket Logging
Auto Scaling Group Health Check
Auto Scaling Group Resources
Load Balancer Optimization
VPN Tunnel Redundancy

performance
-----------
Amazon EBS Provisioned IOPS (SSD) Volume Attachment Configuration
Amazon Route 53 Alias Resource Record Sets
CloudFront Content Delivery Optimization
High Utilization Amazon EC2 Instances
Large Number of EC2 Security Group Rules Applied to an Instance
Large Number of Rules in an EC2 Security Group
Overutilized Amazon EBS Magnetic Volumes
Service Limits

security
--------
AWS CloudTrail Logging
Amazon RDS Security Group Access Risk
Amazon Route 53 MX and SPF Resource Record Sets
Amazon S3 Bucket Permissions
IAM Password Policy
IAM Use
MFA on Root Account
Security Groups - Specific Ports Unrestricted
Security Groups - Unrestricted Access

In addition to the metadata about categories and checks, actual TA check results and recommendations can also be pulled and refreshed using the API.
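A check result comes back as a nested structure of status fields and flagged resources. The sketch below summarizes a sample payload shaped like the `describe_trusted_advisor_check_result` response; the check ID and metadata values are made up, and real responses carry additional fields.

```python
# Summarize a Trusted Advisor check result payload. The dict below mimics
# the shape of the Support API response (illustrative values only).
result = {
    "result": {
        "checkId": "exampleCheckId",
        "status": "warning",
        "flaggedResources": [
            {"status": "warning", "metadata": ["us-east-1", "i-0a1", "m3.xlarge"]},
            {"status": "ok",      "metadata": ["us-east-1", "i-0b2", "t2.micro"]},
        ],
    }
}

resources = result["result"]["flaggedResources"]
# Anything not marked "ok" deserves a closer look
flagged = [r for r in resources if r["status"] != "ok"]
print("%d of %d resources flagged" % (len(flagged), len(resources)))
```

In a live script, the payload would come from a call such as `conn.describe_trusted_advisor_check_result(check_id, 'en')` using the same boto connection as the earlier example.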

While Trusted Advisor is a great tool to quickly scan your AWS environment for inefficiencies, waste, potential cost savings, basic security issues, and best practices, it isn’t a “silver bullet” solution. It takes a specific set of AWS architectural understanding, skills, and experience to look at an entire application stack or ecosystem and ensure it is properly designed, built, and/or tuned to best utilize AWS and its array of complex and powerful building blocks. This is where a company like 2nd Watch can add immense value by providing a true “top-down” cloud optimization. Our architects and engineers are the best in the business at ensuring applications and infrastructure are designed and implemented using AWS and cloud computing best practices, with a fierce attention to detail and focus on our customers’ success in their business and cloud initiatives.

-Ryan Kennedy, Senior Cloud Architect


Simplify Your AWS Deployments with a New Service Delivery Model

Have you seen our new service delivery model? Our Cloud Migration and Management Methodology, or CM3, is an innovative, catalogue-based approach to AWS deployments that reduces infrastructure implementation time for Enterprise businesses and significantly lowers risk.

CM3 uses repeatable “building blocks” of services that we select, deploy and manage for customers based on their specific requirements and needs, so it simplifies options and pricing for companies migrating applications and data to AWS. Each service is priced separately, creating a highly transparent offering to ease budgeting and speed approvals for companies needing to move quickly into the cloud.

Read More about CM3


2W Magazine Now Available

The summer issue of the 2W MGZN is now available! If you weren’t at the AWS Summit in NY to get your copy, you can still get all of your cloud computing news and market trends with the e-magazine, plus get pointers directly from the experts at AWS.

Contact the 2nd Watch team for more information.
