
Enabling Growth and Watching the Price

One of the main differentiators between traditional on-premises data centers and cloud computing on AWS is the speed at which businesses can scale their environments.  In enterprise environments, IT and the business so often struggle to have adequate capacity when they need it: facilities run out of power and cooling, vendors cannot deliver systems fast enough (or the same type of system is no longer available), and business needs sometimes arrive without warning.  AWS scales out to meet these demands in every area.

Compute capacity expands, often automatically, through Auto Scaling groups, which add server instances as demand dictates and bring more systems online as load on the environment grows.  Even without Auto Scaling, systems can be cloned with Amazon Machine Images (AMIs) and launched to meet capacity, expand into a new region or geography, or even be shared with a business partner to move collaboration forward.
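To make that concrete, here is a minimal boto3 sketch of both paths: cloning a running instance into an AMI and standing up an Auto Scaling group. The instance ID, launch template name, and subnet IDs are hypothetical placeholders, not values from the original post.

```python
# Sketch only: clone a system into an AMI, then create an Auto Scaling group.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Clone a running system into an AMI that can be launched elsewhere
# (another region, another account, or simply more capacity).
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # hypothetical instance
    Name="app-server-clone",
    Description="Golden image for scaling out",
)
print(image["ImageId"])

# Let an Auto Scaling group add and remove instances automatically.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # hypothetical subnets
)
```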

Beyond compute, storage capacity is a few mouse clicks (or less) away from business needs as well.  With Amazon S3, capacity is allocated dynamically as it is used; customers do not need to do anything more than add content, which is far easier than adding disk arrays!  With Elastic Block Store (EBS), volumes are added as quickly as compute instances are.  Storage can be created and attached to live instances, or replicated across an environment, as capacity demands.
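As a rough illustration (not part of the original post), this is what allocating and attaching an EBS volume on the fly might look like with boto3; the Availability Zone, instance ID, and device name are assumptions.

```python
# Sketch only: allocate a new EBS volume and attach it to a running instance.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate a new 100 GiB volume...
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp2",
)

# ...wait for it to become available, then attach it without downtime.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance
    Device="/dev/sdf",
)
```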

Growth is great, and we’ve written a great deal about how to take advantage of the elastic nature of AWS before, but what about the second part of the title?  Price!  It’s no secret that as customers use more AWS resources, the price increases.  The more you use, the more you pay; simple.  The differentiator comes from that same elasticity: when demand drops, resources can be released and costs saved.  Auto Scaling can retire instances as easily as it adds them, storage can be removed when no longer needed, and as you become more proficient in AWS, your bill can actually shrink.  (Of course, 2nd Watch Managed Services can also help with that proficiency!)  With traditional data centers, once resources are purchased, you pay the price (often a large one). With the cloud, resources can be purchased as needed, at just a fraction of the price.
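One way to realize those savings in practice is to schedule capacity down during quiet hours. The sketch below uses a hypothetical Auto Scaling group and illustrative schedules; it is one possible approach, not a prescription.

```python
# Sketch only: scheduled actions that shrink an Auto Scaling group at night
# and restore normal capacity in the morning (times in UTC).
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Scale down to a single instance every evening at 8 PM...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="evening-scale-in",
    Recurrence="0 20 * * *",
    MinSize=1,
    DesiredCapacity=1,
)

# ...and back up to normal capacity at 6 AM.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="morning-scale-out",
    Recurrence="0 6 * * *",
    MinSize=2,
    DesiredCapacity=4,
)
```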

IT wins and business wins – enterprise level computing at its best!

-Keith Homewood, Cloud Architect


What is CloudTrail?

Amazon Web Services™ (AWS) released a new service at re:Invent a few weeks ago that will have operations and security managers smiling.  CloudTrail is a web service that records AWS API calls and stores the logs in S3.  This gives organizations the visibility into their AWS infrastructure they need to maintain proper governance over changes to their environment.
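At its simplest, turning CloudTrail on amounts to creating a trail that delivers logs to an S3 bucket and starting logging. The boto3 sketch below assumes a hypothetical bucket that already grants CloudTrail permission to write to it.

```python
# Sketch only: create a trail, start logging, and peek at recent API activity.
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="example-cloudtrail-logs",   # hypothetical bucket
)
cloudtrail.start_logging(Name="org-audit-trail")

# Recent API activity can also be reviewed directly from the service.
events = cloudtrail.lookup_events(MaxResults=10)
for event in events["Events"]:
    print(event["EventName"], event["EventTime"])
```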

2nd Watch was pleased to announce support for CloudTrail with the launch of our 2W Atlas product.  2W Atlas organizes and visualizes AWS resources and output data.  Enterprise organizations need tools and services built for the cloud to properly manage these new architectures, and 2W Atlas gives their divisions and business units a tool to organize and manage CloudTrail data for their individual groups.

2nd Watch is committed to providing enterprise organizations with the expertise and tools to make the cloud work for them.  The tight integration 2nd Watch has developed between CloudTrail and 2W Atlas is further proof of our expertise in delivering the enterprise solutions our customers demand.

To learn more about 2W Atlas or CloudTrail, Contact Us and let us know how we can help.

-Matt Whitney, Sales Executive


Disaster Recovery on Amazon Web Services

After a seven-year career at Cisco, I am thrilled to be working with 2nd Watch to help make the cloud work for our customers.  Disaster recovery is a great use case for companies looking for a quick cloud win.  Traditional on-premises backup and disaster recovery technologies are complex and can require major capital investments.  I have seen many emails from IT management to senior management explaining the risk to the business of not spending money on backup/DR.  The costs associated with on-premises DR solutions are often pushed to next year’s budget and always seem to get cut midway through the year.  And when the business does invest in backup/DR, countless corners are cut to squeeze acceptable reliability and performance out of a limited budget.

The public cloud is a great resource for addressing the requirements of backup and disaster recovery.  Organizations can avoid the sunk costs of data center infrastructure (power, cooling, flooring, etc.) while having all of the performance and resources available for their growing needs.  Atlanta-based What’s Up Interactive saved over $1 million with their AWS DR solution (case study here).

I will highlight a few of the top benefits any company can expect when leveraging AWS for their disaster recovery project.

1. Eliminate costly tape backups that require transporting, storing, and retrieving tape media.  They are replaced by fast, disk-based storage that provides the performance needed to run your mission-critical applications (a minimal snapshot-based sketch follows this list).

2. AWS provides an elastic solution that can scale with data growth or application requirements without the costly capital expenses of traditional technology vendors.  Companies can also expire and delete archived data according to organizational policy, so they pay only for what they use.

3. AWS provides a secure, durable platform that helps companies meet compliance requirements by keeping data easily accessible when reporting deadlines demand it.
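As referenced in the first item above, here is a minimal sketch of one disk-based backup step: snapshot an EBS volume, then copy the snapshot to a second region so a regional outage does not take the backups with it. The volume ID and regions are placeholders.

```python
# Sketch only: snapshot an EBS volume and copy it to another region for DR.
import boto3

ec2_east = boto3.client("ec2", region_name="us-east-1")
ec2_west = boto3.client("ec2", region_name="us-west-2")

# Take the backup as a point-in-time snapshot.
snapshot = ec2_east.create_snapshot(
    VolumeId="vol-0123456789abcdef0",          # hypothetical volume
    Description="Nightly backup of app data",
)

# Wait for it to finish, then keep a DR copy in a second region.
ec2_east.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
ec2_west.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId=snapshot["SnapshotId"],
    Description="DR copy of nightly backup",
)
```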

Every day we help new customers leverage the cloud to support their business.  Our team of Solutions Architects and Cloud Engineers can assist you in creating a plan to reduce risk in your current backup/DR solution.  Let us know how we can help you get started in your journey to the cloud.

-Matt Whitney, Cloud Sales Executive


The Hidden Savings of Cloud

There is an endless supply of articles warning about “the dangers of the hidden costs of cloud computing.”  Week after week a new article from a new source highlights (in the same way) how the move to cloud won’t help the bottom line of a business because all of the “true costs” are not fully considered by most until it’s “too late.”  Too late for what?  These articles are an empty defensive move against the inevitable shift our industry is making toward cloud.  Now, to be fair: are some things overlooked by folks?  Yes.  Do some people jump in too quickly and start deploying before they plan properly?  Yes.  Is cloud still emerging and evolving, with architecture, deployment, and cost models shifting on a quarterly (if not monthly) basis?  Yes.  But this is what makes cloud so exciting. It’s a chance for us to rethink how we leverage technology, and isn’t that what we’ve done for years in IT?  Nobody talks about the hidden savings of cloud, nor do they talk about the unspoken costs of status quo infrastructure.

Before joining an organization that is cloud-first, I worked for 13 years, in many roles, at an infrastructure/data-center-first organization, and we did very well and helped many people.  However, as the years progressed and cloud went from a gimmick to a fad to a buzzword to a completely mainstream enterprise computing platform, I saw a pattern developing: traditional IT data center projects were costing more and more, whereas cloud was looking like it cost less.  I’ll give you an unnamed customer example.

Four years ago, a customer of mine who was growing their virtual infrastructure (VMware) and their storage infrastructure (EMC) deployed a full data center solution of compute, storage, and virtualization that cost in the $4 million range.  From then until now they added some additional capacity for about another $500K.  They also went through a virtual infrastructure platform (code) upgrade as well as software upgrades to the storage and compute platforms.  So this is the usual story: they made a large purchase (actually an operational lease, ironically much like cloud could be), then added to it, and spent a ton of time and man-hours on engineering around the infrastructure just to maintain the status quo.  I can quantify the infrastructure but not the man-hours, but I’m sure you know what I’m talking about.

Four years later, guess what’s happening: they have to go through it all over again. They need to refresh their SAN and basically redeploy everything: migrate all the data off, validate, etc.  And how much is all of this?  $6 to $7 million, plus a massive amount of services and about four months of project execution.  To be fair, they grew over 100%, made some acquisitions, and some of their workloads have to stay within their own data center.  However, there are hidden costs here, in my opinion.  Technology manufacturers have locked customers into this cycle of refreshing every three years. How?  They bake the support (three years’ worth) into the initial purchase so there is no operational expense. Then, after three years, maintenance kicks in and becomes very expensive, and they run a spreadsheet showing how a refresh avoids “x” dollars in maintenance and is therefore worth the cost of new technology.  Somehow that approach still works.  Add massive amounts of professional services to execute the migration, a multi-month disruption to the business, and no innovation from the IT department; it’s maintaining the status quo.  The only reduction that can be realized in this regard is the decline in hardware and software prices over time, which historically tracks Moore’s law. Do you want your IT budget and staff at the mercy of Moore’s law and technology manufacturers that use funky accounting to show you “savings”?

Now let’s look at the other side, and let’s be fair.  In cloud there can be hidden costs, but in my mind they exist only if you do one thing: forget about planning.  Even with cloud, you need to apply the same plan, design, build, migrate, and manage methodology to your IT infrastructure.  Just because cloud is easy to deploy doesn’t mean you should skip the steps you normally take. But that isn’t a problem with cloud. It’s a problem with how people deploy into the cloud, and that’s an easy fix.  If you control your methodology, there should be no hidden costs, because you properly planned, architected, and built your cloud infrastructure.  In theory this is true, but let’s look at the other side people fail to highlight…the hidden SAVINGS!!

With Amazon Web Services, there have been 37 price reductions in the 7 years they have been selling their cloud platform.  That’s a little more than five per year.  Do you get that on an ongoing basis after you spend $5 million on traditional infrastructure?  With this approach, once you sign up you are almost guaranteed to see prices drop at some point in the lifecycle of your cloud infrastructure, and those price reductions are not based on Moore’s law. They are based on AWS taking very much the same approach to its cloud as to its retail business.  Amazon wants to extend the value of its size and scale to customers, and it sets margin limits on its services. Once a service or product is “making too much,” they cut the cost. So as AWS grows, becomes more efficient, and gains more market share with its cloud business, you save more!

Another bonus is that there are no refresh cycles or migration efforts every three years.  Once you migrate to the cloud, AWS manages all the infrastructure refresh efforts.  You don’t have to worry about your storage platform or your virtual infrastructure.  Everything from the hypervisor down is AWS’s responsibility, and you manage your operating system and application.  What does that mean?  You are not incurring massive professional services costs every 3-4 years for a third party to help you design, build, and migrate your stuff, and you aren’t spending 3-4 months every few years on a disruption to your business while your staff isn’t innovating.

-David Stewart, Solutions Architect

 


AWS S3-Glacier Lifecycle Management

Not long ago, 2nd Watch published an article on Amazon Glacier in which Caleb provides a great primer on Glacier’s capabilities and cost benefits.  Now that he’s explained what it is, let’s talk about possible use cases for Glacier and how to avoid some of the pitfalls.  As Amazon says, “Amazon Glacier is optimized for data that is infrequently accessed and for which retrieval times of several hours are suitable.”  Backups immediately come to mind, but most AWS customers handle those through EBS snapshots, which can restore in minutes, while a Glacier recall can take hours.  Rather than the obvious, consider these use cases for Glacier archival storage: compliance (regulatory or internal process), conversion of paper archives, and application retirement.

Compliance often forces organizations to retain records and backups for years; customers often mention a seven-year retention policy based on regulatory requirements.  In seven years, a traditional (on-premises) server can be replaced at least once, operating systems are upgraded several times, applications are upgraded or modified, and backup hardware and software change.  Add in all the media that would need to be replaced or upgraded and you have every IT department’s nightmare: either maintain old tape hardware or convert all the old backup tapes to the new hardware format (and hope too many haven’t degraded over the years).  Glacier removes the need to worry about the hardware and the media, and the storage fees (currently 1¢ per GB/month in US-East) are tiny compared to the cost of media and storage on premises.  Upload your backup file(s) to S3, set up a lifecycle policy, and you have greatly simplified your archival process while maintaining regulatory compliance.

So how do customers create these lifecycle policies so their data automatically moves to Glacier?  In the AWS Management Console, every S3 bucket has a property called ‘Lifecycle’ that can manage the migration to Glacier (and optional deletion as well).  Add a rule (or rules) to the S3 bucket that migrates files based on a filename prefix, time since their creation date, or time from an effective date (perhaps one day from the current date for things you want to move directly to Glacier).  For the example above, customers might move backup files to S3, have them transition to Glacier after 30 days, and delete them after seven years.

Lifecycle Rule
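For those who prefer automation over the console, the same rule can be expressed in a few lines of boto3. The bucket name and prefix below are illustrative, and the 30-day/seven-year values simply mirror the example above.

```python
# Sketch only: transition objects to Glacier after 30 days, expire after ~7 years.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",            # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "GLACIER"}
                ],
                "Expiration": {"Days": 2555},  # roughly seven years
            }
        ]
    },
)
```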

Before we go too far and set up lifecycles, however, one major point should be highlighted: Amazon charges customers based on GB/month stored in Glacier plus a one-time fee for each file moved from S3 to Glacier.  Moving a terabyte of data from S3 to Glacier could cost little more than $10/month in storage fees; however, if that data is made up of 1 KB log files, the one-time fee for that migration can be more than $50,000!  While this is an extreme example, consider data management before archiving.  If at all possible, compress the files into a single archive (zip/tar/rar), upload the compressed files to S3, and then archive to Glacier.
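A short sketch of that advice: bundle the small files locally, then upload a single object so only one transition fee applies. File paths and the bucket name are placeholders.

```python
# Sketch only: pack many small log files into one archive, then upload one object.
import tarfile

import boto3

# Pack the whole log directory into a single compressed archive.
with tarfile.open("logs-2013-11.tar.gz", "w:gz") as archive:
    archive.add("/var/log/app", arcname="app-logs")

# One upload, one S3 object, one eventual Glacier transition fee.
s3 = boto3.client("s3")
s3.upload_file("logs-2013-11.tar.gz", "example-backup-bucket", "backups/logs-2013-11.tar.gz")
```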

-Keith Homewood, Cloud Architect


Highly Available (HA) vs. Highly Reliable (HR)

The other day I was working with my neighbor’s kid on his baseball fundamentals and I found myself repeating the phrase “Remember the Basics.”  Over the course of the afternoon we played catch, worked on swinging the bat, and fielded grounders until our hands hurt. As the afternoon slipped into the evening hours, I started to see that baseball and my business have several similarities.

My business is Cloud Computing, and my company, 2nd Watch, is working to pioneer Cloud Adoption with Enterprise businesses.  As we discover new ways of integrating systems, performing workloads, and running applications, it is important for us not to forget the fundamentals. One of the most basic elements of this is using the proper terminology. I’ve found that in many cases my customers, partners, and even my colleagues can have different meanings for many of the same terms. One example that comes up frequently is the difference between having a Highly Available infrastructure vs. Highly Reliable infrastructure. I want to bring special attention to these two terms and help to clearly define their meaning.

High Availability (HA) is about designing and implementing systems that are proactively built with the operational capacity to meet their required performance. For example, within cloud computing we leverage tools like Elastic Load Balancing and Auto Scaling to automate scaling the infrastructure to handle variable demand for websites. As traffic increases, servers are spun up to handle the load, and vice versa as it decreases.  If users cannot access a website, or it is deemed “unavailable,” you risk the loss of readership, sales, or the attrition of customers.
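As a concrete (and hypothetical) illustration of that HA pattern, a target-tracking scaling policy keeps a fleet sized to demand automatically; the group name and target value below are assumptions, not details from the post.

```python
# Sketch only: keep average CPU near 50% by scaling the group in and out.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",            # hypothetical group
    PolicyName="keep-cpu-near-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```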

On the other hand, Highly Reliable (HR) systems have to do with your Disaster Recovery (DR) model and how well you prepare for catastrophic events. In Cloud Computing, we design for failure because anything can happen at any time. Having a proper Disaster Recovery plan in place will enable your business to keep running if problems arise. Any company with sensitive IT operations should look into a proper DR strategy, which will support their company’s ability to be resilient in the event of failure. While a well-planned DR schema may cost you more money upfront, being able to support both your internal and external customers will pay off in spades if an event takes place that requires you to fail over.

In today’s business market it is important to take the assumptions out of our day-to-day conversations and make sure that we’re all on the same page. The difference between Highly Available and Highly Reliable systems is a great example of this. By simply going back to the fundamentals, we can easily make sure that our expectations are met and that our colleagues, partners, and clients understand both our spoken and written words.

-Blake Diers, Cloud Sales Executive
