
The Hidden Savings of Cloud

There is an endless supply of articles about “the dangers of the hidden costs of cloud computing.”  Week after week a new article from a new source highlights (in the same way) how the move to cloud won’t help the bottom line of a business because all of the “true costs” are not fully considered by most until it’s “too late.”  Too late for what?  These articles are an empty defensive move against the inevitable shift our industry is making toward cloud.  Now, to be fair…are some things overlooked by folks?  Yes.  Do some people jump in too quickly and start deploying before they plan properly?  Yes.  Is cloud still emerging/evolving, with architecture, deployment and cost models shifting on a quarterly (if not monthly) basis?  Yes.  But this is what makes cloud so exciting. It’s a chance for us to rethink how we leverage technology, and isn’t that what we’ve done for years in IT?  Nobody talks about the hidden savings of cloud, nor do they talk about the unspoken costs of status quo infrastructure.

Before joining a cloud-first organization, I worked for 13 years, in many roles, at an infrastructure/data center-first organization, and we did very well and helped many people.  However, as the years progressed and cloud went from a gimmick to a fad to a buzzword to a completely mainstream, enterprise IT computing platform, I saw a pattern developing: traditional IT data center projects were costing more and more, whereas cloud looked like it cost less.  I’ll give you an unnamed customer example.

Four years ago a customer of mine who was growing their virtual infrastructure (VMware) and their storage infrastructure (EMC) deployed a full data center solution of compute, storage and virtualization that cost in the $4 million range.  From then until now they added some additional capacity for about another $500K.  They also went through a virtual infrastructure platform (code) upgrade as well as software upgrades to the storage and compute platforms.  So this is the usual story…they made a large purchase (actually it was an operational lease, ironically much like cloud could be), then added to it, and spent a ton of time and man hours on engineering around the infrastructure just to maintain the status quo.  I can quantify the infrastructure but not the man hours, but I’m sure you know what I’m talking about.

Four years later, guess what’s happening – they have to go through it all over again! They need to refresh their SAN and basically redeploy everything – migrate all the data off, validate, etc.  And how much is all of this?  $6 to $7 million, plus a massive amount of services and about 4 months of project execution.  To be fair, they grew over 100%, made some acquisitions, and some of their workloads have to stay within their own data center.  However, there are hidden costs here in my opinion.  Technology manufacturers have gotten customers into this cycle of doing a refresh every 3 years. How?  They bake the support (3 years’ worth) into the initial purchase so there is no operational expense. Then after 3 years, maintenance kicks in and becomes very expensive, and they just run a spreadsheet showing how, if they refresh, they avoid “x” dollars in maintenance and how it’s worth it to just get new technology.  Somehow that approach still works.  There are massive amounts of professional services to execute the migration, a multi-month disruption to the business, and no innovation from the IT department – just maintaining the status quo.  The only reduction that can be realized in this regard is hardware and software price decreases over time, which are historically based on Moore’s law. Do you want your IT budget and staff at the mercy of Moore’s law and technology manufacturers that use funky accounting to show you “savings”?

Now let’s look at the other side, and let’s be fair.  In cloud there can be hidden costs, but in my mind they exist only if you do one thing: forget about planning.  Even with cloud you need to apply the same plan, design, build, migrate, and manage methodology to your IT infrastructure.  Just because cloud is easy to deploy doesn’t mean you should skip the steps you normally take. But that isn’t a problem with cloud; it’s a problem with how people deploy into the cloud, and that’s an easy fix.  If you control your methodology there should be no hidden costs, because you properly planned, architected and built your cloud infrastructure.  In theory this is true, but let’s look at the other side people fail to highlight…the hidden SAVINGS!!

With Amazon Web Services there have been 37 price reductions in the 7 years they have been selling their cloud platform.  That’s a little more than 5 per year.  Do you get that on an ongoing basis after you spend $5 million on traditional infrastructure?  With this approach, once you sign up you are almost guaranteed to get a price reduction at some point in the lifecycle of your cloud infrastructure, and those price reductions are not based on Moore’s law. They are based on AWS taking very much the same approach to their cloud as they do to their retail business.  Amazon wants to extend the value of their size and scale to customers, and they set margin limits on their services. Once they are “making too much” on a service or product, they cut the price. So as they grow, become more efficient and gain more market share with their cloud business, you save more!

Another bonus is that there are no refresh cycles or migration efforts every 3 years.  Once you migrate to the cloud, AWS manages all of the underlying infrastructure migration efforts.  You don’t have to worry about your storage platform or your virtual infrastructure.  Everything from the hypervisor down is on AWS, and you manage your operating system and application.  What does that mean?  You are not incurring massive services costs every 3-4 years for a 3rd party to help you design/build/migrate your stuff, and you aren’t spending 3-4 months every few years on a disruption to your business while your staff isn’t innovating.

-David Stewart, Solutions Architect

 


AWS S3-Glacier Lifecycle Management

Not long ago, 2nd Watch published an article on Amazon Glacier. In it Caleb provides a great primer on the capabilities of Glacier and the cost benefits.  Now that he’s taken the time to explain what it is, let’s talk about possible use cases for Glacier and how to avoid some of the pitfalls.  As Amazon says, “Amazon Glacier is optimized for data that is infrequently accessed and for which retrieval times of several hours are suitable.”  What immediately comes to mind are backups, but most AWS customers do this through EBS snapshots, which can restore in minutes, while a Glacier recall can take hours.  Rather than looking at the obvious, consider these use cases for Glacier Archival storage: compliance (regulatory or internal process), conversion of paper archives, and application retirement.

Compliance often forces organizations to retain records and backups for years – customers often mention a seven-year retention policy based on regulatory compliance.  In seven years, a traditional (on premise) server can be replaced at least once, operating systems are upgraded several times, applications have been upgraded or modified, and backup hardware/software has been changed.  Add to that all the media that would need to be replaced/upgraded and you have every IT department’s nightmare – needing to either maintain old tape hardware or convert all the old backup tapes to the new hardware format (and hope too many haven’t degraded over the years).  Glacier removes the need to worry about the hardware and the media, and the storage fees (currently 1¢ per GB/month in US-East) are tiny compared to the cost of media and storage on premise.  Upload your backup file(s) to S3, set up a lifecycle policy, and you have greatly simplified your archival process while keeping regulatory compliance.

So how do customers create these lifecycle policies so their data automatically moves to Glacier?  From the AWS Management Console, once you have an S3 bucket there is a Property called ‘Lifecycle’ that can manage the migration to Glacier (and possible deletion as well).  Add a rule (or rules) to the S3 bucket that can migrate files based on a filename prefix, how long since their creation date, or how long from an effective date (perhaps 1 day from the current date for things you want to move directly to Glacier).  For the example above, perhaps customers take backup files, move them to S3, then have them move to Glacier after 30 days and delete after 7 years.

Lifecycle Rule
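If you’d rather script the rule than click through the console, a minimal sketch using boto3 (the AWS SDK for Python) might look like the following. The bucket name and the backups/ prefix are placeholders for illustration, not values from a real environment.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and prefix - substitute your own.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-backups",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                # Move objects to Glacier 30 days after creation...
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                # ...and expire (delete) them after roughly 7 years.
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```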

Before we go too far and set up lifecycles, however, one major point should be highlighted: Amazon charges customers based on GB/month stored in Glacier and a one-time fee for each file moved from S3 to Glacier.  Moving a terabyte of data from S3 to Glacier could cost little more than $10/month in storage fees.  However, if that data is made up of 1KB log files, the one-time fee for that migration can be more than $50,000!  While this is an extreme example, consider data management before archiving.  If at all possible, compress the files into a single file (zip/tar/rar), upload those compressed files to S3, and then archive to Glacier.
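To make that math concrete, here is a back-of-the-envelope sketch in Python. The 1¢ per GB/month storage price comes from the article above; the per-1,000-request transition fee is an assumption used purely for illustration, so check the current AWS pricing page before relying on it.

```python
# Back-of-the-envelope comparison: archiving 1 TB as a billion tiny objects vs. one archive.
GB = 1024 ** 3

storage_price_per_gb_month = 0.01   # Glacier storage, as quoted above
transition_price_per_1000 = 0.05    # ASSUMPTION: illustrative per-1,000 transition requests

total_bytes = 1024 * GB             # 1 TB of data
object_size = 1024                  # 1KB log files

num_objects = total_bytes // object_size
storage_per_month = (total_bytes / GB) * storage_price_per_gb_month
one_time_transition_fee = (num_objects / 1000) * transition_price_per_1000

print(f"objects:             {num_objects:,}")                  # ~1.07 billion
print(f"storage per month:   ${storage_per_month:,.2f}")        # ~$10.24
print(f"one-time transition: ${one_time_transition_fee:,.2f}")  # ~$53,687 - vs. pennies for one tarball
```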

-Keith Homewood, Cloud Architect


Highly Available (HA) vs. Highly Reliable (HR)

The other day I was working with my neighbor’s kid on his baseball fundamentals and I found myself repeating the phrase “Remember the Basics.”  Over the course of the afternoon we played catch, worked on swinging the bat, and fielded grounders until our hands hurt. As the afternoon slipped into the evening hours, I started to see that baseball and my business have several similarities.

My business is Cloud Computing, and my company, 2nd Watch, is working to pioneer Cloud Adoption with Enterprise businesses.  As we discover new ways of integrating systems, performing workloads, and running applications, it is important for us not to forget the fundamentals. One of the most basic elements of this is using the proper terminology. I’ve found that in many cases my customers, partners, and even my colleagues can have different meanings for many of the same terms. One example that comes up frequently is the difference between having a Highly Available infrastructure vs. Highly Reliable infrastructure. I want to bring special attention to these two terms and help to clearly define their meaning.

High Availability (HA) is about designing and implementing systems that can proactively scale their operational capacity to meet the required performance.  For example, within Cloud Computing we leverage tools like Elastic Load Balancing and Auto Scaling to automate the scaling of infrastructure to handle the variable demand for websites. As traffic increases, servers are spun up to handle the load, and vice versa as it decreases.  If a user cannot access a website, or it is deemed “unavailable,” then you risk the potential loss of readership, sales, or the attrition of customers.
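As a concrete illustration of that kind of proactive scaling, here is a minimal boto3 sketch that attaches a target tracking policy to an existing Auto Scaling group so instances are added or removed to keep average CPU near a target. The group name and the 50% target are hypothetical placeholders, not values from this article.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical Auto Scaling group - substitute the name of an existing group.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",
    PolicyName="keep-cpu-near-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        # Scale out when average CPU rises above the target, scale in when it falls below.
        "TargetValue": 50.0,
    },
)
```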

On the other hand, Highly Reliable (HR) systems have to do with your Disaster Recovery (DR) model and how well you prepare for catastrophic events. In Cloud Computing, we design for failure because anything can happen at any time. Having a proper Disaster Recovery plan in place will enable your business to keep running if problems arise. Any company with sensitive IT operations should look into a proper DR strategy, which will support their company’s ability to be resilient in the event of failure. While a well-planned DR schema may cost you more money upfront, being able to support both your internal and external customers will pay off in spades if an event takes place that requires you to fail over.

In today’s business market it is important to take the assumptions out of our day-to-day conversations and make sure that we’re all on the same page. The difference between Highly Available and Highly Reliable systems is a great example of this. By simply going back to the fundamentals, we can easily make sure that our expectations are met and our colleagues, partners, and clients understand both our spoken & written words.

-Blake Diers, Cloud Sales Executive


Elasticity: Matching Capacity to Demand

According to IDC, a typical server utilizes an average of 15% of its capacity. That means 85% of a company’s capital investment can be categorized as waste. While virtualization can increase server utilization to as high as 80%, the company is still faced with 20% waste under the best case scenario. The situation gets worse when companies have to forecast demand for specific periods, e.g., the holiday season in December. If they buy too much capacity, they overspend and create waste. If they buy too little, they create customer experience and satisfaction issues.

The elasticity of Amazon Web Services (AWS) removes the need to forecast demand and buy capacity up-front. Companies can scale their infrastructure up and down as needed to match capacity to demand. Common use cases include: a) fast growth (new projects, startups); b) on and off (occasionally need capacity); c) predictable peaks (specific demand at specific times); and d) unpredictable peaks (demand exceeds capacity). Use the elasticity of AWS to eliminate waste and reduce costs over traditional IT, while providing better experiences and performance to users.
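For the “predictable peaks” case in particular, one common pattern is a scheduled scaling action. The sketch below (boto3, with a hypothetical group name, sizes, and schedule) grows an Auto Scaling group ahead of a weekday peak and shrinks it again afterward.

```python
import boto3

autoscaling = boto3.client("autoscaling")

group = "storefront-asg"  # hypothetical Auto Scaling group name

# Scale up ahead of the weekday traffic peak (Recurrence is a UTC cron expression).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=group,
    ScheduledActionName="weekday-morning-scale-up",
    Recurrence="0 13 * * 1-5",
    MinSize=4, MaxSize=20, DesiredCapacity=8,
)

# Scale back down in the evening when demand drops off.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=group,
    ScheduledActionName="weekday-evening-scale-down",
    Recurrence="0 1 * * 2-6",
    MinSize=2, MaxSize=20, DesiredCapacity=2,
)
```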

-Josh Lowry, General Manager Western U.S.

 


Believe the Hype

A lot of companies have been dipping their toes in the proverbial Cloud waters for some time, looking at ways to help their businesses be more efficient, agile and innovative. There have been a lot of articles published recently about cloud being overhyped, cloud being the new buzzword for everything IT, or cloud being just a fad. The bottom line is that cloud is enabling a completely new way to conduct business – one that is not constrained but driven by a new business paradigm – and it should not be overlooked but leveraged, and leveraged immediately.

    • Cyclical Business Demand – We’ve been helping customers architect, build, deploy and manage environments for unpredictable or spikey demand. This has become more prevalent with the proliferation of mobile devices and social media outlets where you never know when the next surge in traffic will come.
    • Datacenter Transformation – Helping customers figure out what can move to the public cloud and what should stay on premise is a typical engagement for us. As the continued migration from on premise technology to cloud computing accelerates, these approaches and best practices are helping customers not just optimize what they have today but also ease the burden of trying to make an all or nothing decision.
    • Financial Optimization – Designing a way to help customers understand their cloud finances and then giving them the ability to create financial models for internal chargebacks and billing can sometimes be overlooked upfront. We’ve developed solutions to help customers do both, and those customers are seeing significant cost savings.



How to Spin Up an AWS Virtual Machine

If you are a user like me, it may actually take you longer to make a cup of coffee than to create an AWS account. Go to the AWS EC2 login site to create an account. You’ve got 12 months of free tier service with your new account, and you can cancel if you decide it’s not for you. Careful, however: if you ignore your VM for more than three months, Amazon may decide the issue for you, though they’ll send you an email warning 30 days prior. Account created? Let’s build a server!

SpinUpServer 1

Hit the Quick Launch Wizard (middle left) and you’ve got a few things to fill out before your VM comes to life. First, name your server (top), then name your secure key (middle) and finally choose the operating system you’d like to use. For now, just choose the Amazon Linux AMI since some of the other choices, notably Windows Server, have an added cost even during your free tier. Make sure to download your security key and save it where you can find it again.

SpinUpServer 2

Once you’ve selected your basic options, you’ll get your first glimpse of the EC2 dashboard (above). See the big blue “Launch Instance” button? Hit that. You’ll see a screen informing you that your server is being created. It’s time to get that cup of coffee. When you come back, you can click back to your EC2 dashboard view and see:

EC2 Dashboard3
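If you’d rather script this step than click through the console, the same launch can be done with boto3. The AMI ID below is a placeholder (look up the current Amazon Linux AMI for your region), and the key name should match the key pair you created in the wizard.

```python
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder - use the current Amazon Linux AMI for your region
    InstanceType="t2.micro",     # a free-tier-eligible instance size
    KeyName="my-first-key",      # the key pair you created and downloaded earlier
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Launched", instance_id)
```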

That’s it, you’re all set! You can start using your new server immediately. There are a couple different SSH options available, depending on whether you’re using Linux or Windows. If you’re connecting from Linux, all you need is the OpenSSH client, which is usually preinstalled; if it isn’t, you can add it from your terminal with sudo apt-get install openssh-client. But if you’re a Windows fan, there are plenty of free add-on SSH packages, such as Cygwin or PuTTY. To get access to your server via SSH, there are some pieces of information you’ll need, but Amazon has made them easy to find. Just check the box next to your server in the screen above, and a scroll window pops up underneath with all the information you’ll need, including your VM’s name, its public DNS name, its private IP address, and more:

SpinUpServer 4
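If you want to pull those connection details programmatically instead of from the console, a small boto3 sketch like this one (the instance ID and key file name are placeholders) prints the ssh command to run:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder instance ID - use the one shown in your EC2 dashboard.
result = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
instance = result["Reservations"][0]["Instances"][0]

public_dns = instance["PublicDnsName"]
# Amazon Linux AMIs use the "ec2-user" login; the .pem file is the key you downloaded.
print(f"ssh -i my-first-key.pem ec2-user@{public_dns}")
```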

If you’re unsure what you’d like to do with your new VM, AWS has excellent and easy-to-digest Getting Started guides that cover all the bases:

AWS Getting Started5

The above example walks you through one of the first steps you should take, which is deciding who has access to your VM. There are also guides on locating and using your IP address, configuring your firewall (don’t worry, there’s a default setting), setting up roles, user groups, and management certificates, as well as implementing any of the many services that come with your Free Tier package:

SpinUpServer 6

To remove your new VM, just return to that EC2 management screen and click the Instance Action link. From there hit Terminate and say yes to the “Are you sure?” question. That’s all there is to it.
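The same cleanup can be scripted as well; here is a minimal sketch, again with a placeholder instance ID:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder instance ID - termination is permanent, so double-check it first.
ec2.terminate_instances(InstanceIds=["i-0123456789abcdef0"])
```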

Try out AWS on some of your in-house application workloads. Check out the default monitoring and alert services. Move data back and forth and record response times. Create a few more VMs and use them to build a basic virtual server farm. Try some of the more popular services shown above, like S3, CloudWatch, or Amazon RDS. In other words, take the time to dig into AWS and really see what it can do for you and your organization. As always, feel free to ask us at 2nd Watch any questions as you go along. We are happy to help!
