
Today is the Day – Cloud Cost

Cloud has been positioned as the panacea for all the ills of the world.  The hype often leaves experienced IT professionals skeptical of the benefits of leveraging public clouds.  The most common objection to public cloud I hear is, “Wait until you get your first bill from (public cloud).”  I can understand the fear and consternation of suddenly facing a huge monthly bill that no one budgeted for. However, there are great 3rd party tools available that allow you to forecast and track your budget and usage, like 2W Insight, which make that uncomfortable situation avoidable.

With the recent price cuts by Amazon Web Services, Microsoft and Google, the TCO argument from the traditional hardware manufacturers is losing steam.  When is the last time any major hardware vendor cut its prices to its customers by 40-60% overnight? There is no better time to begin evaluating and leveraging the cloud, and a great place to start is with a TCO analysis.

2nd Watch has a services practice that can assist your organization in building an accurate monthly forecast of your estimated cloud costs while also providing a full TCO analysis.

-Matt Whitney, Cloud Sales Executive


A Price War Erupts in Cloud Services

Read The Wall Street Journal’s “A Price War Erupts in Cloud Services” to see how Amazon, Microsoft, and Google users reap benefits of price cuts while the companies battle over costs. 2nd Watch EVP of Sales & Marketing, Matt Gerber, also weighs in on the recent price reductions and what it means for customers.


Common Cloud Misperceptions

Hello Cloud fans,

Today I’d like to talk to you about some commonly perceived cloud myths that I’ve encountered in my travels, helping to solve the challenges of getting to the cloud.

It’s simple to use the cloud to SaaS-i-fy my software platform.

A common misconception is that it’s immediately cheaper and easier to provide a software product or service as a subscribed cloud-based service over the Internet.  However, what teams often underestimate are the operational dependencies involved in supporting an application that was built on the assumption that it would be installed into an environment with all its security, authentication, storage, backup, and recovery already taken care of.

There is no magic support staff in the cloud for all the operational components your software still requires.  While you may gain efficiency in dynamically scaling your environment and better streamlining individual client support costs, the real cloud savings come in better understanding all the options available to you in the AWS services catalog and how best to leverage them to reduce costs as you refactor your software platform. All coding is not evil!

Deploying enterprise apps in the cloud is just about pressing some buttons.

When most customers consider deploying their first application into the cloud, they tend to underestimate the effort involved in actually building, testing, and deploying that solution.  Typically, in a co-lo type deployment a great deal of time is taken up in the procurement and logistics of getting environments set up, which can cause support challenges and lead to shortcuts being taken.

Because cloud environments are more dynamic, resources can be created in a much shorter timeframe, but this requires synchronized activity over a more compressed timeframe from a number of different parties, including your IaaS provider, your internal application team, and your 3rd party software ISV.  Depending on your 3rd party’s relative experience with the AWS cloud, this can be a rewarding or sometimes frustrating and drawn-out process.  The good news is that once you figure out how to deploy that app, you can easily repeat the process with automated CloudFormation and Auto Scaling benefits.  Fortunately here at 2nd Watch, we’ve done quite a few application deployments in our cloud life!

Becoming cloud savvy is as easy as opening an Amazon account and getting to work spinning up instances.

One of the most expensive mistakes I see with companies approaching the cloud for the first time is having no plan on how to design and create their first environments on the AWS cloud.  I mean, how hard could it be? This is Amazon after all.  Well, the truth is Amazon Web Services has continued to innovate over the 10 or so years they’ve been doing this cloud thing, and as a result they now have hundreds of services catering to a vast universe of data-center needs.

Properly recognizing both the good design elements of your current IT environment and how best to take advantage of these AWS services is critical to achieving the phenomenal benefits you can realize in operational efficiency, high-availability, and designing best practice environments.  Like many other significant choices in life, you can save yourself a lot of pain by getting the advice of a pro you trust before plunging into an unknown subject.  Those that just jump in and start turning (and leaving!) stuff on 24×7 without a game plan can wake up with a nasty surprise when they see their first bill.
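To put a rough number on that nasty surprise, here is a minimal sketch, using entirely hypothetical hourly rates, of what leaving a single dev instance running 24×7 costs compared to stopping it outside working hours:

```python
# Rough illustration with an assumed (hypothetical) on-demand rate;
# real instance pricing varies by type, region, and OS.

HOURLY_RATE = 0.10  # assumed $/hour for a mid-size instance


def monthly_cost(hours_per_day, days_per_month=30, rate=HOURLY_RATE):
    """Approximate monthly cost for one instance."""
    return hours_per_day * days_per_month * rate


always_on = monthly_cost(24)        # left running 24x7
scheduled = monthly_cost(10, 22)    # 10 h/day, ~22 business days

print(f"24x7:      ${always_on:,.2f}/month")
print(f"Scheduled: ${scheduled:,.2f}/month")
print(f"Wasted:    ${always_on - scheduled:,.2f}/month per instance")
```

At these assumed rates a single forgotten instance wastes around $50 a month; multiply that by a fleet of experiments left running and the first bill starts to make sense.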

Let me know what you think in the comments!

-C. Caleb Carter, Solutions Architect


Getting the Most Bang for your AWS Buck

Cloud computing continues to redefine itself as more customers begin their “Journey to the cloud.” There are many value propositions among different cloud providers, including, but not limited to: agility, cost savings, time to market, increased security, flexibility, elasticity, economies of scale, and a more effective support model, and the list continues.

As we start to understand the cloud provider landscape and take a snapshot of who the market leaders of the future will be, it is easy to see that Amazon Web Services (AWS) will be among them. AWS is not only defining the Infrastructure as a Service market, but it is also changing the consumption model for IT.

Due to the drastic changes in the procurement process of infrastructure, more business level executives (CFO, COO, CEO) are being pulled into the strategic decision making process of acquiring infrastructure as a service. For those executives that are not familiar with cloud services, I would like to offer a few tips that will not only support your overall corporate goals, but will allow you to make informed decisions in this highly evolving landscape of cloud computing.

  • Train yourself on the new model – Changing your company’s IT spend from a capital expenditure model to an operational model can be challenging. Consult with your AWS account manager on best practices. Discuss your goals with a Premier Partner within AWS’ ecosystem. Converse freely within your company to understand how you can address potential roadblocks before they happen. The procurement department and the finance teams within your organization will have great insight on how to help with this process.
  • Be cautious of jumping in with two feet – Many organizations are starting their journey with a hybrid model (on-premises + cloud). Test/DEV, disaster recovery, or non-production applications are all great candidates for moving to the cloud. Your ERP system and your mission critical applications may still have some lifecycle on their existing infrastructure and should remain on-premises until an evolutionary plan can be developed. Start small, think big and enable your staff to learn the technology, while still supporting your organization’s goals and business objectives.
  • Leverage the free tier – AWS offers a free usage tier with no restrictions on your particular use case. Spin up a new application, train your engineers on the new platform, or simply test an existing application. The choice is yours to make, and it will not affect your bottom line.
  • Reserved Instances – For applications that run in a steady state or have a consistent operating floor, purchase reserved capacity. This will immediately give you savings of 40%-70% off your on-demand billing.
  • Get rid of excess capacity – Many organizations are accustomed to procuring IT infrastructure that has been over provisioned to meet the demand of peaks and spikes in their business. With cloud computing, you do not need to allocate excess capacity. With the proper architecture, it will be waiting for you when you need it. Optimize your environment(s) and leverage one of the great advantages of cloud computing.
  • Tiered Pricing – For heavy users of cloud services, AWS offers tiered pricing to customers that consume web services up to certain thresholds. Review these tiers and forecast your roadmap to meet these levels before reporting deadlines (fiscal year ends).
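To make the Reserved Instance point above concrete, here is a small sketch with hypothetical numbers; actual AWS discounts vary by instance type, term length, and payment option:

```python
# Hypothetical example: annual cost of one steady-state instance
# on-demand vs. with reserved capacity at an assumed 60% discount.

ON_DEMAND_HOURLY = 0.20   # assumed $/hour (not a real AWS price)
RESERVED_DISCOUNT = 0.60  # assumed effective discount vs. on-demand
HOURS_PER_YEAR = 24 * 365

on_demand = ON_DEMAND_HOURLY * HOURS_PER_YEAR
reserved = on_demand * (1 - RESERVED_DISCOUNT)

print(f"On-demand: ${on_demand:,.2f}/year")
print(f"Reserved:  ${reserved:,.2f}/year")
print(f"Savings:   ${on_demand - reserved:,.2f} ({RESERVED_DISCOUNT:.0%})")
```

The larger and more stable your baseline footprint, the more of it can be covered by reserved capacity, and the closer your effective bill moves toward the discounted rate.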

If you follow these guidelines, your business will be sure to reach cost efficiency in the cloud.

-Blake Diers, Alliance Manager


AWS Lowers Cloud Computing Costs AGAIN

Last week, AWS announced their 42nd price reduction since 2008. This significant cut impacts many of their most popular services, including EC2, S3, EMR, RDS and ElastiCache. These savings range from 10% to 65%, depending on the service you use.  As you can see from the example below, this customer scenario results in savings of almost $150,000 annually, which represents a 36% savings on these services!

This major move not only helps existing AWS users but makes the value proposition of shifting from on-premise to the AWS cloud even greater.  If you are not on AWS now, contact us to learn how we can help you take advantage of this new pricing and everything AWS has to offer.

As an AWS Premier Consulting Partner, our mission is to get you migrated to and running efficiently in the Amazon Web Services (AWS) cloud. The journey to get into the AWS cloud can be complicated, but we’ll guide you along the way and take it from there, so you can concentrate on running your business rather than your IT infrastructure.

2nd Watch provides:

  • Fast and flawless enterprise-grade cloud migration
  • Cloud IT Operations Management that goes beyond basic infrastructure management
  • Cloud cost/usage tracking and analytics that help you control and allocate costs across your enterprise

[Image: AWS price reduction – sample customer savings]


Increasing Your Cloud Footprint

The jump to the cloud can be a scary proposition.  For an enterprise with systems deeply embedded in traditional infrastructure like back office computer rooms and datacenters, the move to the cloud can be daunting. The thought of having all of your data in someone else’s hands can make some IT admins cringe.  However, once you start looking into cloud technologies you start seeing some of the great benefits, especially with providers like Amazon Web Services (AWS).  The cloud can be cost-effective, elastic and scalable, flexible, and secure.  That same IT admin cringing at the thought of their data in someone else’s hands may finally realize that AWS is a bit more secure than a computer rack sitting under an employee’s desk in a remote office.  Once the decision is finally made to “try out” the cloud, the planning phase can begin.

Most of the time the biggest question is, “How do we start with the cloud?”  The answer is to use a phased approach.  By picking applications and workloads that are less mission critical, you can try the newest cloud technologies with less risk.  When deciding which workloads to move, you should ask yourself the following questions: Is there a business need for moving this workload to the cloud?  Is the technology a natural fit for the cloud?  What impact will this have on the business? If all those questions are suitably answered, your workloads will be successful in the cloud.

One great place to start is with archiving and backups.  These types of workloads are important, but the data you’re dealing with is likely just a copy of data you already have, so it is considerably less risky.  The easiest way to start with archives and backups is to try out S3 and Glacier.  Many of today’s backup utilities you may already be using, like Symantec NetBackup and Veeam Backup & Replication, have cloud versions that can back up directly to AWS. This allows you to start using the cloud without changing much of your embedded backup processes.  By moving less critical workloads you are taking the first steps in increasing your cloud footprint.
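Beyond backup utilities, S3 itself can age objects into Glacier automatically via a lifecycle rule. Here is a minimal sketch of such a rule; the bucket name, prefix, and day counts are hypothetical, and the boto3 call that would apply it is shown but not executed:

```python
# Sketch of an S3 lifecycle rule that transitions backup objects to
# Glacier after 30 days and expires them after a year. The values are
# illustrative assumptions, not recommendations.

def backup_lifecycle_rule(prefix="backups/", glacier_after_days=30,
                          expire_after_days=365):
    """Build an S3 lifecycle configuration dict for backup archiving."""
    return {
        "Rules": [{
            "ID": "archive-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": prefix},
            "Transitions": [{"Days": glacier_after_days,
                             "StorageClass": "GLACIER"}],
            "Expiration": {"Days": expire_after_days},
        }]
    }


config = backup_lifecycle_rule()

# With AWS credentials configured, this would apply the rule (not run here):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-backup-bucket", LifecycleConfiguration=config)
```

Once the rule is in place, anything your backup tooling drops under the prefix migrates to cheaper Glacier storage on its own, with no change to the backup process itself.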

Now that you have moved your backups to AWS using S3 and Glacier, what’s next?  The next logical step would be to try some of the other services AWS offers.  Another workload that can often be moved to the cloud is Disaster Recovery.   DR is an area that will allow you to use more AWS services, like VPC, EC2, EBS, RDS, Route53 and ELBs.  DR is a perfect way to increase your cloud footprint because it will allow you to construct your current environment, which you should already be very familiar with, in the cloud.  A Pilot Light DR solution is one type of DR solution commonly seen in AWS.  In the Pilot Light scenario the DR site has minimal systems and resources with the core elements already configured to enable rapid recovery once a disaster happens.  To build a Pilot Light DR solution you would create the AWS network infrastructure (VPC), deploy the core AWS building blocks needed for the minimal Pilot Light configuration (EC2, EBS, RDS, and ELBs), and determine the process for recovery (Route53).  When it is time for recovery all the other components can be quickly provisioned to give you a fully working environment. By moving DR to the cloud you’ve increased your cloud footprint even more and are on your way to cloud domination!
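The Route53 part of that recovery process can be sketched in code. Below, a helper builds the DNS change that would repoint a record at the DR site once the Pilot Light is scaled up; the record name, IP, TTL, and zone ID are all hypothetical, and the boto3 call is shown but not executed:

```python
# Pilot Light recovery sketch: build the Route53 change batch that
# flips an A record over to the DR endpoint. Values are placeholders.

def failover_change_batch(record_name, dr_ip, ttl=60):
    """Build a Route53 UPSERT pointing a record at the DR site."""
    return {
        "Comment": "Fail over to DR environment",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",
                "TTL": ttl,
                "ResourceRecords": [{"Value": dr_ip}],
            },
        }],
    }


batch = failover_change_batch("app.example.com.", "203.0.113.10")

# With credentials configured, this would apply the change (not run here):
# import boto3
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z-HYPOTHETICAL", ChangeBatch=batch)
```

A short TTL on the production record is what makes this kind of DNS-based failover fast, which is why it is set low here.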

The next logical step is to move Test and Dev environments into the cloud. Here you can get creative with the way you use the AWS technologies.  When building systems on AWS make sure to follow the Architecting Best Practices: Designing for failure means nothing will fail, decouple your components, take advantage of elasticity, build security into every layer, think parallel, and don’t fear constraints! Start with a proof-of-concept (POC) in the development environment, and use AWS reference architectures to aid in the learning and planning process.  Next, deploy your legacy application in the new environment and migrate data.  The POC is not complete until you validate that it works and performance meets your expectations.  Once you get to this point, you can reevaluate the build and optimize it to the exact specifications needed. Finally, you’re one step closer to deploying actual production workloads to the cloud!

Production workloads are obviously the most important, but with the phased approach you’ve taken to increase your cloud footprint, it’s not that far of a jump from the other workloads you now have running in AWS.   Some of the important things to remember to be successful with AWS include being aware of the rapid pace of the technology (this includes improved services and price drops), that security is your responsibility as well as Amazon’s, and that there isn’t a one-size-fits-all solution.  Lastly, all workloads you implement in the cloud should still have stringent security and comprehensive monitoring as you would on any of your on-premises systems.

Overall, a phased approach is a great way to start using AWS.  Start with simple services and traditional workloads that have a natural fit for AWS (e.g. backups and archiving).  Next, start to explore other AWS services by building out environments that are familiar to you (e.g. DR). Finally, experiment with POCs and the entire gamut of AWS to benefit from more efficient production operations.  Like many new technologies it takes time for adoption. By increasing your cloud footprint over time you can set expectations for cloud technologies in your enterprise and make it a more comfortable proposition for all.

-Derek Baltazar, Senior Cloud Engineer
