
VPC Peering via CloudFormation

Here is the use case:   Imagine you have several VPCs (target VPCs) that you want to attach to a services VPC.  In the services VPC you have an application license server, or a Domain Controller, or any other workload that instances in each of the other target VPCs need access to.  Every time a new target VPC is created you want to have an easy way to connect that VPC to the services VPC.  You also want to use CloudFormation for creating the peer between the target VPCs and the services VPC because it’s easier for you to manage and keep track of the peering connections if they are tied to a CloudFormation stack.

In the picture below, the red arrows represent the target VPCs and the green arrow represents the services VPC. Here we’ll discuss a template that will connect VPC A (the services VPC) to VPC G (a target VPC). This template, with slight modifications, could be used to attach any target VPC (B – G) to the services VPC.

[Image: diagram of the target VPCs (red arrows) and the services VPC (green arrow)]

A template that does this needs to create the following resources:

  • A VPC Peering Connection
  • A VPC route from the services VPC to the target VPC
  • A VPC route from the target VPC to the services VPC.

The two VPC routes listed above could become several routes, depending on your routing requirements. If you had several route tables (say, one for public subnets and one for private subnets), you would want to account for all of them in the template so that every subnet has a proper route to the services VPC.

In this example, since both VPCs (the target VPC and the services VPC) have already been created, the parameters in your CloudFormation template should accept these existing values as inputs. Here we have parameters for the services VPC as well as the target VPC.

[Image: the Parameters section of the CloudFormation template]

Expanding the parameters section, we see that the parameters we specified as “ServicesVPC” and “targetVPCtoPeerto” are actually VPC IDs that will be used in the resources section of our template.

[Image: the expanded Parameters section, showing the ServicesVPC and targetVPCtoPeerto VPC ID parameters]

Setting these two values as input parameters lets you use this template to connect, in theory, any two VPCs as long as you can provide the VPC IDs. If we were to expand “RouteTableForTargetVPCYouArePeeringto,” we would see that this parameter is actually a route table ID.

[Image: the expanded RouteTableForTargetVPCYouArePeeringto parameter, showing a route table ID]

Since this is a simplified example, we are only peering two VPCs that each have a single route table. If we wanted to peer VPCs with additional route tables (say, one for private traffic and one for public traffic), we would just add another parameter for each additional route table and tie it to a corresponding resource in the resources section of the template. It’s also worth remembering that parameters can be created and declared without necessarily being used or referenced in the template. Additionally, you can see that “RouteTableForServicesVPC” is pre-filled because we are already using it with existing peers.
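To give a sense of what this looks like in template form, here is a rough YAML sketch of the parameters section. The parameter names match the ones discussed above, while the descriptions and the placeholder default value are illustrative assumptions rather than the exact contents of the template in the screenshots.

```yaml
Parameters:
  ServicesVPC:
    Type: AWS::EC2::VPC::Id
    Description: VPC ID of the services VPC (VPC A)
  targetVPCtoPeerto:
    Type: AWS::EC2::VPC::Id
    Description: VPC ID of the target VPC you are peering to
  RouteTableForServicesVPC:
    Type: String
    Description: Route table ID in the services VPC
    Default: rtb-xxxxxxxx  # pre-filled because it is reused for existing peers (placeholder value)
  RouteTableForTargetVPCYouArePeeringto:
    Type: String
    Description: Route table ID in the target VPC you are peering to
```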

Moving down to the resources section of the template, we see that the only resources that need to be created are the VPC peering connection, a VPC route from the services VPC to the target VPC, and a VPC route from the target VPC to the services VPC.

[Image: the Resources section of the CloudFormation template]

Expanding the resources, we can see that the VPC peering connection will get created and will reference our two VPC ID parameters. Additionally, routes will be created and assigned to each route table for each VPC. Each route has a hard-coded “Destination CIDR Block” and references the “VPC Peering Connection ID” created above it. You could very easily strip out the hard-coded “Destination CIDR Block” and turn it into an input parameter as well, giving you even more flexibility.
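Here is a rough YAML sketch of what those three resources might look like. The logical names and CIDR blocks below are placeholders, not the exact values from the screenshots, and you would swap in your own ranges (or parameters, as noted above).

```yaml
Resources:
  VPCPeeringConnection:
    Type: AWS::EC2::VPCPeeringConnection
    Properties:
      VpcId: !Ref ServicesVPC
      PeerVpcId: !Ref targetVPCtoPeerto

  # Route from the services VPC to the target VPC's CIDR block
  RouteFromServicesToTarget:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref RouteTableForServicesVPC
      DestinationCidrBlock: 10.20.0.0/16   # hard-coded target VPC CIDR (placeholder)
      VpcPeeringConnectionId: !Ref VPCPeeringConnection

  # Route from the target VPC back to the services VPC's CIDR block
  RouteFromTargetToServices:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref RouteTableForTargetVPCYouArePeeringto
      DestinationCidrBlock: 10.10.0.0/16   # hard-coded services VPC CIDR (placeholder)
      VpcPeeringConnectionId: !Ref VPCPeeringConnection
```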

Finally, once this template is run, as long as you put in the correct parameters, you will have two VPCs connected via VPC peering.

[Image: the resulting VPC peering connection]

Remember to adjust your security groups to allow traffic to pass between the VPCs. Also remember the rules of VPC peering, which you can review at http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/vpc-peering-overview.html.
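For example, an ingress rule like the one sketched below, added to the template’s Resources section, would let traffic from the target VPC reach a service in the services VPC. The security group ID, port and CIDR are placeholder assumptions, not values from the original template.

```yaml
AllowFromPeeredVPC:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: sg-xxxxxxxx     # security group on the services VPC instances (placeholder)
    IpProtocol: tcp
    FromPort: 27000          # example license-server port (assumption)
    ToPort: 27000
    CidrIp: 10.20.0.0/16     # target VPC CIDR (placeholder)
```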

Please leave a note if you have any comments/questions, or contact 2nd Watch for all your CloudFormation needs!

-Derek Baltazar, Cloud Architect


High-performance Teams in DevOps

The secret to DevOps maturity isn’t technology or process, but people. It takes engaged leadership and all-for-one cooperation to achieve the kind of results that lead companies to superior IT performance.

High-performing DevOps teams can recover 168 times faster from failures and have 60 times fewer failures due to changes, according to the 2015 State of DevOps report by Puppet Labs. High-performing teams also release code at significantly increasing velocity as their teams grow in size, approaching three deploys per day per developer for teams of around 1,000 developers. The report concluded that high-performance outcomes depend upon a goal-oriented culture, a modular architecture, continuous delivery and effective leadership.

First, let’s define the key traits of a high-performing DevOps team:

  • Business Responsiveness: taking a business idea and moving it to implementation within weeks or less.
  • Reliability: speed means nothing if it results in a broken or underperforming product. DevOps pros are always seeking out the latest tools and processes to integrate quality at every step of development and production.
  • Cross-functional prowess: developers, testers and operations managers should be able to perform some of the tasks of their counterparts and understand how all the roles and processes overlap. Developers should consider security when they write code, while operations people should take into account user needs when deploying the code.

How to grow high-performance capabilities
Engaging leadership on the technology and business side is becoming table stakes for DevOps. Without senior-level buy-in and support, which may include creating new incentives and investing in training and tools, DevOps is merely an experiment, not a wholesale transformation. Leaders can help by developing a culture that allows for experimentation, autonomy, self-organization and the adoption of Agile methods such as Scrum and Kanban. Hiring a DevOps specialist in either a consulting or full-time role can be invaluable to jumpstart the process and provide an objective lens on change. Consider these additional steps as you move along the DevOps maturity curve:

Find the champions: Conduct an assessment of people within the IT or product engineering organization to identify team members who are curious, open to change, adaptable and excited about helping lead others in best practices. Some companies use personality assessments to assign roles and pinpoint developing leaders.

Create a collaboration strategy: High-performance teams collaborate frequently and effectively. But adopting collaborative workflows takes time, especially in more traditional IT shops. Begin by introducing change that is productive while breaking down silos. Examples include paired programming and paired developer-tester teams, where employees work together or in a leader/observer model. This can enable higher levels of productivity and learning, and also model the interactive processes required for continuous delivery.

Allow for self-organizing teams: There are many opinions on how to self-organize, but generally, this entails organizing work and selecting strategies with some level of freedom. Teams can manage day-to-day decisions and workflow as a group, but must communicate frequently and take ownership of and commit to goals and outcomes. Take note, this doesn’t mean teams have no managers or guidance, but simply that they can be self-directed in the pursuit of objectives. Such freedom has been deemed valuable for innovation and performance by DevOps experts.

Invest in training: It’s fine to have a DIY approach when you’re first getting started with DevOps, but once your company has committed to the philosophy, you’ll need to invest in education. That encompasses tactical skills for continuous delivery, continuous integration, Agile development and testing, collaborative workflows, and cloud orchestration and security, to start. Soft skills training around entrepreneurialism, teamwork and organizational development may also be helpful. Gather input as to where individuals would like to grow, instead of having a top-down approach to training.

Create the technology infrastructure for speed and flexibility: Open the door to new automation, change management and monitoring tools, yet provide frameworks to avoid tool sprawl and silos of data. Wherever and whenever possible, move workloads to the public cloud, where the on-demand, dynamic infrastructure is a perfect fit for enabling DevOps processes and goals.

Give more focus to QA: Given the faster pace of DevOps, testing in software development is more important than ever. Adopt test-driven development practices to accelerate quality and speed with a “test as you go” mentality. Automate your test harnesses and test plans, and build as much automated test coverage as you can into the process to ensure fast iterations.

Finally, keep it simple. Pilot the creation of a minimum viable product where team members can safely experiment with continuous development and other concepts, beginning a process of continual improvement. Measuring outcomes requires looking beyond standard measures such as critical defects and cycle times to incorporate business-friendly KPIs, including customer satisfaction and usability.

There is no single path toward developing high-performance teams. Just remember, the heart of DevOps success is people. Learning how to best attract, retain, and motivate top performers is more important than achieving any specific DevOps metric and will serve your company for the long term.

-Kris Bliesner, CTO

This article was originally published on Computer Technology Review on 12/17/15.


How to Achieve Redundancy for High Availability in the Cloud

In the last of our four-part blog series with our strategic partner, Alert Logic, we explore business resumption for cloud environments. Check out last week’s article on Free Tools and Tips for Testing the Security of Your Environment Against Attacks first.

Business resumption, also known as disaster recovery, has always been a challenge for organizations. Aside from those in the banking and investment industry, many businesses don’t take business resumption as seriously as they should.

I formerly worked at a financial institution that would send their teams to another city in another state where production data was backed up and could be restored in the event of a disaster. Employees would go to this location and use the systems in production to complete their daily workloads. This would test the redundancy of a single site, but what if you could have many redundant sites? What if you could have a global backup option and have redundancy not only when you need it, but as a daily part of your business strategy?

To achieve true redundancy, I recommend understanding your service provider’s offerings. Each service provider has different facilities located in different regions that are spread between different telecom service providers.

From a customer’s perspective, this creates a good opportunity to build out an infrastructure that has fully redundant load balancers, giving your business a regional presence in almost every part of the world. In addition, you are able to deliver application speed and efficiency to your regional consumers.

Look closely at your provider’s services like hardware health monitoring, log management, security monitoring and all the management services that accompany those solutions.  If you need to conform to certain compliance regulations, you also need to make sure the services and technologies meet each regulation.

Organize your vendors and managed service providers so that you can get your data centralized based on service across all providers and all layers of the stack. This is when you need to make sure that your partners share data, have the ability to ingest logs, and exchange APIs with each other to effectively secure your environment.

Additionally, centralize the notification process so you are getting one call per incident versus multiple calls across providers. This means that API connectivity or log collection needs to happen between technologies that correlate triggered events across multiple platforms. This will centralize your notifications, increase efficiency, and decrease detection time, helping mitigate risks introduced into your environment by outside and inside influences.

Lastly, to find incidents as quickly as possible, you need to find a managed services provider that will be able to ingest and correlate all events and logs across all infrastructures. There are also cloud migration services that will help you with all these decisions as they help move you to the cloud.

Learn more about 2W Managed Cloud Security and how our partnership with Alert Logic can ensure your environment’s security.

Article contributed by Alert Logic



Top Business Issues When Moving to the Cloud

Jeff Aden, EVP of Business Development and Marketing at 2nd Watch, recently contributed this article on top business issues when moving to the cloud to Data Center Knowledge. Enjoy!

When planning a move to the cloud, CIOs often worry about if and how legacy applications will migrate successfully, what level of security and archiving they need, whether they have the right internal skills and which cloud providers and toolsets are the best match for business goals.

There are also a number of business issues to consider, such as the changing nature of contracts and new considerations for budgeting and financial planning. For instance, transitioning from large upfront capital purchases, such as data center investments and software agreements, to monthly service fees can help lower costs related to the management and maintenance of technology, much of which is underutilized. There’s also no need to deploy capital on unutilized resources. These are all positive changes. Yet pay-as-you-go pricing also brings a change in how IT is purchased: the CFO will need processes for monitoring usage and spending to prevent so-called cloud sprawl. Here’s our take on the top considerations beyond technology planning for making a smooth move to the cloud.

  1. Working with existing IT outsourcers. A recent survey by Gartner noted that approximately 70% of CIOs surveyed will be changing IT vendor and sourcing relationships over the next two years. The primary reason for this shift is that most traditional IT service providers aren’t delivering public cloud-related services or products that are suited for the transition to a digital business. Dissolving or changing relationships with longtime IT partners will take some thought around how to structure the right business terms. For instance, when renewing contracts with current vendors, companies may seek to add a clause allowing them to bifurcate between current services (hardware/colocation) and emerging services such as cloud. This will allow the right provider with the right skill sets to manage the cloud workloads. If your company is within an existing contract that’s not up for renewal soon, look for another legal out, such as “default” or “negligent” clauses in the contract, which would also allow you to hire a reputable cloud IaaS firm if your current provider does not have the skill set, processes or expertise in a new technology. Reputable vendors shouldn’t lock their customers into purchasing services which aren’t competitive in the marketplace. Yet the contract rules everything, so do whatever you can to ensure flexibility to work with cloud vendors when renewing IT contracts.
  2. Limits of liability. This contractual requirement gives assurances to the customer that the vendor will protect your company if something goes wrong. Limits of liability are typically calculated on the number of staff people assigned to an account and/or technology investments. For instance, when a company would purchase a data center or enter into a colocation agreement, it required a large CAPEX investment and a large ongoing OPEX cost. For these reasons, the limits of liability would be a factor above this investment and the ongoing maintenance costs. With the cloud, you only pay for what you use, which is significantly less but grows over time. Companies can manage this risk by ensuring escalating limits of liability that are pegged to the level of usage. As your cloud usage grows, so does your protection.
  3. Financial oversight. As mentioned earlier, one advantage of on-premise infrastructure is that the costs are largely stable and predictable. The cloud, which gives companies far more agility to provision IT resources in minutes with a credit card, can run up the bill quickly without somebody keeping a close watch on all the self-service users in your organization. It’s more difficult to predict costs and usage in the cloud, given frequent changes in pricing along with shifts in business strategy that depend upon easy access to cloud infrastructure. Monitoring systems that track activity and usage in real time, across cloud and internal or hosted environments, are critical in this regard. As well, finance tools that allow IT and finance to map cloud spending to business units and projects will help analyze spend, measure business return and assist with budget planning for the next quarter or year. Cloud expense management tools should integrate with other IT cost management and asset management tools to deliver a quick view of IT investments at any moment. Another way to control spend is to work with a reseller. An authorized reseller will be able to eliminate credit card usage, providing your company with invoicing and billing services, the ability to track spend and flexible payment terms. This approach can save companies time and money when moving to the cloud.
  4. Service catalogue. A way to maintain control while remaining agile is to implement a service catalogue, allowing a company’s security and network teams to sign off on a design that can be leveraged across the organization multiple times with the same consistency every time. Service catalogues control which users have access to which applications or services to enable compliance with business policies while giving employees an easy way to browse available resources. For instance, IT can create an SAP Reference Implementation for a test environment. Once this is created, signed off on by all groups, and stored in your service catalogue, it can be leveraged the same way, every time and by all approved users. This provides financial control and governance over how the cloud is being deployed in your organization. It can also move your timelines up considerably, saving you weeks or months from ideation to deployment.
  5. Staffing/organizational changes. With any shift in technology, there is a required shift in staffing and organizational structure. With the cloud, this involves both skills and perspective. Current technologists who are subject matter experts, such as SAN engineers, will need to understand business drivers, adopt strategic thinking and focus on business-centered innovation. The cloud brings tools and services that change the paradigm on where and how time is spent. Instead of spending 40% of your time planning the next rack of hardware to install, IT professionals should focus their energies on responding to business needs and providing valuable solutions that were previously cost prohibitive, such as spinning up a data warehouse for less than $1,000 per year.

To set up a private appointment with a 2nd Watch migration expert, Contact Us.


High Performance Computing in the Public Cloud

The exponential growth of big data is pushing companies to process massive amounts of information as quickly as possible, which is often not realistic, practical or even achievable on standard CPUs. In a nutshell, High Performance Computing (HPC) allows you to scale performance to process and report on the data more quickly and can be the solution to many of your big data problems.

However, this still relies on your cluster capabilities. By using AWS for your HPC needs, you no longer have to worry about designing and adjusting your job to meet the capabilities of your cluster. Instead, you can quickly design and change your cluster to meet the needs of your jobs. There are several tools and services available to help you do this, like the AWS Marketplace, AWS APIs, and AWS CloudFormation templates.

Today, I’d like to focus on one aspect of running an HPC cluster in AWS that people tend to forget about – placement groups.

Placement groups are a logical grouping of instances within a single availability zone. This allows you to take full advantage of a low-latency 10 Gbps network, which in turn allows you to transfer up to 4TB of data per hour between nodes. However, because of that low-latency 10 Gbps network, placement groups cannot span multiple availability zones. This may scare some people away from using them, but it shouldn’t. You can create multiple placement groups in different availability zones as a work-around, and with enhanced networking you can still connect between the different clusters.
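Staying with CloudFormation, here is a minimal sketch of a cluster placement group and an instance launched into it. The AMI ID is a placeholder, and the instance type is just one example of a 10 Gigabit, enhanced-networking-capable type; your own job requirements would dictate the real values.

```yaml
Resources:
  HPCPlacementGroup:
    Type: AWS::EC2::PlacementGroup
    Properties:
      Strategy: cluster            # pack instances close together for low-latency networking

  HPCNode:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-xxxxxxxx        # placeholder HVM AMI that supports enhanced networking
      InstanceType: c4.8xlarge     # example instance type with 10 Gigabit networking
      PlacementGroupName: !Ref HPCPlacementGroup
```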

One of the greatest benefits of AWS HPC is that you can run your High Performance Computing clusters with no up-front costs and scale out to hundreds of thousands of cores within minutes to meet your computing needs. Learn more about Big Data and HPC solutions on AWS or Contact Us to get started with a workload workshop.

-Shawn Bliesner, Cloud Architect


Business Intelligence and Analytics in the Public Cloud

Business intelligence (BI) is an umbrella term that refers to a variety of software applications used to analyze an organization’s raw data. BI as a discipline is made up of several related activities, including data mining, online analytical processing, querying and reporting. Analytics is the discovery and communication of meaningful patterns in data. This blog will look at a few areas of BI, including data mining and reporting, and talk about using analytics to find the answers you need to make better business decisions.

Data Mining

Data Mining is an analytic process designed to explore data. Companies of all sizes continuously collect data, often in very large amounts, in order to solve complex business problems. Data collection can range in purpose from finding out the types of soda your customers like to drink to tracking genome patterns. Processing these large amounts of data quickly takes a lot of processing power, so a system such as Amazon Elastic MapReduce (EMR) is often needed to accomplish this. AWS EMR can handle most use cases, from log analysis to bioinformatics, which are key when collecting data; but EMR can only report on data that is collected, so make sure the collected data is accurate and complete.
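For reference, here is a rough sketch of how an EMR cluster could be declared in CloudFormation, in the same spirit as the templates discussed earlier. The cluster name, release label and instance types are placeholder assumptions; the roles shown are the EMR defaults.

```yaml
Resources:
  DataMiningCluster:
    Type: AWS::EMR::Cluster
    Properties:
      Name: log-analysis-emr            # placeholder cluster name
      ReleaseLabel: emr-5.36.0          # placeholder EMR release
      Applications:
        - Name: Hive                    # Hive is discussed in the Reporting section below
      JobFlowRole: EMR_EC2_DefaultRole  # default EMR EC2 instance profile
      ServiceRole: EMR_DefaultRole      # default EMR service role
      Instances:
        MasterInstanceGroup:
          InstanceCount: 1
          InstanceType: m4.xlarge       # example instance type
          Market: ON_DEMAND
        CoreInstanceGroup:
          InstanceCount: 2
          InstanceType: m4.xlarge
          Market: ON_DEMAND
```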

Reporting

Reporting accurate and complete data is essential for good BI.  Tools like Splunk’s Hunk and Hive work very well with AWS EMR for modeling, reporting, and analyzing data.  Hive is business intelligence software used for reporting meaningful patterns in the data, while Hunk helps interactively review logs with real-time alerts. Using the correct tools is the difference between data no one can use and data that provides meaningful BI.

Why do we collect all this data? To find answers, of course! Finding answers in your data, from marketing data to application debugging, is why we collect it in the first place. AWS EMR is great for processing all that data, with the right tools reporting on it. But beyond knowing just what happened, we need to find out how it happened. Interactive queries on the data are required to drill down and find the root causes or customer trends. Tools like Impala and Tableau work great with AWS EMR for these needs.

Business Intelligence and Analytics boils down to collecting accurate and complete data. That includes having a system that can process that data, having the ability to report on that data in a meaningful way, and using that data to find answers. By provisioning the storage, computation and database services you need to collect big data in the cloud, we can help you manage big data, BI and analytics while reducing costs, increasing speed of innovation, and providing high availability and durability, so you can focus on making sense of your data and using it to make better business decisions. Learn more about our BI and Analytics Solutions here.

-Brent Anderson, Senior Cloud Engineer
