
VPC Peering via CloudFormation

Here is the use case: imagine you have several VPCs (target VPCs) that you want to attach to a services VPC. The services VPC hosts an application license server, a Domain Controller, or any other workload that instances in each of the target VPCs need access to. Every time a new target VPC is created, you want an easy way to connect it to the services VPC. You also want to use CloudFormation to create the peering between the target VPCs and the services VPC, because peering connections tied to a CloudFormation stack are easier to manage and keep track of.

In the picture below, the red arrows represent the target VPCs and the green arrow represents the services VPC.  Here we’ll discuss a template that will connect VPC A (services VPC) to VPC G (a target VPC).  This template, with slight modifications, could be used for attaching any target VPC (B – G) with the services VPC.

[Image: diagram of target VPCs (red arrows) and the services VPC (green arrow)]

A template that accomplishes this needs to create the following resources:

  • A VPC Peering Connection
  • A VPC route from the services VPC to the target VPC
  • A VPC route from the target VPC to the services VPC

The two VPC routes listed above could become several routes, depending on your routing requirements. If you had multiple route tables, say one for public subnets and one for private subnets, you would want to account for all of them in the template so that each one has the proper route to the services VPC.

In this example, since both VPCs (the target VPC and the services VPC) have already been created, the parameters in your CloudFormation template should accept these existing values as inputs. Here we have parameters for the services VPC as well as the target VPC.

[Image: CloudFormation template Parameters section for the services VPC and the target VPC]

Expanding the parameters section, we see that the parameters we specified as "ServicesVPC" and "targetVPCtoPeerto" are actually VPC IDs that will be used in the resources section of our template.

[Image: expanded Parameters section showing the ServicesVPC and targetVPCtoPeerto VPC IDs]

Setting these two values as input parameters lets you use this template to connect, in theory, any two VPCs as long as you can provide the VPC IDs. If we were to expand "RouteTableForTargetVPCYouArePeeringto", we would see that this parameter is actually a route table ID.

[Image: expanded RouteTableForTargetVPCYouArePeeringto parameter showing a route table ID]

Since this is a simplified example, we are only peering two VPCs that each have a single route table. If we wanted to peer VPCs with additional route tables, say one for private traffic and one for public traffic, we would just add a parameter for each additional route table and tie it to a corresponding resource in the resources section of the template. It's also worth remembering that parameters can be created and declared without necessarily being used or referenced in your template. Additionally, you can see that "RouteTableForServicesVPC" is pre-filled because we are already using it with existing peers.
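For reference, here is a minimal sketch of what a Parameters section along these lines might look like. It is written in YAML for readability (the original template shown in the screenshots may well have been JSON); the parameter names are taken from the article, while the types and descriptions are illustrative assumptions rather than the article's exact values:

    Parameters:
      ServicesVPC:
        Type: AWS::EC2::VPC::Id        # a plain String would also work
        Description: VPC ID of the services VPC
      targetVPCtoPeerto:
        Type: AWS::EC2::VPC::Id
        Description: VPC ID of the target VPC you are peering to the services VPC
      RouteTableForServicesVPC:
        Type: String
        Description: Route table ID for the services VPC (pre-filled in the original template)
      RouteTableForTargetVPCYouArePeeringto:
        Type: String
        Description: Route table ID for the target VPC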

Moving down to the resources section of the template, we see that the only resources that need to be created are the VPC peering connection, a VPC route from the services VPC to the target VPC, and a VPC route from the target VPC to the services VPC.

[Image: CloudFormation template Resources section with the peering connection and the two routes]

Expanding the resources, we can see that the VPC peering connection will be created referencing our two VPC ID parameters. Additionally, routes will be created and assigned to each VPC's route table. Each route has a hard-coded "Destination CIDR Block" and references the "VPC Peering Connection ID" created above it. You could very easily strip out the hard-coded destination CIDR block and make it an input parameter as well, for even more flexibility.
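Putting those pieces together, a hedged sketch of the Resources section could look like the following. The logical resource names and the destination CIDR blocks are placeholders for illustration, not the values from the original template:

    Resources:
      VPCPeeringConnection:
        Type: AWS::EC2::VPCPeeringConnection
        Properties:
          VpcId: !Ref ServicesVPC
          PeerVpcId: !Ref targetVPCtoPeerto

      RouteFromServicesVPCToTargetVPC:
        Type: AWS::EC2::Route
        Properties:
          RouteTableId: !Ref RouteTableForServicesVPC
          DestinationCidrBlock: 10.1.0.0/16    # placeholder CIDR of the target VPC (hard-coded here)
          VpcPeeringConnectionId: !Ref VPCPeeringConnection

      RouteFromTargetVPCToServicesVPC:
        Type: AWS::EC2::Route
        Properties:
          RouteTableId: !Ref RouteTableForTargetVPCYouArePeeringto
          DestinationCidrBlock: 10.0.0.0/16    # placeholder CIDR of the services VPC (hard-coded here)
          VpcPeeringConnectionId: !Ref VPCPeeringConnection

To support additional route tables, you would add more AWS::EC2::Route resources (and matching parameters) following the same pattern.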

Finally, once this template is run, as long as you put in the correct parameters, you will have two VPCs connected via VPC peering.

[Image: the completed stack showing the two VPCs connected via VPC peering]

Remember to adjust the security groups to allow traffic to pass between the VPCs. Also remember the rules of VPC peering, which you can review at http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/vpc-peering-overview.html

Please leave a note if you have any comments/questions, or contact 2nd Watch for all your CloudFormation needs!

-Derek Baltazar, Cloud Architect


Understanding the AWS Security Model and Services

Protecting and monitoring networks, applications and data is simple if you know and use the right tools

Security is a stifling fear for organizations considering public clouds, one frequently stoked by IT vendors with vested interests in selling enterprise hardware and software, who use security as a catalyst for overall FUD about cloud services. The fears and misconceptions about cloud security are rooted in unfamiliarity and conjecture. A survey of IT pros with actual cloud experience found the level of security incidents quite similar to on-premise results: when asked to compare public cloud versus on-premise security, the difference between those saying the risks are significantly lower and those saying they are significantly higher is a mere one percent. Cloud infrastructure is probably more secure than the typical enterprise data center, but cloud users can easily create application vulnerabilities if they don't understand the available security services and adapt existing processes to the cloud environment.

[Image: survey results comparing security incidents in public cloud versus on-premise environments]

Whatever the cause, the data shows that cloud security remains an issue with IT executives. For example, a survey of security professionals found that almost half are very concerned about public cloud security, while a 2014 KPMG survey of global business executives found that security and data privacy are the most important capabilities when evaluating a cloud service and that the most significant cloud implementation challenges center on the risks of data loss, privacy intrusions and intellectual property theft.

[Image: KPMG 2014 survey of the most important capabilities when evaluating a cloud service]

 

[Image: KPMG 2014 survey of the most significant cloud adoption challenges]

Unfortunately, such surveys are fraught with problems, since they ask for a subjective, comparative evaluation of two very different security models: one (on-premise) that IT pros have years of experience implementing, managing and refining, and another (public cloud) that is relatively new to enterprise IT, particularly as a production platform, and thus often not well implemented. The 'problem' with public cloud security isn't that it's worse; it's arguably better. Rather, the problem is that cloud security is different. Public cloud services necessarily use an unfamiliar and more granular security design that accommodates multi-tenant services with many users, from various organizations, mixing and matching services tailored to each one's specific needs.

AWS Security Model

AWS designs cloud security around a shared security model that divides security responsibilities, processes and technical implementation between the service provider, i.e. AWS, and the customer, namely enterprise IT. In the cloud, IT relinquishes control over low-level infrastructure, including data center networks, compute, storage and database implementation, and infrastructure management, to the cloud provider. The customer, i.e. enterprise IT, retains control over the abstracted services provided by AWS, along with the operating systems, virtual networks, storage containers (object buckets, block stores), applications, data and transactions built upon those services, and the user and administrator access to those services.

[Image: AWS shared security (shared responsibility) model]

The first step to cloud security is mentally relinquishing control: internalizing the fact that AWS (or your IaaS of choice) owns the low-level infrastructure and is responsible for securing it, and that, given its scale and resources, it is most likely doing a better job than most enterprise IT organizations. Next, AWS users must understand the various security control points they do have. AWS breaks these down into five categories:

  • Network security: virtual firewalls, network link encryption and VPNs used to build a virtual private cloud (VPC).
  • Inventory and configuration: comprehensive view of AWS resources under use, a catalog of standard configuration templates and machine images (AMIs) and tools for workload deployment and decommissioning.
  • Data encryption: security for stored objects and databases and associated encryption key management.
  • Access control: user identity management (IAM), groups and policies for service access, and authentication options including multifactor authentication using one-time passwords (see the sketch following this list).
  • Monitoring and logging: tools like CloudWatch and CloudTrail for tracking service access and use, with the ability to aggregate data from all available services into a single pool that feeds comprehensive usage reports, facilitates post-incident forensic analysis and provides real-time application performance alerts (SNS).
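To make the access control point concrete, here is a hedged CloudFormation sketch (YAML) of an IAM group with an inline policy that denies everything except IAM actions unless the caller authenticated with MFA. The group and policy names are hypothetical; aws:MultiFactorAuthPresent is the standard condition key:

    Resources:
      AdminsGroup:
        Type: AWS::IAM::Group

      RequireMFAPolicy:
        Type: AWS::IAM::Policy
        Properties:
          PolicyName: deny-without-mfa           # hypothetical name
          Groups:
            - !Ref AdminsGroup
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Deny
                NotAction: "iam:*"               # leave IAM actions alone so users can still set up their MFA device
                Resource: "*"
                Condition:
                  BoolIfExists:
                    aws:MultiFactorAuthPresent: "false"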

Using CloudTrail Activity Logs

Organizations should apply existing IT security policies in each area by focusing first on the objectives, namely the policy goals and requirements, and then mapping these to the available AWS services to create control points in the cloud. For example, comprehensive records of user access and service usage are critical to ensuring policy adherence, identifying security gaps and performing post hoc incident analysis. CloudTrail fills this need, acting as something of a stenographer that records all AWS API calls for every major service, whether made programmatically, via the CLI or through the management console. CloudTrail records are written in JSON format to facilitate extraction, filtering and post-processing, including by third-party log analysis tools like Alert Logic, Loggly and Splunk.

CloudTrail so thoroughly monitors AWS usage that it not only logs changes to other services but also to itself: it records access to the logs themselves and can trigger alerts when logs are created or don't follow established configuration guidelines. For security pros, CloudTrail data is invaluable for building reports about abnormal user or application behavior and for detailing activity around the time of a particular suspicious event.
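If you manage the environment as code, the trail itself can be declared alongside everything else. Below is a hedged CloudFormation sketch (YAML) that turns on CloudTrail and delivers logs to an illustrative S3 bucket, with the standard bucket policy CloudTrail needs in order to write:

    Resources:
      TrailBucket:
        Type: AWS::S3::Bucket

      TrailBucketPolicy:
        Type: AWS::S3::BucketPolicy
        Properties:
          Bucket: !Ref TrailBucket
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Sid: AWSCloudTrailAclCheck
                Effect: Allow
                Principal: { Service: cloudtrail.amazonaws.com }
                Action: s3:GetBucketAcl
                Resource: !Sub arn:aws:s3:::${TrailBucket}
              - Sid: AWSCloudTrailWrite
                Effect: Allow
                Principal: { Service: cloudtrail.amazonaws.com }
                Action: s3:PutObject
                Resource: !Sub arn:aws:s3:::${TrailBucket}/AWSLogs/${AWS::AccountId}/*
                Condition:
                  StringEquals:
                    s3:x-amz-acl: bucket-owner-full-control

      Trail:
        Type: AWS::CloudTrail::Trail
        DependsOn: TrailBucketPolicy
        Properties:
          S3BucketName: !Ref TrailBucket
          IsLogging: true
          IncludeGlobalServiceEvents: true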

The key to AWS security is understanding the division of responsibilities, the cloud control points and available tools. Mastering these can allow cloud-savvy organizations to build security processes that exceed those in many on-site data centers.

-2nd Watch Blog by Kurt Marko


Running your Business Applications on AWS

An oft-held misconception by many individuals and organizations is that AWS is great for Web services, big data processing, DR, and all of the other “Internet facing” applications but not for running your internal business applications.  While AWS is absolutely an excellent fit for the aforementioned purposes, it is also an excellent choice for running the vast majority of business applications.  Everything from email services, to BI applications, to ERP, and even your own internally built applications can be run in AWS with ease while virtually eliminating future IT capex spending.

Laying the foundation
One of the most foundational pieces of architecture for most businesses is the network that applications and services ride upon. In a traditional model this generally looks like a varying number of switches in the datacenter interconnected with a core switch (e.g. a pair of Cisco Nexus 7000s), plus a number of routers and VPN devices (e.g. Cisco ASA 55XX) that interconnect the core datacenter with secondary datacenters and office sites. This is a gross oversimplification of what really happens on a business's underlying network (and neglects technologies like Fibre Channel and InfiniBand), but it further drives the point that migrating to AWS can greatly reduce the complexity and cost of managing a traditional RYO (run your own) datacenter.

Anyone familiar with IT budgeting is more than aware of the massive capex costs of continually purchasing new hardware, as well as the operational costs of managing it: maintenance agreements, salaries of highly skilled engineers, power, leased datacenter and network space, and so forth. Some of these costs can be mitigated by moving to a "hosted" model, where you lease rack space in someone else's datacenter, but you will still be forking out a wad of cash on a regular basis to support that model.

The AWS VPC (Virtual Private Cloud) is a completely virtual network that lets businesses create private network spaces within AWS to run all of their applications on, including internal business applications. Through the VGW (Virtual Private Gateway), the VPC inherently provides a pathway for businesses to interconnect their off-cloud networks with AWS. This can be done through traditional VPNs or through AWS Direct Connect, which provides a dedicated private connection from AWS to your off-cloud locations (e.g. on-prem, remote offices, colocation). The VPC is also flexible enough to let you run your own VPN gateways on EC2 instances if that is the desired approach. In addition, interconnecting with most MPLS providers is supported, as long as the MPLS provider hands off VLAN IDs.
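As a rough illustration, a hedged CloudFormation sketch (YAML) of a VPC with a Virtual Private Gateway and a VPN connection back to an on-premise device might look like this. The CIDR block, BGP ASN, and customer gateway IP address are placeholders:

    Resources:
      BusinessVPC:
        Type: AWS::EC2::VPC
        Properties:
          CidrBlock: 10.50.0.0/16            # placeholder private address space

      VirtualPrivateGateway:
        Type: AWS::EC2::VPNGateway
        Properties:
          Type: ipsec.1

      VGWAttachment:
        Type: AWS::EC2::VPCGatewayAttachment
        Properties:
          VpcId: !Ref BusinessVPC
          VpnGatewayId: !Ref VirtualPrivateGateway

      OfficeCustomerGateway:
        Type: AWS::EC2::CustomerGateway
        Properties:
          Type: ipsec.1
          BgpAsn: 65000                      # placeholder ASN of the on-premise router
          IpAddress: 203.0.113.10            # placeholder public IP of the on-premise VPN device

      OfficeVPNConnection:
        Type: AWS::EC2::VPNConnection
        Properties:
          Type: ipsec.1
          VpnGatewayId: !Ref VirtualPrivateGateway
          CustomerGatewayId: !Ref OfficeCustomerGateway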

Moving up the stack
The prior section showed how the VPC is a low-cost and simplified approach to managing network infrastructure, so we can proceed up the stack to the server, storage, and application layers. Another piece of the network layer that is generally heavily intertwined with the application architecture and the servers hosting it is load balancing. At a minimum, load balancing enables the application to run in a highly available and scalable manner while providing a single namespace/endpoint for the application client to connect to. Amazon's ELB (Elastic Load Balancer) is a very cost-effective, powerful, and easy-to-use load balancing solution in AWS. A lot of businesses have existing load balancing appliances, like F5 BigIP, Citrix NetScaler, or A10, that they use to manage their applications, and many have written a plethora of custom rules and configs, like F5 iRules, to do layer 7 processing and logic on the application. All of the previously mentioned load balancing providers, and quite a few more, have AWS-hosted options available, so there is an easy migration path if the ELB turns out not to be a good fit. That said, I have personally written migration tools for our customers to convert well over a thousand F5 Virtual IPs and pools (dumped to a CSV) into ELBs, which allowed a quick, scripted migration of the entire infrastructure with enormous cost savings to the customer. In addition to off-the-shelf appliances, you can also roll your own load balancing with tools like HAProxy and Nginx, but we find that for most people the ELB is an excellent solution for their load balancing needs.
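For reference, here is a minimal, hedged CloudFormation sketch (YAML) of an ELB spanning two subnets. The subnet IDs, ports, and health check path are placeholders:

    Resources:
      WebLoadBalancer:
        Type: AWS::ElasticLoadBalancing::LoadBalancer
        Properties:
          Subnets:
            - subnet-aaaaaaaa                # placeholder subnet in one AZ
            - subnet-bbbbbbbb                # placeholder subnet in a second AZ
          Listeners:
            - LoadBalancerPort: "80"
              InstancePort: "80"
              Protocol: HTTP
          HealthCheck:
            Target: HTTP:80/healthcheck      # placeholder health check path
            Interval: "30"
            Timeout: "5"
            HealthyThreshold: "3"
            UnhealthyThreshold: "5"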

Now that the network foundation is laid, we can run our servers and applications on it, and AWS provides several services for this. If you need, or desire, to manage your own servers and underlying operating systems, EC2 (Elastic Compute Cloud) provides the foundational building blocks for spinning up virtual servers you can tailor to suit whatever need you have. A multitude of Linux and Windows-based operating systems are supported. If your application supports it, services like Elastic Beanstalk, OpsWorks, or Lambda, to name a few, will manage the underlying compute resources for you and simply allow you to "deploy code" on completely managed compute resources in the VPC.

What about my databases?
There are countless examples of people running internal business application databases in AWS. RDS (Relational Database Service) provides a comprehensive, robust, and HA-capable hosted solution for MySQL, PostgreSQL, Microsoft SQL Server, and Oracle. If your database platform isn't supported by RDS, you can always run your own DB servers on EC2 instances.
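A hedged CloudFormation sketch (YAML) of a Multi-AZ RDS instance follows. The engine, instance class, and credentials are placeholders; the password is supplied through a NoEcho parameter rather than typed into the template:

    Parameters:
      DBMasterPassword:
        Type: String
        NoEcho: true                         # keeps the value out of console output

    Resources:
      AppDatabase:
        Type: AWS::RDS::DBInstance
        Properties:
          Engine: postgres                   # MySQL, SQL Server, and Oracle are also supported
          DBInstanceClass: db.m3.large       # placeholder instance size
          AllocatedStorage: "100"
          MultiAZ: true                      # synchronous standby in a second AZ for HA
          MasterUsername: appadmin           # placeholder
          MasterUserPassword: !Ref DBMasterPassword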

NAS would be nice

AWS has always recommended an ephemeral approach to application architecture in which data is not stored directly on an instance. Sometimes, though, there is no getting away from needing shared storage across multiple instances. Amazon S3 is a potential solution, but it is not intended to be used as attached storage, so the application must be capable of addressing and utilizing S3's endpoints, and a great many applications aren't compatible with that model.

Until recently your options were pretty limited for providing NAS-style shared storage to EC2 instances. You could build a GlusterFS (a.k.a. Red Hat Storage Server) or Ceph cluster out of EC2 instances spanned across multiple Availability Zones, but that is fairly expensive and has several client mounting issues. The Gluster client, for example, is a FUSE (filesystem in userspace) client and has sub-optimal performance. Linus Torvalds has a famous and, depending on the audience, slightly amusing rant about userspace filesystems (see: https://lkml.org/lkml/2011/6/9/462). To get around the FUSE problem you could enable NFS server mode, but that breaks the client's ability to dynamically connect to another GlusterFS server node if one fails, introducing a single point of failure. You could conceivably set up an NFS server HA cluster using Linux Heartbeat, but that is tedious, error prone, and places the burden of supporting the storage ecosystem on the IT organization, which most IT organizations do not want. Not to mention that Heartbeat requires a shared static IP address, which could be jury-rigged in a VPC, but you absolutely cannot share the same IP address across multiple Availability Zones, so you would lose multi-AZ protection.

Yes, there were "solutions," but nothing as easy and slick as most everything else in AWS, and nothing ready for primetime. Then, on April 9th, 2015, Amazon introduced us to EFS (Elastic File System). The majority of corporate AWS users have been clamoring for a shared file system solution for quite some time, and EFS is set to fill that need. EFS is a low-latency shared storage solution available to multiple EC2 instances simultaneously via NFSv4. It is currently in preview but should be released to general availability in the near future. See more at https://aws.amazon.com/efs/.

Thinking outside the box
In addition to the AWS tools that are analogs of traditional IT infrastructure (e.g. VPC ≈ network layer, EC2 ≈ physical server or VM), there are a large number of tools and SaaS offerings that add value above and beyond. Tools like SQS, SWF, SES, RDS (for hosted/managed RDBMS platforms), CloudTrail, CloudWatch, DynamoDB, Directory Service, WorkDocs, WorkSpaces, and many more make transitioning traditional business applications to the cloud easy, all while eliminating capex costs, reducing operating costs, and increasing stability and reliability.

A word on architectural best practices
Wherever possible, there are some guiding principles and best practices that should be followed when designing and implementing solutions in AWS. First and foremost, design for failure. The new paradigm in virtualized and cloud computing is that no individual system is sacred and nothing is impervious to potential failure. Having worked in a wide variety of high-tech and IT organizations over the past 20 years, I can say this should come as no surprise: even when everything runs on highly redundant hardware and networks, equipment and software failures have ALWAYS been prevalent. IT and software design as a culture would have been much better off adopting this mantra years ago; however, overcoming the hurdles that designing for failure creates wasn't fully practical until virtualization and the Cloud became available.

AWS is by far the forerunner in providing services and technologies that allow organizations to decouple the application architecture from the underlying infrastructure. Tools like Route53, Auto Scaling, CloudWatch, SNS, EC2, and configuration management allow you to design a high level of redundancy and automatic recovery into your infrastructure and application architecture. In addition to designing for failure, you should strive to decouple the application state from the architecture as a whole. The application state should not be stored on any individual component in the stack, nor should it be passed around between the layers; that way, the loss of a single component in the chain will not destroy the state of the application. Keeping the application state in its own autonomous location, like a distributed NoSQL DB cluster, allows the application to function without skipping a beat in the event of a component failure.
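To make that concrete, here is a hedged CloudFormation sketch (YAML) of an Auto Scaling group spread across two subnets, with a CloudWatch alarm that triggers a scale-out policy. The AMI, subnet IDs, instance sizes, and threshold are placeholders:

    Resources:
      AppLaunchConfig:
        Type: AWS::AutoScaling::LaunchConfiguration
        Properties:
          ImageId: ami-xxxxxxxx              # placeholder AMI baked with the application
          InstanceType: m3.medium            # placeholder instance size

      AppAutoScalingGroup:
        Type: AWS::AutoScaling::AutoScalingGroup
        Properties:
          LaunchConfigurationName: !Ref AppLaunchConfig
          MinSize: "2"                       # always keep at least two instances running
          MaxSize: "6"
          VPCZoneIdentifier:
            - subnet-aaaaaaaa                # placeholder subnet in one AZ
            - subnet-bbbbbbbb                # placeholder subnet in a second AZ

      ScaleOutPolicy:
        Type: AWS::AutoScaling::ScalingPolicy
        Properties:
          AutoScalingGroupName: !Ref AppAutoScalingGroup
          AdjustmentType: ChangeInCapacity
          ScalingAdjustment: 1

      HighCPUAlarm:
        Type: AWS::CloudWatch::Alarm
        Properties:
          Namespace: AWS/EC2
          MetricName: CPUUtilization
          Dimensions:
            - Name: AutoScalingGroupName
              Value: !Ref AppAutoScalingGroup
          Statistic: Average
          Period: 300
          EvaluationPeriods: 2
          Threshold: 75                      # placeholder CPU threshold
          ComparisonOperator: GreaterThanThreshold
          AlarmActions:
            - !Ref ScaleOutPolicy            # Ref on a scaling policy returns its ARN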

Finally, a DevOps, Continuous Integration, or Continuous Delivery methodology should be adopted for application development. This allows changes to be tested automatically before being pushed into production and also provides a high level of business agility, the same kind of business agility that running in the Cloud is meant to provide.

-Ryan Kennedy, Senior Cloud Architect


Reevaluate your Virtual Private Cloud (VPC)

With the New Year come the resolutions. When the clock struck midnight on January 1st, 2015, many people turned the page on 2014 and promised themselves an act of self-improvement, often eating healthier or going to the gym more regularly. With the New Year in mind, I thought I would put a spin on the typical resolution and make it about AWS.

How could you improve your AWS environment? Without getting too overzealous, let's focus on the fundamental AWS network infrastructure, specifically the AWS Virtual Private Cloud (VPC). An AWS VPC is a logically isolated, user-controlled piece of the AWS Cloud where you can launch and use other AWS resources. You can think of it as your own slice of AWS network infrastructure that you can fully customize and tailor to your needs. So let's talk about VPCs and how you can improve yours.

  • Make sure you’re using VPCs! The simple act of implementing a VPC can put you way ahead of the game. VPCs provide a ton of customization options from defining your own VPC size via IP addressing; to controlling subnets, route tables, and gateways for controlling network flow between your resources; to even defining fine-grained security using security groups and network ACLs. With VPCs you can control things that simply can’t be done when using EC2-Classic.
  • Are you using multiple Availability Zones (AZs)? An AZ is a distinct isolated location engineered to be inaccessible from failures of other AZs. Make sure you take advantage of using multiple AZs with your VPC. Often time instances are just launched into a VPC with no rhyme or reason. It is great practice to use the low-latency nature and engineered isolation of AZs to facilitate high availability or disaster recovery scenarios.
  • Are you using VPC security groups? “Of course I am.” Are you using network ACLs? “I know they are available, but I don’t use them.” Are you using AWS Identity and Access Management (IAM) to secure access to your VPCs? “Huh, what’s an IAM?!” Don’t fret, most environments don’t take advantage of all the tools available for securing a VPC, however now is the time reevaluate your VPC and see if you can or even should use these security options. Security groups are ingress and egress firewall rules you place on individual AWS resources in your VPC and one of the fundamental building blocks of an environment. Now may be a good time to audit the security groups to make sure you’re using the principle of least privilege, or not allowing any access or rules that are not absolutely needed. Network ACLs work at the subnet level and may be useful in some cases. In larger environments IAM may be a good idea if you want more control of how resources interact with your VPC. In any case there is never a bad time to reevaluate security of your environment, particularly your VPC.
  • Clean up your VPC! One of the most common issues in AWS environments are resources that are not being used. Now may be a good time to audit your VPC and take note of what instances you have out there and make sure you don’t have resources racking up unnecessary charges. It’s a good idea to account for all instances, leftover EBS volumes, and even clean up old AMIs that may be sitting in your account.  There are also things like extra EIPs, security groups, and subnets that can be cleaned up. One great tool to use would be AWS Trusted Advisor. Per the AWS service page, “Trusted Advisor inspects your AWS environment and finds opportunities to save money, improve system performance and reliability, or help close security gaps.”
  • Bring your VPC home. AWS, being a public cloud provider, allows you to create VPCs that are isolated from everything, including your on-premise LAN or datacenter. Because of this isolation all network activity between the user and their VPC happens over the internet. One of the great things about VPCs are the many types of connectivity options they provide. Now is the time to reevelautate how you use VPCs in conjunction with your local LAN environment. Maybe it is time to setup a VPN and turn your environment into a hybrid cloud and physical environment allowing all communication to pass over a private network. You can even take it one step further by incorporating AWS Direct Connect, a service that allows you to establish private connectivity between AWS and your datacenter, office, or colocation environment. This can help reduce your network costs, increase bandwidth throughput, and provide a more consistent overall network experience.
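As a hedged illustration of the first three points above, here is a minimal CloudFormation sketch (YAML) of a VPC with subnets in two Availability Zones and a least-privilege security group that only accepts HTTPS from a load balancer security group. The CIDR ranges, AZ names, and referenced group ID are placeholders:

    Resources:
      MyVPC:
        Type: AWS::EC2::VPC
        Properties:
          CidrBlock: 10.20.0.0/16            # placeholder address space

      PrivateSubnetA:
        Type: AWS::EC2::Subnet
        Properties:
          VpcId: !Ref MyVPC
          CidrBlock: 10.20.1.0/24
          AvailabilityZone: us-west-2a       # placeholder AZ

      PrivateSubnetB:
        Type: AWS::EC2::Subnet
        Properties:
          VpcId: !Ref MyVPC
          CidrBlock: 10.20.2.0/24
          AvailabilityZone: us-west-2b       # placeholder AZ

      AppSecurityGroup:
        Type: AWS::EC2::SecurityGroup
        Properties:
          VpcId: !Ref MyVPC
          GroupDescription: Least-privilege access to the app tier
          SecurityGroupIngress:
            - IpProtocol: tcp
              FromPort: 443
              ToPort: 443
              SourceSecurityGroupId: sg-xxxxxxxx   # placeholder ID of the load balancer security group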

 

These are just a few things you can do when reevaluating your AWS VPC for the New Year. By following these guidelines you can gain efficiencies you didn't have before and rest assured that your environment is in the best shape possible for all your AWS goals of 2015.

-Derek Baltazar, Senior Cloud Engineer
