
Increasing Your Cloud Footprint

The jump to the cloud can be a scary proposition. For an enterprise with systems deeply embedded in traditional infrastructure like back-office computer rooms and datacenters, the move to the cloud can be daunting. The thought of having all of your data in someone else's hands can make some IT admins cringe. However, once you start looking into cloud technologies, you begin to see some of the great benefits, especially with providers like Amazon Web Services (AWS). The cloud can be cost-effective, elastic, scalable, flexible, and secure. That same IT admin cringing at the thought of their data in someone else's hands may finally realize that AWS is a bit more secure than a computer rack sitting under an employee's desk in a remote office. Once the decision is finally made to "try out" the cloud, the planning phase can begin.

Most of the time the biggest question is, "How do we start with the cloud?" The answer is to use a phased approach. By picking applications and workloads that are less mission-critical, you can try the newest cloud technologies with less risk. When deciding which workloads to move, ask yourself the following questions: Is there a business need for moving this workload to the cloud? Is the technology a natural fit for the cloud? What impact will this have on the business? If all of those questions can be suitably answered, your workloads are likely to be successful in the cloud.

One great place to start is with archiving and backups. These workloads are important, but the data you're dealing with is likely just a copy of data you already have, so it is considerably less risky. The easiest way to start with archives and backups is to try out S3 and Glacier. Many of the backup utilities you may already be using, like Symantec NetBackup and Veeam Backup & Replication, have cloud editions that can back up directly to AWS. This allows you to start using the cloud without changing much of your existing backup process. By moving less critical workloads, you are taking the first steps in increasing your cloud footprint.
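If you want to script this yourself rather than rely on a backup product, here is a minimal boto3 sketch of the same idea; the bucket name and backup path are hypothetical. It uploads an archive to S3 and adds a lifecycle rule that transitions old backups to Glacier.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-backup-bucket"  # hypothetical bucket name

# Upload a nightly backup archive to S3 (path is hypothetical).
s3.upload_file("/backups/nightly-2014-01-15.tar.gz", BUCKET,
               "nightly/nightly-2014-01-15.tar.gz")

# Lifecycle rule: move backups to Glacier after 30 days,
# expire them after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-backups",
            "Filter": {"Prefix": "nightly/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }]
    },
)
```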

Now that you have moved your backups to AWS using S3 and Glacier, what's next? The next logical step is to try some of the other services AWS offers. Another workload that can often be moved to the cloud is Disaster Recovery. DR is an area that will let you use more AWS services, like VPC, EC2, EBS, RDS, Route53, and ELBs. DR is a perfect way to increase your cloud footprint because it allows you to reconstruct your current environment, which you should already be very familiar with, in the cloud. A Pilot Light DR solution is one type commonly seen in AWS. In the Pilot Light scenario, the DR site runs minimal systems and resources, with the core elements already configured to enable rapid recovery once a disaster happens. To build a Pilot Light DR solution you would create the AWS network infrastructure (VPC), deploy the core AWS building blocks needed for the minimal Pilot Light configuration (EC2, EBS, RDS, and ELBs), and determine the process for recovery (Route53). When it is time for recovery, all the other components can be quickly provisioned to give you a fully working environment. By moving DR to the cloud you've increased your cloud footprint even more and are on your way to cloud domination!
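To make the recovery step concrete, here is a minimal boto3 sketch of bringing a Pilot Light environment online; the instance ID, hosted zone ID, and record name are hypothetical. It starts the pre-configured core instance and repoints DNS through Route53.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
r53 = boto3.client("route53")

PILOT_INSTANCE = "i-0123456789abcdef0"  # hypothetical stopped DR instance
ZONE_ID = "Z1EXAMPLE"                   # hypothetical hosted zone
RECORD = "app.example.com."

# 1. Power on the pre-configured core systems kept stopped in the DR VPC.
ec2.start_instances(InstanceIds=[PILOT_INSTANCE])
ec2.get_waiter("instance_running").wait(InstanceIds=[PILOT_INSTANCE])

# 2. Point DNS at the recovered environment (here, the instance's public IP).
ip = ec2.describe_instances(InstanceIds=[PILOT_INSTANCE]) \
        ["Reservations"][0]["Instances"][0]["PublicIpAddress"]
r53.change_resource_record_sets(
    HostedZoneId=ZONE_ID,
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {"Name": RECORD, "Type": "A", "TTL": 60,
                              "ResourceRecords": [{"Value": ip}]},
    }]},
)
```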

The next logical step is to move Test and Dev environments into the cloud. Here you can get creative with the way you use AWS technologies. When building systems on AWS, make sure to follow the Architecting Best Practices: designing for failure means nothing will fail, decouple your components, take advantage of elasticity, build security into every layer, think parallel, and don't fear constraints! Start with a proof of concept (POC) in the development environment, and use AWS reference architectures to aid in the learning and planning process. Next, deploy your legacy application in the new environment and migrate the data. The POC is not complete until you validate that it works and that performance meets your expectations. Once you get to this point, you can re-evaluate the build and optimize it to the exact specifications needed. Finally, you're one step closer to deploying actual production workloads to the cloud!

Production workloads are obviously the most important, but with the phased approach you've taken to increase your cloud footprint, it's not that far a jump from the other workloads you now have running in AWS. To be successful with AWS, remember to keep up with the rapid pace of the technology (including improved services and price drops), that security is your responsibility as well as Amazon's, and that there isn't a one-size-fits-all solution. Lastly, every workload you implement in the cloud should still have the same stringent security and comprehensive monitoring you would apply to any of your on-premises systems.

Overall, a phased approach is a great way to start using AWS. Start with simple services and traditional workloads that are a natural fit for AWS (e.g., backups and archiving). Next, explore other AWS services by building out environments that are familiar to you (e.g., DR). Finally, experiment with POCs and the entire gamut of AWS services to make production operations more efficient. Like many new technologies, cloud adoption takes time. By increasing your cloud footprint over time, you can set expectations for cloud technologies in your enterprise and make the move a more comfortable proposition for all.

-Derek Baltazar, Senior Cloud Engineer


AWS Identity and Access Management (IAM)

Dealing with organizational change is a challenge in today's fast-paced business environment. Long gone are the days when employees stayed with companies until retirement. Many employees now move between companies for a promotion, a better salary, or new challenging opportunities. Managing organizational change in terms of user access is becoming more and more complex as the technology landscape changes. With systems accessible over the network, IT shops can't just deny ex-employees physical access to the building; they have to cut off their network credentials as well. With the proliferation of cloud technologies this becomes even more of a challenge, because your digital assets are accessible over the internet from anywhere in the world. In many technology-centric companies, managing login credentials and access is paramount for securing the assets of the business and coping with organizational change.

To solve this problem AWS offers a service called Identity and Access Management (IAM). IAM is an AWS feature that allows you to regulate use of and access to AWS resources. With IAM you can create and manage users and groups for access to your AWS environment, and you can assign permissions to those users and groups to allow or deny access. IAM lets you assign users access keys, passwords, and even Multi-Factor Authentication (MFA) devices for accessing your AWS environment. IAM also supports federated users, a way to grant access using credentials that expire and are manageable through traditional corporate directories like Microsoft Active Directory.

With IAM you can set permissions based on AWS-provided policy templates like "Administrator Access," which allows full access to all AWS resources and services; "Power User Access," which provides full access to all AWS resources and services but does not allow management of users and groups; or even "Read Only Access." These policies can be applied to users and groups. Some provided policy templates limit users to specific services, such as "Amazon EC2 Full Access" and "Amazon EC2 Read Only Access," which give a user full access and read-only access to EC2 via the AWS Management Console, respectively.

[Screenshot: User Permissions]
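The same templates can also be applied programmatically. Below is a minimal boto3 sketch, with a hypothetical user name, that attaches the AWS-provided "Amazon EC2 Read Only Access" template to a user.

```python
import boto3

iam = boto3.client("iam")

# Grant a user the AWS-provided "Amazon EC2 Read Only Access" template.
iam.attach_user_policy(
    UserName="jsmith",  # hypothetical user
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
)
```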

IAM also allows you to define your own policies to manage permissions. Say you want an employee to be able to only start and stop instances; you can use the IAM Policy Generator to create a custom policy that does just that. You select the effect (Allow or Deny), the specific service, and the action. IAM also lets you layer permissions by adding additional statements to the policy.

[Screenshot: Edit Permissions]
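For comparison, here is a minimal boto3 sketch of a custom policy equivalent to what the Policy Generator would produce for the start/stop scenario above; the group and policy names are hypothetical.

```python
import boto3, json

iam = boto3.client("iam")

start_stop_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",                # the effect
        "Action": ["ec2:StartInstances",  # the actions
                   "ec2:StopInstances",
                   "ec2:DescribeInstances"],
        "Resource": "*",                  # applies to all instances
    }],
}

# Attach the policy inline to a group (names are hypothetical).
iam.put_group_policy(
    GroupName="Operators",
    PolicyName="StartStopInstances",
    PolicyDocument=json.dumps(start_stop_policy),
)
```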

Once you create a policy, you can apply it to any user or group, and it takes effect immediately. When something changes in the organization, like an employee leaving, IAM simplifies access management: you can simply delete the user or the policy attached to that user. If an employee moves from one group to another, it is easy to reassign the user to a different group with the appropriate access level. As you can see, the variety of policy rules is extensive, allowing you to create very fine-grained permissions around your AWS resources and services.
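A minimal boto3 sketch of those two scenarios, with hypothetical user and group names, might look like this:

```python
import boto3

iam = boto3.client("iam")

# Employee changes teams: move them to a group with the appropriate access.
iam.remove_user_from_group(GroupName="Developers", UserName="jsmith")
iam.add_user_to_group(GroupName="Operators", UserName="jsmith")

# Employee leaves: revoke console access and delete the user.
# (Access keys, MFA devices, and attached policies must be removed first.)
iam.delete_login_profile(UserName="jsmith")
iam.delete_user(UserName="jsmith")
```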

Another great thing about IAM is that it's a free service included with every AWS account, yet it is surprising how many people overlook this powerful tool. It is highly recommended to always use IAM with any AWS account. It gives you an organized way to manage users and access to your AWS account, and it simplifies the management nightmare of maintaining access controls as the environment grows.

-Derek Baltazar

Senior Cloud Engineer


Cloud Security: AWS

There are four main reasons why companies are moving to the cloud: agility, availability, cost, and security. When I met with the CIO of a prominent movie studio in LA earlier this week, he said, "The primary area that we need to understand is security. Our CEO does not want any critical information leaving or being stored offsite." While the CEO's concern is valid, cloud providers like Amazon Web Services (AWS) are taking extraordinary measures to ensure both privacy and security on their platform. Below is an overview of the measures taken by AWS.

  • Accreditations and Certifications – AWS has created a compliance program to help customers understand the substantial practices in place for both data protection and security to meet either government or industry requirements, for example PCI DSS Level 1, ITAR, etc. for government and HIPAA, MPAA, etc. for industry.
  • Data Protection and Privacy – AWS adheres to strict data protection and privacy standards and regulations, including FISMA, Sarbanes-Oxley, etc. AWS datacenter employees are given limited access to the location of customer systems on an as-needed basis. Disks are also shredded and never re-used by another customer.
  • Physical Security – Infrastructure is located in nondescript AWS-controlled datacenters. The location of and access into each datacenter is limited to employees with legitimate business reasons (access is revoked when the business reason ends). Physical access is strictly controlled at both the perimeter and building ingress points.
  • Secure Services – AWS infrastructure services are designed and managed in accordance with security best practices, as well as multiple security compliance standards. Infrastructure services contain multiple capabilities that restrict unauthorized access or usage without sacrificing the flexibility that customers demand.
  • Shared Responsibility – A shared responsibility exists for compliance and security on the AWS cloud. AWS owns the facilities, the infrastructure (compute, network, and storage), physical security, and the virtualization layer. The customer owns applications, firewalls, network configuration, the operating system, and security groups.

The AWS cloud provides customers with end-to-end privacy and security via its collaboration with validated experts like NASA, industry best practices and its own experience building and managing global datacenters. AWS documents how to leverage these capabilities for customers. To illustrate: I recently met with a VP of Infrastructure for a $1B+ SaaS company in San Francisco. He said, “We are moving more workloads to AWS because it is so secure.” The people, process and technology are in place to achieve the highest level of physical and virtual privacy and security.

-Josh Lowry, General Manager-West


How secure is secure?

There have been numerous articles, blogs, and whitepapers about the security of the cloud as a business solution. Amazon Web Services has a site devoted to extolling its security virtues, and several sites devote themselves entirely to the ins and outs of AWS security. So rather than walk through each and every security feature of AWS and try to convince you how secure the environment can be, my goal is to share a real-world example of security that can be improved by moving from on-premises datacenters to AWS.

Many AWS implementations are used for hosting web applications, most of which are Internet accessible. Obviously, if your environment is for internal use only you can lock down security even further, but for the purposes of this exercise, we're assuming Internet-facing web applications. The inherent risk with any Internet-accessible application is that exposure to the Internet grants hackers and malicious users access to your environment, along with honest users whose machines are infected with malware, viruses, or Trojans.

As with on-premises and colocation-based web farms, AWS offers the standard security practice of isolating customers from one another, so that if one customer experiences a security breach, all other customers remain secure. And of course, AWS Security Groups function like traditional firewalls, allowing traffic only through permitted ports to and from specific destinations and sources. Where AWS moves ahead of traditional datacenters is in the flexibility Security Groups and Network ACLs give you to respond to attacks. Consider a web farm with components suspected of being compromised: AWS Security Groups can be created in seconds to isolate the suspected components from the rest of the network. In a traditional datacenter environment, isolating those components may require complex network changes to keep an infection from spreading over the network, something AWS blocks by default.
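As an illustration, here is a minimal boto3 sketch of that isolation step, with hypothetical VPC and instance IDs: create a quarantine Security Group that permits no traffic and swap the suspect instance onto it.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a Security Group intended to carry no traffic at all.
sg = ec2.create_security_group(
    GroupName="quarantine",
    Description="No traffic in or out",
    VpcId="vpc-0123456789abcdef0",  # hypothetical VPC
)
# New groups have no inbound rules but allow all egress, so revoke egress.
ec2.revoke_security_group_egress(
    GroupId=sg["GroupId"],
    IpPermissions=[{"IpProtocol": "-1",
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# Swap the compromised instance onto the quarantine group in seconds.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",  # hypothetical instance
    Groups=[sg["GroupId"]],
)
```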

AWS often talks about scalability, the ability to grow and shrink the environment to meet demand. That capability extends to security features as well! Need another firewall? Just add another Security Group; there is no device to install. Adding another subnet, VPN, or firewall takes minutes, with no action required from on-premises staff. No more waiting while network cables are moved, hardware is installed, or devices are physically reconfigured when you need security updates.

Finally, no matter how secure an environment is, no security plan is complete without a remediation plan. AWS has tools that provide remediation with little to no downtime. A standard practice in AWS environments is to take regular snapshots of EC2 instances (servers). These snapshots can be used to re-create a compromised or non-functional component in minutes, rather than going through the lengthy restore process for a traditional server. Additionally, 2nd Watch recommends taking an initial image of each component so that, in the event of a failure, there is a fallback point to a known good configuration.
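A minimal boto3 sketch of both practices, with hypothetical volume and instance IDs, might look like this:

```python
import boto3

ec2 = boto3.client("ec2")

# Regular snapshot of a component's EBS volume.
ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                    Description="nightly snapshot of web-01")

# One-time baseline image of the whole instance: a known good
# configuration to fall back to if the component is ever compromised.
ec2.create_image(InstanceId="i-0123456789abcdef0",
                 Name="web-01-baseline",
                 Description="Known good configuration")
```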

So how secure is secure? With the ability to respond faster, scale as necessary, and recover in minutes, the Amazon cloud is pretty darn secure! And of course, this is only the tip of the iceberg for AWS cloud security; more will follow throughout the rest of December here on our blog, and please check out Amazon's Security Center and whitepapers at the official link above.

-Keith Homewood, Cloud Architect


What is CloudTrail?

Amazon Web Services™ (AWS) released a new service at re:Invent a few weeks ago that will have operations and security managers smiling. CloudTrail is a web service that records AWS API calls and stores the logs in S3. This gives organizations the visibility into their AWS infrastructure they need to maintain proper governance of changes to their environment.
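Turning CloudTrail on takes just a couple of calls. Here is a minimal boto3 sketch; the trail and bucket names are hypothetical, and the bucket needs a policy that allows CloudTrail to write to it.

```python
import boto3

ct = boto3.client("cloudtrail")

# Create a trail that delivers API-call logs to an S3 bucket, then start it.
ct.create_trail(Name="account-trail",
                S3BucketName="example-cloudtrail-logs")
ct.start_logging(Name="account-trail")

# From here, every recorded API call lands in S3 as compressed JSON,
# ready for governance and change-auditing tools to consume.
```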

2nd Watch was pleased to announce support for CloudTrail with the launch of our 2W Atlas product. 2W Atlas is a product that organizes and visualizes AWS resources and output data. Enterprise organizations need tools and services built for the cloud to properly manage these new architectures, and 2W Atlas gives their divisions and business units a tool to organize and manage CloudTrail data for each individual group.

2nd Watch is committed to providing enterprise organizations with the expertise and tools to make the cloud work for them. The tight integration 2nd Watch has developed between CloudTrail and Atlas is further proof of our expertise in delivering the enterprise solutions our customers demand.

To learn more about 2W Atlas or CloudTrail, Contact Us and let us know how we can help.

-Matt Whitney, Sales Executive


Operational Framework

Cloud Services are becoming more commonplace in the Enterprise. As a result, our ability to learn from past successes and failures becomes vital for the effective launch of new IT strategies. One way to ensure this is applied in practice is through the development of an Operational Framework. An Operational Framework is a key element of cloud strategy that needs to be developed, reviewed, and executed to ensure that organizational goals are achieved and lessons applied.

The items outlined in an Operational Framework provide guidance to organizations and help them create, operate, and support IT services while ensuring that their investments in IT deliver business value at acceptable levels of risk*.

Operational Frameworks can be broken down into two separate categories: IT Operations and Business Operations. IT Operations consist of Security, Fault Tolerance, and Performance. As you develop your operational user guide within your organization, you want to think about these things in parallel.

Security Operations are about taking the time to do things right the first time. Security Groups need to be audited so that ports are not left open. CIDR configurations need to be examined, along with IAM use for the account. S3 bucket permissions need to be reviewed so that you don't risk data loss or exposure. IAM password policies need to be implemented, and RDS Security Group access restricted, to guard against potential vulnerabilities. Organizations can eliminate many potential headaches by examining these aspects early in the strategy.
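Parts of this review are easy to script. Here is a minimal boto3 sketch of two of the checks described above: Security Groups with rules open to the world, and S3 buckets granting access to all users.

```python
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Security Groups with inbound rules open to 0.0.0.0/0.
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in sg["IpPermissions"]:
        if any(r.get("CidrIp") == "0.0.0.0/0"
               for r in perm.get("IpRanges", [])):
            print("Open to the world:", sg["GroupId"], perm.get("FromPort"))

# S3 buckets whose ACL grants access to all users.
PUBLIC = "http://acs.amazonaws.com/groups/global/AllUsers"
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    if any(g["Grantee"].get("URI") == PUBLIC for g in acl["Grants"]):
        print("Public bucket:", bucket["Name"])
```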

Similarly, Fault Tolerance reviews can help you increase availability and redundancy within your environments. By taking advantage of health checks, you can increase your uptime and capture the full benefits of cloud technology. Within this part of your Operational Framework, you should verify that you are (one such check is sketched after this list):

  • Snapshotting your EBS volumes frequently enough.
  • Architecting your environment to take advantage of multiple Availability Zones or even multiple regions.
  • Taking advantage of Elastic Load Balancers and Auto Scaling groups, configured optimally to allow for peak traffic flow and performance.
  • Reviewing your VPN tunnel redundancy so that it is configured ideally.
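Here is a minimal boto3 sketch of the first check: find EBS volumes whose most recent snapshot is older than a chosen threshold (seven days here, an assumption you would tune to your own policy).

```python
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
threshold = datetime.now(timezone.utc) - timedelta(days=7)

# Record the most recent snapshot time for each volume we own.
latest = {}
for s in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
    vol = s["VolumeId"]
    if vol not in latest or s["StartTime"] > latest[vol]:
        latest[vol] = s["StartTime"]

# Flag volumes with no snapshot, or only stale ones.
for vol in (v["VolumeId"] for v in ec2.describe_volumes()["Volumes"]):
    last = latest.get(vol)
    if last is None or last < threshold:
        print("Stale or missing snapshot:", vol, last)
```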

Fault tolerance is vital to an organization’s IT Operations and should be reviewed often and in detail.

Lastly, within IT Operations, review your performance metrics within your cloud environment. Cloud deployments offer the ability to take advantage of a suite of powerful services, but we often see customers unintentionally over- or under-provision their environments, leading to waste. By improving the performance of your services and using just what you need, you can get far more out of your operational budget.

For Performance Operations, you should:

  • Review your EC2 instances, making sure you are not over- or under-provisioned (a sketch of this review follows the list).
  • Review your service limits so that when your Auto Scaling groups kick in, they can actually scale to meet demand. Provisioned IOPS are commonly misunderstood and overestimated.
  • Review your EBS configurations and make sure that you are utilizing your PIOPS accordingly.
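Here is a minimal boto3 sketch of that provisioning review: flag instances whose average CPU over the last two weeks suggests they are over-provisioned. The 10% threshold is an assumption, not a rule.

```python
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cw = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

for res in ec2.describe_instances()["Reservations"]:
    for inst in res["Instances"]:
        # Pull daily average CPU for the last 14 days.
        stats = cw.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId",
                         "Value": inst["InstanceId"]}],
            StartTime=now - timedelta(days=14),
            EndTime=now,
            Period=86400,
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        if points:
            avg = sum(p["Average"] for p in points) / len(points)
            if avg < 10:  # assumed threshold for "over-provisioned"
                print(inst["InstanceId"], inst["InstanceType"],
                      "averages %.1f%% CPU" % avg)
```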

Other things to consider within this group are:

  • Your DNS provider
  • Using Glacier for Archiving
  • Utilization of yearly third-party audits. Having a second set of eyes audit your environment can usually pay for itself within a few months.

Business Operations and Corporate Governance are a bit easier because they focus strictly on utilization. Most importantly, you want to make sure that you optimize your use of Reserved Instances. By developing a proper strategy for reservations, you will not only save money but also guarantee that the resources your environment needs are available, even in the event of an outage. Over- and under-utilization are equally detrimental to your bottom line. Plan to review your usage quarterly, and take advantage of billing software as needed to tighten your understanding of your environment. ELBs, EBS volumes, unused Elastic IPs, and idle RDS instances should also be examined, as waste can occur easily with these services as well. Within your Business Operations framework, communication should flow freely between your IT department, user groups, and the finance department. The free flow of information will allow for future innovation, increased budget parameters, and a unified corporate direction that everyone can agree with.
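As a starting point for that quarterly review, here is a minimal boto3 sketch that compares active reservations against running instances per instance type. Real reservation matching also depends on Availability Zone, platform, and tenancy, so treat this as a rough first pass.

```python
import boto3
from collections import Counter

ec2 = boto3.client("ec2")

# Count active reservations by instance type.
reserved = Counter()
for ri in ec2.describe_reserved_instances(
        Filters=[{"Name": "state", "Values": ["active"]}]
)["ReservedInstances"]:
    reserved[ri["InstanceType"]] += ri["InstanceCount"]

# Count running instances by instance type.
running = Counter()
for res in ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]:
    for inst in res["Instances"]:
        running[inst["InstanceType"]] += 1

# Any gap in either direction is money left on the table.
for itype in sorted(set(reserved) | set(running)):
    print(itype, "reserved:", reserved[itype], "running:", running[itype])
```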

By taking a few of these simple steps, you are setting yourself up for a successful cloud strategy and implementation for years to come.

*Source: paraphrased internet website

-Blake Diers, Sales Executive
