
Assessing your cloud environment and maintaining a baseline security posture

The cloud is an exciting place, where enterprise applications and workloads can be implemented and executed instantly and at scale. As we move to the cloud, we tend to use existing applications and operating systems that are baked into the cloud service provider’s framework. Baseline images and cloud virtual machines are readily available, and we quickly use these images to put our infrastructure in motion in the cloud. This presents some security challenges, as we tend to deploy first and secure second, which is not ideal from a proactive security stance.

Continuous deployment is becoming the norm. We now update and change our systems many times a day in the cloud. New features become available and new ISV applications and services roll out on a continuous basis, and we adopt them rapidly.

To maintain a healthy cloud state for our infrastructure and applications, we need to focus our attention on managing towards a baseline configuration state and desired state configuration for our systems. We need to monitor, maintain and continuously be aware of our systems’ state of health and state of configuration at any given time.

Clients are evolving so quickly in the cloud that they end up playing catchup on the security side of their cloud environment. One way to support a proactive level of security for your cloud environment is to perform continuous vulnerability assessment and remediation activities on a regular basis to ensure all areas of your cloud environment meet a security baseline.

The cloud service providers do a lot of heavy lifting to help you achieve a proactive security architecture, and they provide many different layers of tools and technologies to get you there.

The first area to focus on is your machine image library. Maintaining a secure baseline image across the catalog of your infrastructure and applications ensures you can track and maintain those image states per operational role. When a cataloged, healthy-state image undergoes an unapproved change, you become instantly aware of it and can roll back to a healthy image state, undo any compromises to image integrity and return to a secure and compliant operating state.

AWS has a great tool that helps us maintain configuration state: AWS Config. It allows you to audit, evaluate and assess configuration baselines of your cloud resources continuously. It provides strong monitoring of your AWS resource configurations, supports evaluating that state against your secure baseline and ensures your desired state is maintained and recovered. It gives you the ability to quickly review changes to configuration state and how that state changes across the broad set of resources that make up your infrastructure or application.
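To make that concrete, here is a minimal Terraform sketch that records configuration changes and attaches one AWS-managed Config rule. The IAM role and the delivery channel (an S3 bucket for configuration snapshots) are assumed to exist and are omitted for brevity, and the rule shown is just one example of a baseline check:

resource "aws_config_configuration_recorder" "baseline" {
  name     = "baseline-recorder"
  role_arn = "${aws_iam_role.config.arn}"   # an IAM role Config can assume (not shown)

  recording_group {
    all_supported = true   # record changes for every supported resource type
  }
}

resource "aws_config_config_rule" "encrypted_volumes" {
  name = "encrypted-volumes"

  source {
    owner             = "AWS"
    source_identifier = "ENCRYPTED_VOLUMES"   # AWS-managed rule: flag unencrypted EBS volumes
  }

  depends_on = ["aws_config_configuration_recorder.baseline"]
}

With something like this in place, any volume that drifts out of that baseline shows up as noncompliant in the Config console.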

AWS Config also provides deep insight into resources, their historical configuration records and their current configuration, and it supports strong compliance capabilities tailored to your organization’s resource guidelines.

Utilizing tools such as AWS Config is a strong first step on the path to continuous security baseline protection and managing against vulnerabilities and unwarranted changes to your resources in production. Beyond AWS Config, AWS provides a strong suite of tools and technologies to help us achieve a more comprehensive security baseline. Check out AWS Systems Manager, which supports a common agent topology across Windows and Linux with the SSM agent, letting clients scan and maintain resource health and information continuously – not to mention it’s a great overall agent for core IT operations management in AWS. And we can maintain strong operational health of our cloud resources in AWS with Amazon CloudWatch.

One recent addition for security is Amazon GuardDuty, which provides intelligent threat detection and continuous monitoring to protect your AWS accounts and workloads. Amazon GuardDuty is directly accessible through the AWS console and, with a few clicks, can be fully enabled to protect your workloads proactively.
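If you manage your environment with Terraform rather than the console, a minimal sketch of the same setup is a single resource; reviewing and responding to findings still happens in the GuardDuty console:

resource "aws_guardduty_detector" "main" {
  enable = true   # turn on GuardDuty threat detection for this account and region
}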

If you need assistance in achieving strong continuous security for your AWS cloud environment, 2nd Watch offers services to help clients assess and strengthen the security posture of their cloud environment today. We can also help create and customize a vulnerability assessment and remediation strategy that will have long-standing benefits in continuously achieving strong security defense in depth in the cloud.

-Peter Meister, Sr Director of Product Management


Governance, Risk and Compliance – Drive Change Across the Organization

Governance, Risk and Compliance (GRC) is a standard framework that helps drive organizations toward a common set of goals and principles. The overarching theme is strategically focused on how technology utilization and operations tie directly back to an organization’s business goals and, in many cases, aspirations.

There are many facets to GRC. In the cloud it means the same thing as it did in the datacenter. We need to ensure IT organizes around the business, and we need to make sure risk is minimized and compliance is maintained.

At 2nd Watch we work with clients across all areas of GRC. Clients focus on each area to varying degrees, and some areas are more important depending on the vertical the client operates in.

The cloud extends beyond the physical bounds of an organization, and with that come new challenges and a shared cloud responsibility model. The CSP is responsible for the setup and physical maintenance of the underlying cloud infrastructure. We work with our cloud ISVs’ and providers’ tools, technologies and best practices to help maintain strong governance and lower risk while meeting compliance.

The landscape of software, tools and solutions to support governance, risk and compliance in the cloud marketplace is daunting. 2nd Watch focuses on providing holistic support to our clients around GRC. We believe there are fantastic capabilities directly inside the cloud management portals to help customers along the journey to a strong GRC framework.

In Microsoft Azure we can utilize Compliance Manager, a workflow-based assessment tool that enables organizations to track, assign and verify regulatory compliance procedures and activities in support of Microsoft Cloud technologies – including Office 365 and Dynamics. It supports ISO 27001, ISO 27018 and NIST, as well as regulatory compliance around HIPAA and GDPR.  It is a foundational tool within Microsoft Azure to help you along the path to strong governance, risk and compliance around Microsoft Cloud technologies.

With Amazon Web Services we have a complete set of core cloud operations management tools within the AWS console to help us bolster governance and security and reduce risk. Amazon also provides a full prescriptive set of compliance quick reference guides, which give an overview of how to maintain a compliant cloud environment through strong security and controls validation, plus insight and monitoring for activity and security assurance.

Amazon has a complete Cloud Compliance Center where clients can tap into an abundant set of resources to help along the way.

Beyond the tools, both Microsoft Azure and AWS provide strategic support with partners around compliance. There are many accelerators and programs that organizations can request from Amazon and Microsoft to help them achieve and maintain GRC specifically tuned to the cloud.

GRC is unique to each organization. Cloud providers bring a substantial set of resources and technologies, along with great prescriptive guidance and best practices to help and guide you in achieving a strategic GRC framework and set of processes and procedures in your organization.

Take advantage of these built-in capabilities as you start to look at other tools and technologies to complete your holistic approach to governance, risk and compliance, and please reach out to 2nd Watch to find out how we can support you along the way.

-Peter Meister, Sr Director of Product Management


How to Use AWS IAM with STS for access to AWS resources

With increased focus on security and governance in today’s digital economy, I want to highlight a simple but important use case that demonstrates how to use AWS Identity and Access Management (IAM) with Security Token Service (STS) to give trusted AWS accounts access to resources that you control and manage.

Security Token Service is an extension of IAM and is one of several web services offered by AWS that does not incur any cost to use.  But, unlike IAM, there is no user interface on the AWS console to manage and interact with STS. Rather, all interaction is done entirely through one of several extensive SDKs or directly using common HTTP protocol.  I will be using Terraform to create some simple resources in my sandbox account and the .NET Core SDK to demonstrate how to interact with STS.

The main purpose and function of STS is to issue temporary security credentials for AWS resources to trusted and authenticated entities.  These credentials operate identically to the long-term keys that typical IAM users have, with a couple of special characteristics:

  • They automatically expire and become unusable after a short and defined period of time elapses
  • They are issued dynamically

These characteristics offer several advantages in terms of application security and development and are useful for cross-account delegation and access.  STS solves two problems for owners of AWS resources:

  • They satisfy the IAM best-practice requirement to regularly rotate access keys
  • They eliminate the need to distribute access keys to external entities or store them within an application

One common scenario where STS is useful involves sharing resources between AWS accounts.  Let’s say, for example, that your organization captures and processes data in S3, and one of your clients would like to push large amounts of data from resources in their AWS account to an S3 bucket in your account in an automated and secure fashion.

While you could create an IAM user for your client, your corporate data policy requires that you rotate access keys on a regular basis, and this introduces challenges for automated processes.  Additionally, you would like to limit the distribution of access keys to your resources to external entities.  Let’s use STS to solve this!

To get started, let’s create some resources in your AWS cloud.  Do you even Terraform, bro?

Let’s create a new S3 bucket and set the bucket ACL to be private, meaning nobody but the bucket owner (that’s you!) has access.  Remember that bucket names must be unique across all existing buckets, and they should comply with DNS naming conventions.  Here is the Terraform HCL syntax to do this:
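# A minimal sketch using the classic (pre-4.0) AWS provider syntax, where the
# ACL is set directly on the bucket.  The bucket name matches the example
# referenced later in this post -- yours must be globally unique.
provider "aws" {
  region = "us-west-2"
}

resource "aws_s3_bucket" "xaccount" {
  bucket = "d4h2123b9-xaccount-bucket"   # globally unique, DNS-compliant name
  acl    = "private"                     # owner-only access to start
}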

Great! We now have a bucket… but for now, only the owner can access it.  This is a good start from a security perspective (i.e. “least permissive” access).

[Image: what an empty bucket may look like]

Let’s create an IAM role that, once assumed, will allow IAM users with access to this role to have permissions to put objects into our bucket.  Roles are a secure way to grant trusted entities access to your resources.  You can think about roles in terms of a jacket that an IAM user can wear for a short period of time, and while wearing this jacket, the user has privileges that they wouldn’t normally have when they aren’t wearing it.  Kind of like a bright yellow Event Staff windbreaker!

For this role, we will specify that users from our client’s AWS account are the only ones that can wear the jacket. This is done by including the client’s AWS account ID in the Principal statement.  AWS Account IDs are not considered to be secret, so your client can share this with you without compromising their security.  If you don’t have a client but still want to try this stuff out, put your own AWS account ID here instead.
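In Terraform, that role and its trust policy might look like this (the account ID below is a placeholder for your client’s twelve-digit account ID):

resource "aws_iam_role" "sts_delegate" {
  name = "sts-delegate-role"

  # Trust policy: only principals in the client's AWS account can assume
  # this role.  111122223333 is a placeholder -- substitute your client's ID.
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}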


Great, now we have a role that our trusted client can wear.  But, right now our client can’t do anything except wear the jacket.  Let’s give the jacket some special powers, such that anyone wearing it can put objects into our S3 bucket.  We will do this by creating a security policy for this role.  This policy will specify what exactly can be done to S3 buckets that it is attached to. Then we will attach it to the bucket we want our client to use.  Here is the Terraform syntax to accomplish this:
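resource "aws_s3_bucket_policy" "xaccount" {
  bucket = "${aws_s3_bucket.xaccount.id}"

  # Grants the delegate role permission to put objects, but only when the
  # object is handed over with bucket-owner-full-control.
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "${aws_iam_role.sts_delegate.arn}" },
      "Action": "s3:PutObject",
      "Resource": "${aws_s3_bucket.xaccount.arn}/*",
      "Condition": {
        "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }
      }
    }
  ]
}
EOF
}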

A couple of things to note about this snippet: First, we are using Terraform interpolation to inject values from previous Terraform statements into a couple of places in the policy – specifically the ARNs of the role and bucket we created previously.  Second, we are specifying a condition for the S3 policy – one that requires a specific object ACL for the action s3:PutObject, which is accomplished by requiring the HTTP request header x-amz-acl to have a value of bucket-owner-full-control with the PUT object request.  By default, objects PUT in S3 are owned by the account that created them, even when stored in someone else’s bucket.  For our scenario, this condition requires your client to explicitly grant ownership of objects placed in your bucket to you; otherwise the PUT request will fail.

So, now we have a bucket, a policy in place on our bucket, and a role that our client can assume.  Now your client needs to get to work writing some code that will allow them to assume the role (wear the jacket) and start putting objects into your bucket.  Your client will need to know a couple of things from you before they get started:

  1. The bucket name and the region it was created in (the example above created a bucket named d4h2123b9-xaccount-bucket in us-west-2)
  2. The ARN for the role (Terraform can output this for you). It will look something like this but will have your actual AWS Account ID: arn:aws:iam::123456789012:role/sts-delegate-role

They will also need to create an IAM User in their account and attach a policy allowing the user to assume roles via STS.  The policy will look similar to this:
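Using the example role ARN from above, the policy attached to the client’s IAM user would look something like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::123456789012:role/sts-delegate-role"
    }
  ]
}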


Let’s help your client out a bit and provide some C# code snippets for .NET Core 2.0 (available for Windows, macOS and Linux). To get started, install the .NET Core SDK for your OS, then fire up a command prompt in a favorite directory and run these commands:
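dotnet new console -o s3cli
cd s3cli
dotnet add package AWSSDK.SecurityToken
dotnet add package AWSSDK.S3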

The first command creates a new console app in the subdirectory s3cli.  Then switch context to that directory, and the last two commands add the AWS SDK packages for the SecurityToken and S3 services.
Once you have the libraries in place, fire up your favorite IDE or text editor (I use Visual Studio Code), then open Program.cs and add some code:
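// A minimal sketch -- these usings and the methods below all live inside the
// Program class that `dotnet new console` generated in Program.cs.
using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.SecurityToken;
using Amazon.SecurityToken.Model;

// Requests temporary credentials from STS for the given role ARN.  The SDK
// resolves your IAM user's long-term keys from its default credential chain
// (environment variables, shared credentials file, etc.).
static async Task<Credentials> AssumeDelegateRoleAsync(string roleArn)
{
    using (var sts = new AmazonSecurityTokenServiceClient())
    {
        var response = await sts.AssumeRoleAsync(new AssumeRoleRequest
        {
            RoleArn = roleArn,
            RoleSessionName = "xaccount-put-session"   // any descriptive session name
        });
        return response.Credentials;   // temporary keys that expire automatically
    }
}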

This snippet sends a request to STS for temporary credentials using the specified ARN.  Note that the client must provide IAM user credentials to call STS, and that IAM user must have a policy applied that allows it to assume a role from STS.

This next snippet takes the STS credentials, bucket name, and region name, and then uploads the Program.cs file that you’re editing and assigns it a random key/name.  Also note that it explicitly applies the Canned ACL that is required by the sts-delegate-role:
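// Uploads this Program.cs file to the bucket under a random key, using the
// temporary STS credentials.  Note the canned ACL, which satisfies the
// bucket-owner-full-control condition in the bucket policy.
static async Task PutProgramFileAsync(Credentials stsCreds, string bucketName, string regionName)
{
    var sessionCreds = new SessionAWSCredentials(
        stsCreds.AccessKeyId, stsCreds.SecretAccessKey, stsCreds.SessionToken);

    using (var s3 = new AmazonS3Client(sessionCreds, RegionEndpoint.GetBySystemName(regionName)))
    {
        await s3.PutObjectAsync(new PutObjectRequest
        {
            BucketName = bucketName,
            Key        = Guid.NewGuid().ToString(),          // random object key/name
            FilePath   = "Program.cs",
            CannedACL  = S3CannedACL.BucketOwnerFullControl  // grant the bucket owner full control
        });
    }
}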

So, to put this all together, run this code block and make the magic happen!  Of course, you will have to define and provide proper variable values for your environment, including securely storing your credentials.
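// The entry point, also inside the Program class.  An async Main requires
// C# 7.1+, so on .NET Core 2.0 set <LangVersion>latest</LangVersion> in s3cli.csproj.
static async Task Main(string[] args)
{
    // Values from your resource owner -- these match the examples earlier in this post.
    var roleArn    = "arn:aws:iam::123456789012:role/sts-delegate-role";
    var bucketName = "d4h2123b9-xaccount-bucket";
    var regionName = "us-west-2";

    var credentials = await AssumeDelegateRoleAsync(roleArn);
    await PutProgramFileAsync(credentials, bucketName, regionName);
    Console.WriteLine("Upload complete -- go check the bucket!");
}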

Try it out from the command prompt:
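dotnet run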

If all goes well, you will have a copy of Program.cs in the bucket. Not very useful in itself, but it illustrates how to accomplish the task.

[Image: what a bucket with something in it may look like]

Here is a high-level diagram of what we put together:

[Diagram: putting it all together]

Steps:

  1. Your client uses their IAM user credentials to call AWS STS and request the role, using the ARN you gave them.
  2. STS authenticates the client’s IAM user, verifies the trust and access policies for the role, then issues temporary credentials to the client.
  3. The client uses the temporary credentials (which will expire soon) to access your S3 bucket, and since they are now wearing the Event Staff jacket, they can successfully PUT stuff in your bucket!

There are many other use-cases for STS. This is just one very simplistic example. However, with this brief introduction to the concepts, you should now have a decent idea of how STS works with IAM roles and policies, and how you can use STS to give access to your AWS resources for trusted entities. For more tips like this, contact us.

-Jonathan Eropkin, Cloud Consultant


Utilizing Amazon Systems Manager to ensure systems are securely configured and maintained

Compliance is a constant challenge today. Keeping our system images in a healthy and trusted state of compliance requires time and effort. There are countless tools and technologies on the market to help customers maintain compliance and state, so where do I start?

Amazon has built a rich set of core technologies into the Amazon Web Services console. Systems Manager is a fantastic operations management platform that can assist you with setting up and maintaining configuration and state management.

One of the first things we must focus on when we build out our core images in the cloud is the configuration of those images. What is the role of the image, what operating system am I going to utilize and what applications and/or core services do I need to enable, configure and maintain?  In the datacenter, we call these Gold Images. The same applies in the cloud.

We define these roles for our images and place them in different functional areas – Infrastructure, Web Services, Applications. We may have many core image templates for our enterprise workloads. By building these base images and maintaining them continuously, we set in motion a solid foundation for core security and core compliance of our cloud environment.

Amazon Systems Manager looks across my cloud environment and allows me to bring together all the key information about my operating resources in the cloud, centralizing the gathering of all core baseline information for my resources in one place. In the past I would have had to look at my AWS CloudWatch information in one area, my AWS CloudTrail information in another and my configuration information in yet another. Centralizing this information lets you see the holistic state of your cloud environment baselines in one console.

AWS Systems Manager provides built-in insights and dashboards that let you look across your entire cloud environment and see into and act upon your cloud resources. It shows the configuration compliance of all your resources as well as the state management and associations across them. It provides a rich ability to customize configuration and state management for your workloads, applications and resource types, and to scan and analyze continuously to ensure those configurations and states are maintained. With AWS Systems Manager you can also create your own custom compliance types to match your organization’s business requirements, then constantly scan and analyze against these compliance baselines to maintain the desired operational configuration and state at all times.

We can analyze and report on current state and quickly determine, centrally, whether our cloud services and resources are in or out of compliance. We can create baseline reports on our compliance position at any time and, with this knowledge, set in motion remediation to return our services and resources to a compliant state and configuration.

With Amazon Systems Manager we can scan all resources for patch state, determine which patches are missing, and remediate those patches manually, on a schedule or automatically to maintain patch management compliance, as in the sketch below.
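As a sketch of what that can look like in Terraform, the following association runs the AWS-managed AWS-RunPatchBaseline document in Scan mode every day against instances carrying a hypothetical PatchGroup tag. The instances are assumed to already be running the SSM agent with an appropriate instance profile:

resource "aws_ssm_association" "patch_scan" {
  name = "AWS-RunPatchBaseline"   # AWS-managed patching document

  targets {
    key    = "tag:PatchGroup"
    values = ["production"]       # hypothetical tag -- target your own instances
  }

  parameters = {
    Operation = "Scan"            # report missing patches without installing them
  }

  schedule_expression = "rate(1 day)"   # re-evaluate patch state daily
}

Switching Operation to Install (or adding a second association) turns the same mechanism into scheduled or automated remediation.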

Amazon Systems Manager also integrates with Chef InSpec, allowing you to leverage Chef InSpec to operate in a continuous compliance framework for your cloud resources.

On the road to compliance it is important to flex the tools and capabilities of your cloud provider. Amazon gives us a rich set of systems management capabilities across configuration, state management, patch management, remediation and reporting. Amazon Systems Manager is provided at no cost to Amazon customers and will help you along your journey to continuous compliance of your cloud environment across the Amazon cloud and the hybrid cloud. To learn more about using Amazon Systems Manager or your systems’ compliance, contact us.

-Peter Meister, Sr Director of Product Management


What are the Greater Risks of Cloud Computing?

 

There have been countless articles, blogs and whitepapers written on the subject of security in the cloud, and an even greater number of opinions as to the number of risks associated with a move to it.  Five, seven, ten, twenty-seven?  How many risks are associated with your or your company’s move to the cloud?  Well, in the best consultant-speak, it depends.

One could say that it depends on how far “up the stack” you’re moving.  If, for instance, you are moving from an essentially stand-alone, self-administered environment to a cloud-based presence, you most likely will be in for the security-based shock of your life.  On the other hand, if you, in the corporate sense, are moving a large, multi-national corporation to the cloud, chances are you’ve already encountered many of the challenges, such as regional compliance and legal issues, that will also be present in your move to the cloud.

The differentiator?  There are three: scale, complexity and speed.  In the hundreds of clients we have helped migrate to the cloud, not once have we come across a security issue that was unique to the cloud.  This is why the title of this article is “What are the Greater Risks of Cloud Computing?” and not “What are the Unique Risks of Cloud Computing?”  There simply aren’t any.  Let’s be clear – this isn’t to say any of these risks aren’t real. They simply aren’t unique, nor are they new.  It is just a case of a new bandwagon (the cloud) with a new crew of sensationalists ready to jump on it.

Let’s take a few of the most popularly-stated “risks of cloud computing” and see how this plays out.

Shared Technology

This often makes the list as though it were a problem unique to the cloud.  What about companies utilizing colos?  And before that, what about companies using time-shared systems – can you say payroll systems?  Didn’t they pre-date the cloud by some decades?  While there might not have been hypervisors or shared applications back in the day, there just as surely could have been shared components at some level, possibly network components or monitoring.

Loss of Data/Data Breaches

In looking at some of the most widely touted data breaches – Target, Ashley Madison, the Office of Personnel Management and Anthem, to name just a few – the compromises were attributed to “access to its network via an HVAC contractor monitoring store climate systems,” “unknown,” “contractor’s stolen credentials to plant a malware backdoor in the network,” and “possible watering hole attack that yielded a compromised administrator password.”  Your first thought might be, “Do these hacks even involve the cloud?”  It’s not clear where the data was stored in these instances, but that doesn’t stop articles about the dangers of the cloud from referencing them.  Conversely, there is an excellent article in Business Insurance with the very opposite viewpoint.  Perhaps the cloud can be a bit safer than traditional environments for one very good reason – reputation. We have seen customers move to the cloud in order to modernize their security paradigm.  The end result is a more secure environment in the cloud than they ever had on premises.

Account or Service Traffic Hijacking

Now we have a security issue that really makes use of the cloud in terms of scale and speed.  Let’s clarify what we’re talking about here: the hacking of a cloud provider and the takeover of instances for command and control, for the purpose of using them as botnets.  The hijacking of compute resources, whether personal computers, corporate or cloud resources, continues to this day.

Hacking a cloud provider follows the simple logic of robbing a bank vs. a taco stand in more ways than one.  Where there’s increased reward, there’s increased risk, to turn an old saying around a bit.  If you’re going to hit a lot of resources and make it worth your while, the cloud is the place to go.  However, know that it’s going to be a lot harder and that a lot more eyes are going to be on you and looking for you.  Interestingly, the most recent sightings of this type of activity date to about the 2009-2010 timeframe, as Amazon, Microsoft, Google and the other providers learned quickly from their mistakes.

If you were to continue down the list of other cloud security issues – malicious insiders, inadequate security controls, DDoS attacks, compromised credentials, and so on – it becomes pretty evident that there simply aren’t any that are unique.  We’ve seen them all before in one context or another; they just haven’t operated at this scale and speed before.

The next time you see an article on the dangers of the cloud, stop for a moment and think, “Is this truly a problem that has never been seen before or just one that I’ve never encountered or had to deal with before?”

-Scott Turvey, Solutions Architect


SCOR Velogica Moves to AWS for Better Security, SOC2

While some large enterprises avoid moving to the cloud because of rigid security and compliance requirements, SCOR opted for the cloud for a key block of its business precisely because of the cloud’s rigid security and compliance offerings.

SCOR is a leader in the life reinsurance market in the Americas, offering broad capabilities in risk management, capital management and value-added services and solutions. A number of primary insurers use SCOR’s automated life underwriting system, Velogica, to market life insurance policies that can be delivered at the point of sale. Other companies use Velogica as a triage tool for their fully underwritten business.

“Through the Velogica system, we get thousands of life insurance applications a day from multiple clients,” explains Dave Dorans, Senior Vice President.  “Velogica is a significant part of our value proposition and is important to the future of our business.”

Data security has always been a priority for SCOR, but the issue became even more critical as data breaches at some of the largest and most respected companies made headline news. SCOR decided to invest in a state-of-the-art data security framework for Velogica.  “We wanted clients to have full confidence in the way Velogica stores and handles the sensitive personal data of individuals,” Dorans said.

SCOR’s goal was to have Velogica accredited as a Service Organization Control (SOC) 2 organization – a competitive advantage in the marketplace – by aligning with one of the more respected information security standards in the industry.  Determining what it would take to achieve that goal became the responsibility of Clarke Rodgers, Chief Information Security Officer with SCOR Velogica. “We quickly determined that SOC2 accreditation for SCOR’s traditional, on-premise data center environment would be a monumental task, could cost millions of dollars and perhaps take years to complete.  Moreover, while SOC2 made sense for Velogica, it wasn’t necessary for other SCOR businesses.”

Once it was determined that SOC2 was business critical for the company, Rodgers analyzed the different ways of obtaining the accreditation and determined that moving to the cloud was the most efficient path. SCOR Velogica turned to 2nd Watch to help it achieve SOC2 accreditation with AWS, figuring it would be easier than making the journey on its own.

On working with 2nd Watch, Rodgers commented, “They came in and quickly understood our technical infrastructure and how to replicate it in AWS, which is a huge feat.” SCOR realized significant benefits thanks to the migration, including:

Adherence to specific security needs: In addition to its SOC2 accreditation, 2nd Watch implemented several security elements in the new AWS environment, including: encryption at rest in Amazon Elastic Block Store (EBS) volumes leveraging the AWS Key Management Service (KMS), Amazon Virtual Private Cloud (VPC) to establish a private network within AWS, security groups tuned for least-privilege access, Security-Enhanced Linux, and AWS Identity and Access Management (IAM) Multi-Factor Authentication (MFA).

AWS optimization: 2nd Watch has helped SCOR identify opportunities for optimization and efficiencies on AWS, which will help down the road if the company wishes to expand the AWS-hosted application to regions outside of North America.  “With our SOC2 Type 1 behind us, we are now focused on optimizing our resources in the AWS Cloud so we can fully exploit AWS’s capabilities to our security and business benefit,” Rodgers explains. “We will rely on 2nd Watch for guidance and assistance during this optimization phase.”

Cost savings on AWS: Rodgers hasn’t done a full analysis yet of cost savings from running the infrastructure on AWS, but he’s confident the migration will eventually cut up to 30% off the price of hosting and supporting Velogica internally.

Hear from SCOR how it achieved better security with AWS in our live webinar on April 7. Register Now
