
2nd Watch Enterprise Cloud Expertise On Display at AWS re:Invent 2017

AWS re:Invent is less than twenty days away and 2nd Watch is proud to be a 2017 Platinum Sponsor for the sixth consecutive year.  As an Amazon Web Services (AWS) Partner Network Premier Consulting Partner, we look forward to attending and demonstrating the strength of our cloud design, migration, and managed services offerings for enterprise organizations at AWS re:Invent 2017 in Las Vegas, Nevada.

About AWS re:Invent

Designed for AWS customers, enthusiasts, and even cloud computing newcomers, the nearly week-long conference is a great source of information and education for attendees of all skill levels. AWS re:Invent is THE place to connect, engage, and discuss current AWS products and services via breakout sessions ranging from introductory to expert, and to hear the latest news and announcements from key AWS executives, partners, and customers. This year’s agenda offers a full additional day of content for even more learning opportunities, more than 1,000 breakout sessions, an expanded campus, hackathons, boot camps, hands-on labs, workshops, expanded Expo hours, and the always-popular Amazonian events featuring broomball, the Tatonka Challenge, fitness activities, and the attendee welcome party known as re:Play.

2nd Watch at re:Invent 2017

2nd Watch has been a Premier Consulting Partner in the AWS Partner Network (APN) since 2012 and was recently named a leader in Gartner’s Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide (March 2017). We hold AWS Competencies in Financial Services, Migration, DevOps, Marketing & Commerce, Life Sciences, and Microsoft Workloads, and have recently completed the AWS Managed Service Provider (MSP) Partner Program Audit for the third year in a row. Over the past decade, 2nd Watch has migrated and managed AWS deployments for companies such as Crate & Barrel, Condé Nast, Lenovo, Motorola, and Yamaha.

The 2nd Watch breakout session—Continuous Compliance on AWS at Scale—will be led by cloud security experts Peter Meister and Lars Cromley. The session will focus on the need for continuous security and compliance in cloud migrations, and attendees will learn how a managed cloud provider can use automation and cloud expertise to successfully control these issues at scale in a constantly changing cloud environment. Registered re:Invent Full Conference Pass holders can add the session to their agendas here.

In addition to our breakout session, 2nd Watch will be showcasing our customers’ successes in the Expo Hall located in the Sands Convention Center (between The Venetian and The Palazzo hotels).  We invite you to stop by booth #1104 where you can explore 2nd Watch’s Managed Cloud Solutions, pick up a coveted 2nd Watch t-shirt and find out how you can win one of our daily contest giveaways—a totally custom 2nd Watch skateboard!

Want to make sure you get time with one of 2nd Watch’s Cloud Journey Masters while at re:Invent?  Plan ahead and schedule a meeting with one of 2nd Watch’s AWS Professional Certified Architects, DevOps, or Engineers.  Last but not least, 2nd Watch will be hosting its annual re:Invent after party on Wednesday, November 29. If you haven’t RSVP’d for THE AWS re:Invent Partner Party, click here to request your invitation.

AWS re:Invent is sure to be a week full of great technical learning, networking, and social opportunities.  We know you will have a packed schedule but look forward to seeing you there!  Be on the lookout for my list of “What to Avoid at re:Invent 2017” in the coming days…it’s sure to help you plan for your trip and get the most out of your AWS re:Invent experience.


–Katie Laas-Ellis, Marketing Manager, 2nd Watch


Gartner Disclaimer

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

About 2nd Watch

2nd Watch is a Premier Consulting Partner in the AWS Partner Network (APN), providing managed cloud services to enterprises. The company’s subject-matter experts, software-enabled services, and cutting-edge solutions provide companies with tested, proven, and trusted solutions, allowing them to fully leverage the power of the cloud. 2nd Watch solutions are high performing and robust; they increase operational excellence, decrease time to market, accelerate growth, and lower risk. Its patent-pending, proprietary tools automate everyday workload management processes for big data analytics, digital marketing, line-of-business, and cloud-native workloads. 2nd Watch is a new breed of business that helps enterprises design, deploy, and manage cloud solutions and monitors business-critical workloads 24×7. 2nd Watch has more than 400 enterprise workloads under its management and more than 200,000 instances in its managed public cloud. The venture-backed company is headquartered in Seattle, Washington. To learn more about 2nd Watch, visit www.2ndwatch.com or call 888-317-7920.


Best Practices: Developing a Tag Strategy for AWS Billing

Tag Strategy is key to Cost Allocation for Cloud Applications.

Have you ever been out to dinner with a group of friends, and at the end of the dinner the waiter comes back with one bill?  Most of us have experienced this.  Depending on the group of friends, it’s not a big deal, and everyone drops in a credit card so the bill can be split evenly.  Other times, someone invites Harry and Sally, and they scrutinize the bill line by line.  Inevitably they protest that Harry only had one glass of wine and Sally only had the salad.  You recall that Sally was a little ‘handsy’ with the sampler platter, but you sit quietly.  It’s in that moment you remember why the group didn’t invite Harry and Sally to last year’s New Year’s dinner.  No need to start the new year with an audit, am I right?

This situation can be eerily similar in many ways to cloud billing in a large enterprise.  The fact that Amazon Web Services (AWS) has changed the way organizations use computing resources is evident.  However, AWS has also delivered on the promise of truly enabling ‘chargeback’ or ‘showback’ in the enterprise, so that the business units themselves are stakeholders in what was traditionally siloed in an IT department budget.

Now multiple stakeholders across an organization have a stake in the cost and usage of an app that resides in AWS.  Luckily, there are tools like 2nd Watch’s Cloud Management Platform (CMP) that can show stakeholders what their app, or even their entire infrastructure, is costing them at the click of a button.

2nd Watch’s CMP tools are great for showing an organization’s costs and can even be used to set budget notifications so that a business unit doesn’t inadvertently spend more than is budgeted on an environment.  CMP delivers powerful insights to your business on its own, and it becomes even more useful when backed by a thorough tagging strategy.

Tag, you’re it…

We live in a world of tags and hashtags.  Seemingly overnight, tags have made their way into everyday language.  This is no accident: interactions on Facebook and Twitter have become so commonplace that they have altered the world’s language.

Beyond their emergence in our everyday vernacular, tags have a key function.  In AWS, applying tags to cloud resources like EC2 and RDS is key to accurately allocating charges.  Our team of experts at 2nd Watch can work with you to ensure that your tagging strategy is implemented in the most effective manner for your organization.  After all, a tagging strategy can and will vary by organization.  It depends on how you want to report on your resources: by cost center, application, environment type (like dev or prod), owner, department, geographic area, or whether the resource is managed by a managed service provider like 2nd Watch.

Without a well-thought-out tagging strategy, your invoicing discussions will sound much like the fictional dinner described above.  Who pays for what, and why?

Tag Strategy and Hygiene…

Implementing a sound tagging strategy at the outset, when a resource or environment is first deployed, is the first step.  At the inception, it’s important to know some “gotchas” that can derail a tagging implementation.  One is that tags are case sensitive: for example, mktg will report separately from Mktg.  Also keep in mind that in today’s ever-changing business environment, organizations are forced to adjust and reorganize themselves to stay competitive.
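
That case sensitivity matters because AWS groups costs by the exact tag string. Here is a minimal sketch (the line items and tag values are hypothetical, not actual CMP or billing output) of how mktg and Mktg end up as separate buckets in a cost report:

```python
from collections import defaultdict

# Hypothetical billing line items: (resource_id, "Department" tag value, monthly cost)
line_items = [
    ("i-0a1", "mktg", 410.00),
    ("i-0b2", "Mktg", 125.50),  # same team, different casing
    ("i-0c3", "eng", 980.00),
]

def cost_by_tag(items):
    """Total cost per tag value, matched exactly as AWS does: case-sensitively."""
    totals = defaultdict(float)
    for _resource, tag, cost in items:
        totals[tag] += cost
    return dict(totals)

# "mktg" and "Mktg" report separately, splitting one team's spend in two
print(cost_by_tag(line_items))
```

Normalizing casing (or enforcing it with a tagging standard) before resources are deployed avoids stitching these buckets back together at invoice time.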

Revisit your tagging strategy from time to time to ensure tags stay relevant.  If a stakeholder moves out of a role, gets promoted, or retires from the organization altogether, you will need to stay on top of the tagging for their environment to be sure it is still relevant to the new organization.
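
One way to stay on top of this is a periodic audit that flags resources whose owner tag is missing or no longer maps to anyone in the organization. A sketch with a hypothetical inventory (in practice the resource list would come from the EC2/RDS APIs and the owner list from your directory):

```python
# Hypothetical resource inventory; in practice, built from describe-instances calls
resources = [
    {"id": "i-0a1", "tags": {"Owner": "pwells", "Env": "prod"}},
    {"id": "i-0b2", "tags": {"Owner": "former_employee", "Env": "dev"}},
    {"id": "i-0c3", "tags": {"Env": "prod"}},  # no Owner tag at all
]

active_owners = {"pwells", "klaas"}  # hypothetical current staff

def stale_or_untagged(resources, active):
    """Return ids of resources whose Owner tag is absent or not an active owner."""
    return [r["id"] for r in resources if r["tags"].get("Owner") not in active]

print(stale_or_untagged(resources, active_owners))  # ['i-0b2', 'i-0c3']
```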

What about the un-taggables?

Standardization and a tag plan work great for taggable AWS resources like EC2 and RDS, as explained above.  But what about untaggable items such as network transfer charges, a NAT gateway, or a VPC endpoint?  There will be shared resources like these in your application’s environment.  It is best to review these shared, untagged resources early on and decide where to best allocate their cost.
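
One common convention (an assumption here, not the only option) is to allocate a shared, untaggable charge such as a NAT gateway proportionally to each cost center’s tagged spend:

```python
def allocate_shared_cost(shared_cost, tagged_spend):
    """Split an untaggable charge across cost centers in proportion to tagged spend."""
    total = sum(tagged_spend.values())
    return {cc: round(shared_cost * spend / total, 2)
            for cc, spend in tagged_spend.items()}

# Hypothetical month: $90 of NAT gateway charges shared by three cost centers
print(allocate_shared_cost(90.0, {"mktg": 500.0, "eng": 1000.0, "data": 1500.0}))
```

An even split, or allocation by data transfer volume, works the same way; what matters is agreeing on the convention before the invoice arrives.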

At 2nd Watch, we have these very discussions with our clients on a regular basis. We can easily guide them through the resources associated with the app and where to allocate each cost.  With a tool like CMP we can configure a client’s cost allocation hierarchy so they can view their ongoing costs in real time.

For its part, Amazon does a great job providing an up-to-date user guide for which resources can be tagged.  Click here for reference documentation to help as you develop your tag strategy.

Rinse and repeat as necessary

Your tagging strategy can’t be a ‘fire and forget’ pronouncement.  To be effective, your organization will need to enforce it on a consistent basis.  For instance, as new DevOps personnel are brought into the organization, training them on the tagging standard will be key to ensuring it stays under control.

These are the types of discussions where 2nd Watch adds a lot of value.  Our AWS expertise with large enterprises will ensure that you are able to precisely account for your cloud infrastructure spend at the click of a button through CMP.

After all, we all want to enjoy our meal and move on with the next activity. Stop by and visit us at re:Invent booth #1104 for more help.

— Paul Wells, Account Manager, 2nd Watch


What to Expect at AWS re:Invent 2017

The annual Amazon Web Services (AWS) re:Invent conference is just around the corner (the show kicks off November 27 in Las Vegas). Rest assured, there will be lots of AWS-related products, partners, and customer news. Not to mention, more than a few parties. Here’s what to expect at AWS re:Invent 2017—and a few more topics we hope to hear about.

1.)  Focus on IoT, Machine Learning, and Big Data

IoT, Machine Learning, and Big Data are top of mind with much of the industry—insert your own Mugatu “so hot right now” meme here—and we expect all three to be front and center at this year’s re:Invent conference. These services are ripe for adoption, as most IT shops lack the capabilities to deploy them on their own.  We expect to see advancements in AWS IoT usability and features. We’ve already seen some early enhancements to AWS Greengrass, most notably support for additional programming languages, and expect further progress to be on display at re:Invent. Other products where we expect to see advancements are Amazon Athena and AWS Glue.

In the Machine Learning space, we were certainly excited about the recent partnership between Amazon Web Services and Microsoft around Gluon, and expect a number of follow-up announcements geared toward making it easier to adopt ML in one’s applications. As for Big Data, we expect Amazon Web Services to continue mining open source tools that can be turned into compelling managed services. We would also be eager to see more use of AWS Lambda for in-flight ETL work, and perhaps a long-running Lambda option for batch jobs.

2.)  Enterprise Security

To say that data security has been a hot topic these past several months would be a gross understatement. From ransomware to the Equifax breach to the unsecured storage of private keys, data security has certainly been in the news. In our September Enterprise Security Survey, we found that 73% of IT professional respondents don’t fully understand the public cloud shared responsibility model.

Last month, we announced our collaboration with Palo Alto Networks to help enterprises realize the business and technical benefits of securely moving to the public cloud. The 2nd Watch Enterprise Cloud Security Service blends 2nd Watch’s Amazon Web Services expertise and architectural guidance with Palo Alto Networks’ industry-leading VM-Series security products. The combination delivers a proven enterprise cloud security offering designed to protect customer organizations from cyberattacks in hybrid or cloud architectures. 2nd Watch is recognized as the first public cloud-native managed security provider to join the Palo Alto Networks NextWave Channel Partner Program. To learn more about security and compliance, join our re:Invent breakout session—Continuous Compliance on AWS at Scale—by registering for session ID SID313 in the AWS re:Invent Session Catalogue. We are truly excited about this new service and collaboration, and hope you will visit our booth (#1104) or Palo Alto Networks’ booth (#2409) to learn more.

As for Amazon Web Services, we fully expect to see a raft of announcements. Consistent with our expectations around ML and Big Data, we expect to hear about enhanced ML-based anomaly detection, logging and log analytics, and the like. We also expect to see advancements to AWS Shield and AWS Organizations, which were both announced at last year’s show. Similarly, we wouldn’t be surprised by new functionality in their web application firewall, AWS WAF. A few things we know customers would like are easier, less labor-intensive management and even greater integration into SecDevOps workflows. Additionally, customers are looking for better integration with third-party and in-house security technologies – especially application scanning and SIEM solutions – for a more cohesive security monitoring, analysis, and compliance workflow.

The dynamic nature of the cloud creates specific challenges for security. Better security and visibility for ephemeral resources such as containers, and especially for AWS Lambda, are a particular challenge, and we would be extremely surprised not to see some announcements in this area.

Lastly, the General Data Protection Regulation (GDPR) will be kicking in soon, and it is critical that companies get on top of it. We expect Amazon Web Services to make several announcements about improved, secure storage and access, especially with respect to data sovereignty. More broadly, we expect Amazon Web Services to announce improved tools and services around compliance and governance, particularly with respect to mapping deployed or planned infrastructure against the control matrices of various regulatory schemes.

3.)  Parties!

We don’t need to tell you that AWS’ re:Play Party is always an amazing, veritable visual and auditory playground.  Last year, we played classic Street Fighter II while listening to Martin Garrix bring the house down (Coin might have gotten ROFLSTOMPED playing Ken, but it was worth it!).  Amazon Web Services always pulls out all the stops, and we expect this year to be the best yet.

2nd Watch will be hosting its annual party for customers at the Rockhouse at the Palazzo.  There will be great food, an open bar, an awesome DJ, and of course, a mechanical bull. If you’re not yet on the guest list, request your invitation TODAY! We’d love to connect with you, and it’s a party you will not want to miss.

Bonus: A wish list of things 2nd Watch would like to see released at AWS re:Invent 2017

Blockchain – Considering the growing popularity of blockchain technologies, we wouldn’t be surprised if Amazon Web Services launched a Blockchain as a Service (BaaS) offering, or at least signaled their intent to do so, especially since Azure already has a BaaS offering.

Multi-region Database Option – This is something that would be wildly popular but is incredibly hard to accomplish. Having an active-active database strategy across regions is critical for production workloads that operate nationwide and require high uptime.  Azure already offers it with Cosmos DB (think of it as a synchronized, multi-region DynamoDB), and we doubt Amazon Web Services will let that challenge stand much longer. It is highly likely that Amazon Web Services has this pattern operating internally, and customer demand is how AWS services are born.

NiFi – Industry interest in Apache NiFi data-flow orchestration, often analogized to the way parcel services move and track packages, has been accelerating for many reasons, including its applicability to IoT and its powerful capabilities around provenance. We would love to see AWS Data Pipeline re-released as NiFi, but with all the usual Amazon Web Services provider integrations built in.

If even half our expectations for this year’s re:Invent are met, you can easily see why the 2nd Watch team is truly excited about what Amazon Web Services has in store for everyone. We are just as excited about what we have to offer to our customers, and so we hope to see you there!

Schedule a meeting with one of our AWS Professional Certified Architects, DevOps or Engineers and don’t forget to come visit us in booth #1104 in the Expo Hall!  See you at re:Invent 2017!


— Coin Graham, Senior Cloud Consultant and John Lawler, Senior Product Manager, 2nd Watch


Migrating to Terraform v0.10.x

When it comes to managing cloud-based resources, it’s hard to find a better tool than Hashicorp’s Terraform. Terraform is an ‘infrastructure as code’ application, marrying configuration files with backing APIs to provide a nearly seamless layer over your various cloud environments. It allows you to declaratively define your environments and their resources through a process that is structured, controlled, and collaborative.

One key advantage Terraform provides over other tools (like AWS CloudFormation) is having a rapid development and release cycle fueled by the open source community. This has some major benefits: features and bug fixes are readily available, new products from resource providers are quickly incorporated, and you’re able to submit your own changes to fulfill your own requirements.

Hashicorp recently released v0.10.0 of Terraform, introducing some fundamental changes in the application’s architecture and functionality. We’ll review the three most notable changes and how to incorporate them into your existing projects when migrating to Terraform v0.10.x.

  1. Terraform Providers are no longer distributed as part of the main Terraform distribution
  2. New auto-approve flag for terraform apply
  3. Existing terraform env commands replaced by terraform workspace

A brief note on Terraform versions:

Even though Terraform uses a style of semantic versioning, its ‘minor’ versions should be treated as ‘major’ versions: until v1.0, breaking changes can arrive in any 0.x release.

1. Terraform Providers are no longer distributed as part of the main Terraform distribution

The biggest change in this version is the removal of provider code from the core Terraform application.

Terraform Providers are responsible for understanding API interactions and exposing resources for a particular platform (AWS, Azure, etc.). They know how to initialize and call their applications or CLIs, handle authentication and errors, and convert HCL into the appropriate underlying API calls.

It was a logical move to split the providers out into their own distributions. The core Terraform application can now add features and release bug fixes at a faster pace, new providers can be added without affecting the existing core application, and new features can be incorporated and released to existing providers without as much effort. Having split providers also allows you to update your provider distribution and access new resources without necessarily needing to update Terraform itself. One downside of this change is that you have to keep up to date with features, issues, and releases of more projects.

The provider repos can be accessed via the Terraform Providers organization in GitHub. For example, the AWS provider can be found here.

Custom Providers

An extremely valuable side-effect of having separate Terraform Providers is the ability to create your own, custom providers. A custom provider allows you to specify new or modified attributes for existing resources in existing providers, add new or unsupported resources in existing providers, or generate your own resources for your own platform or application.

You can find more information on creating a custom provider from the Terraform Provider Plugin documentation.

1.1 Configuration

The nicest part of this change is that it doesn’t really require any additional modifications to your existing Terraform code if you were already using a Provider block.

If you don’t already have a provider block defined, you can find their configurations from the Terraform Providers documentation.

You simply need to call the terraform init command before you can perform any other action. If you fail to do so, you’ll receive an error informing you of the required actions (img 1a).

After successfully reinitializing your project, you will be provided with the list of providers that were installed as well as the versions requested (img 1b).

You’ll notice that Terraform suggests versions for the providers we are using – this is because we did not specify provider versions in our code. Since providers are now independently released entities, we have to tell Terraform what code it should download and use to run our project.

(Image 1a: Notice of required reinitialization)
(Image 1b: Response from successful reinitialization)
Providers are released separately from Terraform itself, and maintain their own version numbers.

You can specify the version(s) you want to target in your existing provider blocks by adding the version property (code block 1). These versions should follow the semantic versioning specification (similar to node’s package.json or python’s requirements.txt).

For production use, it is recommended to limit the acceptable provider versions to ensure that new versions with breaking changes are not automatically installed.
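
For example, a pessimistic constraint (the version numbers here are illustrative, not a recommendation) picks up patch releases within a known-good series while refusing potentially breaking upgrades:

```hcl
provider "aws" {
  # Accepts 1.2.x patch releases, but never 1.3.0 or 2.0.0
  version = "~> 1.2.0"
  region  = "us-west-2"
}
```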

(Code Block 1: Provider Config)

provider "aws" {
  version = "0.1.4"
  allowed_account_ids = ["1234567890"]
  region = "us-west-2"

(Image 1c: Currently defined provider configuration)
2. New auto-approve flag for terraform apply

In previous versions, running terraform apply would immediately apply any changes between your project and saved state.

Your normal workflow would likely be:
run terraform plan followed by terraform apply and hope nothing changed in between.

This version introduces a new auto-approve flag that controls the behavior of terraform apply.

Deprecation Notice

This flag currently defaults to true to maintain backwards compatibility, but the default will change to false in a future release.

2.1 auto-approve=true (current default)

When set to true, terraform apply will work like it has in previous versions.

If you want to maintain this functionality, you should upgrade your scripts, build systems, etc now as this default value will change in a future Terraform release.

(Code Block 2: Apply with default behavior)

# Apply changes immediately, without a plan file
terraform apply -auto-approve=true

2.2 auto-approve=false

When set to false, Terraform will present the user with the execution plan and pause for interactive confirmation (img 2a).

If the user provides any response other than yes, terraform will exit without applying any changes.

If the user confirms the execution plan with a yes response, Terraform will then apply the planned changes (and only those changes).

If you are trying to automate your Terraform scripts, you might want to consider producing a plan file for review, then providing explicit approval to apply the changes from the plan file.

(Code Block 3: Apply plan with explicit approval)

# Create plan file
terraform plan -out=tfplan

# Apply the approved plan (a supplied plan file is applied without further confirmation)
terraform apply tfplan

(Image 2a: Terraform apply with execution plan)
3. Existing terraform env commands replaced by terraform workspace

The terraform env family of commands has been replaced with terraform workspace to help alleviate some confusion in functionality. Workspaces are very useful and can do much more than just split up environment state (which isn’t necessarily what they are meant for). I recommend checking them out and seeing if they can improve your projects.

There is not much to do here other than switch the command invocation; the previous commands still work for now, but they are deprecated.
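
The renames are one-for-one; for example (assuming v0.10.x command names):

```
# Deprecated                        # Replacement
terraform env list                  terraform workspace list
terraform env new staging           terraform workspace new staging
terraform env select staging        terraform workspace select staging
```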


License Warning

You are using an UNLICENSED copy of Scroll Office.

Do you find Scroll Office useful?
Consider purchasing it today: https://www.k15t.com/software/scroll-office


— Steve Byerly, Principal SDE (IV), Cloud, 2nd Watch


Current IoT Security Threat Landscape

By Paul Fletcher, Alert Logic

The “Internet of Things” (IoT) is a broadly accepted term that describes any Internet-connected device (usually via Wi-Fi) that isn’t a traditional computer system.  These connected IoT devices offer many conveniences for everyday life.  Indeed, it’s difficult to remember how life was before you could check email and weather or stream live video on a smart TV.  It’s now considered commonplace for a smart refrigerator to send you a text every morning with an updated shopping list.  We can monitor and manage the lights, thermostat, doors, locks, and web cameras from wherever we may roam, thanks to smartphone apps and the proliferation of connected devices.

With this added convenience comes a larger digital footprint, which makes a larger target for attackers seeking to discover other systems on your network, steal data, or seize control of your DVR.  The hacker community is just getting warmed up when it comes to attacking IoT devices.  There are a lot of “fun” things hackers can do with vulnerable connected devices and smart homes.  The early attacks were just about exploring: hackers would simulate ghosts by having all the lights in the house go on and off in a pattern, turn the heater on during the summer and the air conditioning on in the winter, or make the food in the fridge go bad by changing a few temperature settings.

The IoT security threat landscape has grown more sophisticated recently, and we’ve seen some significant attacks.  The most impactful IoT-based cyberattack to date happened on Oct. 21, 2016, when a hacker group activated 10% of its IoT botnet, infected with malware called “Mirai.”  Approximately 50,000 web cameras and DVR systems launched a massive DDoS attack on the Dyn DNS service, disrupting Internet services for companies like Spotify, Twitter, GitHub, and others for more than 8 hours.  The attackers used only 10% of the 500,000 DVRs and web cameras infected by the malware, but still caused monetary damage to customers of the Dyn DNS service.  A few months later, attackers launched a new IoT-specific malware called “Persirai” that infected over 100,000 web cameras.  This newer malware comes complete with a sleek detection-avoidance feature: once it executes on the web cam, it runs only in RAM and deletes the original infection file, making it extremely difficult to detect.

The plain, cold truth is that most IoT manufacturers use stripped-down versions of the Linux (and possibly Android) operating system, because the OS requires minimal system resources to operate.  ALL IoT devices have some version of an operating system and are therefore “lightweight” computers.  Since most IoT devices run some form of Linux or Android, they have vulnerabilities that are researched and discovered on an ongoing basis.  So, yes, you may someday have to install a security patch for your refrigerator or coffee maker.

Special-purpose computer systems with customized operating systems have been around for decades; the best examples are old-school arcade games and early gaming consoles.  The difference today is that these devices come with fast, easy connectivity to your internal network and the Internet.  Most IoT manufacturers don’t harden the underlying operating system on their “smart” devices, and consumers shouldn’t assume it’s safe to connect a new device to their network.  Both Mirai and Persirai compromised IoT devices using simple methods like default usernames and passwords.  Some manufacturers feel that their devices are so “lightweight” that their limited computing resources (storage, RAM, etc.) wouldn’t be worth hacking, because they wouldn’t provide much firepower for an attacker.  The hacking community repeatedly proves that it is interested in ANY resource, regardless of capacity, that it can leverage.

When an IoT device is first connected to your network (home or office), it will usually try to “call home” for software updates and/or security patches.  It’s highly recommended that all IoT devices be placed on an isolated network segment, blocked from enterprise or high-value home computer systems.  It’s also recommended to monitor all outbound Internet traffic from your IoT network segment to establish a baseline of “normal” behavior.  This helps you better understand the network traffic your IoT devices generate, and any “abnormal” behavior can help you discover a potential attack.
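
As a sketch of that baselining idea (simplified; real monitoring would draw on flow logs or an IDS, and all the numbers here are made up), flag any device whose outbound traffic jumps far above its historical average:

```python
from statistics import mean

def flag_anomalies(history, current, threshold=3.0):
    """Flag devices whose current outbound bytes exceed `threshold` times
    their historical average -- a crude stand-in for real baselining."""
    flagged = []
    for device, samples in history.items():
        if current.get(device, 0) > threshold * mean(samples):
            flagged.append(device)
    return flagged

# Hypothetical outbound bytes per hour, per device
history = {"webcam-1": [2000, 2200, 1900], "thermostat": [300, 280, 310]}
current = {"webcam-1": 950000, "thermostat": 305}  # the webcam is suddenly flooding

print(flag_anomalies(history, current))  # ['webcam-1']
```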

Remember, “hackers gonna hack”: the threat is 24/7.  IoT devices need good computer security hygiene, just like your laptop, smartphone, and tablet.  Use unique, easily remembered passwords, and rotate all passwords regularly.  Confirm that all of your systems are running the latest patches and upgrades for better functionality and security.  After patches are applied, validate that your security settings haven’t reverted to the defaults.

IoT devices are very convenient, and manufacturers are getting better at security, but with the ever-changing IoT threat landscape we can expect to see more impactful and sophisticated attacks in the near future.  The daily burden of relevant operational security for an organization or household is no easy task, and IoT devices are just one of the many threats that require ongoing monitoring.  It’s highly recommended that IoT cyber threats be incorporated into a defense-in-depth strategy as part of a holistic approach to cybersecurity.

Learn more about 2nd Watch Managed Cloud Security and how our partnership with Alert Logic can ensure your environment’s security.

Blog Contributed by 2nd Watch Cloud Security Partner, Alert Logic