
Seattle, We Have a Problem

Sometimes stories that explode in the media fade just as quickly – tempests in a teapot.  But this week’s revelation about two critical flaws in nearly every processor made in the last 20 years is most assuredly not a tempest in a teapot. The tech community will be assessing the implications of these vulnerabilities, dubbed Meltdown and Spectre, for the foreseeable future. And this is especially true for the cloud community.

Most modern CPUs, including those from Intel, AMD, and ARM, increase performance through a technique called “speculative execution.” Flaws in processor hardware allow Meltdown and Spectre to abuse this technique to access privileged memory — including kernel memory — from a less-privileged user process. A number of excellent technical write-ups cover the details, including https://arstechnica.com/gadgets/2018/01/meltdown-and-spectre-heres-what-intel-apple-microsoft-others-are-doing-about-it/. In short, Meltdown breaks the isolation between the application and the operating system, while Spectre breaks the isolation between applications. Both hardware flaws allow malicious programs to steal data being processed in computer memory, including sensitive or secret information such as credentials, cryptographic keys, data being handled by any running program, and open files.

Of the two vulnerabilities, Meltdown is the more immediate threat, with proof-of-concept exploits already available. Spectre, however, runs much deeper and is harder to mitigate, and could lead to subtle, ongoing exploits for years to come. Worse yet, these hardware flaws can be exploited on any modern operating system, including Windows, Linux, and macOS, as well as in containerization solutions such as Docker and even some classes of hypervisors.

Much of the press coverage has concentrated on personal and mobile devices – PCs, tablets, smartphones – but cloud environments, whose very foundation is resource isolation, are especially exposed. Since the cloud industry is centered in the Puget Sound, we might say “Seattle, we have a problem.”

Because of the critical nature of these vulnerabilities, cloud providers such as Amazon, Microsoft, and Google have already updated their systems. While most mitigation efforts revolve around operating system patches, both AWS and Azure have addressed the problem at the hypervisor level. Both CSPs contend that performance has not been meaningfully impacted, which, if true, is in welcome contrast to initial estimates of performance hits of up to 30%. More information can be found at https://azure.microsoft.com/en-us/blog/securing-azure-customers-from-cpu-vulnerability/ and https://aws.amazon.com/security/security-bulletins/AWS-2018-013/.

Even with hypervisor-level fixes, it is still critical to update the operating systems running on your instances, thereby improving their ability to isolate software running within the same instance. All the major CSPs have already rolled out patches, so new instances will launch fully up to date, but existing instances must still be patched. Please note that the AWS infrastructure running Lambda functions has already been patched, so no action is required there.
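
For fleets enrolled in AWS Systems Manager, patching can also be scripted at scale. A minimal sketch using boto3 and the AWS-RunPatchBaseline document (the region and instance ID are placeholders, and it assumes the instances are SSM-managed):

    import boto3

    # Assumes these instances have the SSM agent installed and an
    # instance profile that allows Systems Manager access.
    ssm = boto3.client("ssm", region_name="us-east-1")

    response = ssm.send_command(
        InstanceIds=["i-0123456789abcdef0"],  # placeholder instance ID
        DocumentName="AWS-RunPatchBaseline",
        Parameters={"Operation": ["Install"]},  # use "Scan" to audit only
        Comment="Meltdown/Spectre kernel patches",
    )
    print(response["Command"]["CommandId"])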

If you are a 2nd Watch Managed Cloud customer whose service plan includes patch management, please contact your Technical Account Manager to discuss patch availability and scheduling.  These patches are considered high priority. If you are not currently in a service tier in which 2nd Watch manages patching on your behalf, it is urgent that you patch all your operating systems as soon as possible. If you need assistance in doing so, or if you would like to learn more about how we can proactively manage these issues for you, please contact us.

-John Lawler, Senior Product Manager


AWS re:Invent 2017 Session: Continuous Compliance on AWS at Scale (VIDEO)

In cloud migrations, the elastic nature of the cloud is often touted as a critical capability in delivering on a business’ key initiatives.  However, if not accounted for in your Security and Compliance plans, you could be facing some real challenges. Always counting on a virtual host to be running, for example, will cause issues when that host is rebooted or retired. This is why managing Security and Compliance in the cloud is a continuous action requiring both forethought and automation.

At AWS re:Invent 2017, 2nd Watch hosted a breakout session titled “Continuous Compliance on AWS at Scale,” where attendees learned how a leading, next-generation managed cloud provider uses automation and cloud expertise to successfully manage security and compliance at scale in an ever-changing environment. This journey starts with account creation, goes through deployment of infrastructure and code, and never ends.

Through code examples and live demos, presenters Peter Meister and Lars Cromley demonstrated the tools and automation you can use to provide continuous compliance of your cloud infrastructure from inception to ongoing management.  In case you missed the session or simply wish to get a refresher on the content that was presented, you can now view the breakout session recording below.

[Video: “Continuous Compliance on AWS at Scale” breakout session recording]

— Katie Laas, Marketing Manager, 2nd Watch

 


AWS re:Invent 2017 Recap and Initial Impressions

While AWS re:Invent 2017 is still fresh in our minds, here are some of the highlights of the most significant announcements.

Aurora Multi-Master/Multi-Region: This is a big deal! The concept of geographically distributed databases with multiple masters has been a long-desired solution. Why is this important?
Having additional masters allows for database writes, not just reads like the traditional read replicas that have been available. This feature enables a true multi-region, highly available solution that eliminates a single point of failure and achieves optimum performance. Previously, third-party tools like Oracle GoldenGate and various log-shipping approaches were required to accomplish proper disaster recovery and high availability. This will greatly simplify architectures for those who want to go active-active across regions, not just availability zones. Additionally, it will enable pilot-light (and more advanced) DR scenarios for customers that are not going to use active-active configurations.

Aurora Serverless: Aurora Serverless is an on-demand, auto-scaling configuration for the Aurora MySQL- and PostgreSQL-compatible database service, where the database automatically starts up and scales up or down based on your application’s capacity needs. It will shut down when not needed, essentially scaling to zero when not being used. Traditionally, scaling Aurora for database demand required changing the underlying instance type. This is a large benefit and cost saver for development, testing, and QA environments. Even more importantly, if your workload has large spikes in demand, auto-scaling is a game changer in the same way that EC2 Auto Scaling enabled automated compute flexibility.
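
For illustration, here is roughly what provisioning such a cluster looks like with boto3. Aurora Serverless was announced in preview, so names and parameters may shift; the identifier and credentials below are placeholders:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # A minimal sketch; identifier and credentials are placeholders.
    rds.create_db_cluster(
        DBClusterIdentifier="demo-serverless-cluster",
        Engine="aurora",              # MySQL-compatible Aurora
        EngineMode="serverless",
        MasterUsername="admin",
        MasterUserPassword="change-me",
        ScalingConfiguration={
            "MinCapacity": 2,          # Aurora capacity units (ACUs)
            "MaxCapacity": 16,
            "AutoPause": True,         # scale to zero when idle
            "SecondsUntilAutoPause": 300,
        },
    )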

T2 Unlimited: T2 is one of the most popular instance types used by 2nd Watch and AWS customers, accounting for around 50% of all instances under 2nd Watch Managed Cloud Services. For frequent, small, and inconsistent workloads, T2 offers the best price and performance. However, one of the most common reasons customers do not lean heavily on T2 is the concern that a sustained spike in load will deplete burstable credits and cause unrecoverable performance degradation. T2 Unlimited solves this problem by essentially allowing unlimited surges over the former limits. We expect more customers to adopt T2 for those inconsistent workloads as a cost-effective solution, and we will watch to see whether this shift is reflected in the instance type data for accounts managed by 2nd Watch.
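
Enabling this on a running T2 instance is a one-call change. A minimal boto3 sketch (the instance ID is a placeholder; note that sustained usage above the baseline accrues charges for surplus credits):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Switch an existing T2 instance to unlimited bursting.
    ec2.modify_instance_credit_specification(
        InstanceCreditSpecifications=[
            {"InstanceId": "i-0123456789abcdef0", "CpuCredits": "unlimited"}
        ]
    )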

Spot Capacity: Spot Instances are normally used as pools of compute that run standard AMIs and work on datasets located outside of EC2, because the instances are terminated when the Spot price rises above your bid and all local data is lost. Now, when AWS reclaims the capacity, the instance can essentially hibernate, preserving the operating system and data, and start up again when Spot pricing is favorable. This removes another impediment to the use of Spot capacity and will be a large cost saver for environments that only need to be temporarily available.
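
A hedged sketch of requesting hibernation-capable Spot capacity with boto3. The AMI, subnet, and bid price are placeholders, and hibernation assumes an AMI with the hibernation agent and an encrypted root volume large enough to hold RAM:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Request Spot capacity that hibernates (rather than terminates)
    # on interruption. AMI, subnet, and price are placeholders.
    ec2.request_spot_instances(
        SpotPrice="0.10",
        InstanceCount=1,
        InstanceInterruptionBehavior="hibernate",
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",
            "InstanceType": "m4.large",
            "SubnetId": "subnet-0123456789abcdef0",
        },
    )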

M5 Instance Type: Given the large performance gains of newer processor generations, you can often realize significant cost savings by migrating to a smaller size of the latest instance type that still meets your application’s needs. Newer instance types can offer higher network bandwidth as well, so don’t put off adopting the latest generations if you can avoid it.
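
Resizing an existing instance to a newer generation is a stop, modify, start cycle. A minimal boto3 sketch (the instance ID and target size are placeholders; M5 also assumes an AMI with ENA and NVMe driver support, so verify compatibility first):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    instance_id = "i-0123456789abcdef0"  # placeholder

    # Stop, change the instance type, then start again.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    ec2.modify_instance_attribute(
        InstanceId=instance_id, InstanceType={"Value": "m5.large"}
    )
    ec2.start_instances(InstanceIds=[instance_id])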

Inter-region Peering: It has always been possible to establish peering relationships between VPCs in the same region. Inter-region Peering extends this to VPCs in different regions, using private AWS links so traffic never transits the open internet. This makes multi-region designs cleaner and easier to implement, without having to build and configure VPN infrastructure to support them (infrastructure which, of course, also needs monitoring, patching, and other maintenance). It was also announced that users of Direct Connect can now route traffic to almost every AWS region from a single Direct Connect circuit.
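
A minimal boto3 sketch of peering a VPC in us-east-1 with one in us-west-2 (the VPC IDs are placeholders; route table entries for each side’s CIDR still need to be added afterward):

    import boto3

    # Request the peering from us-east-1, then accept from us-west-2.
    use1 = boto3.client("ec2", region_name="us-east-1")
    pcx_id = use1.create_vpc_peering_connection(
        VpcId="vpc-11111111",       # placeholder requester VPC
        PeerVpcId="vpc-22222222",   # placeholder accepter VPC
        PeerRegion="us-west-2",
    )["VpcPeeringConnection"]["VpcPeeringConnectionId"]

    usw2 = boto3.client("ec2", region_name="us-west-2")
    usw2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)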

There were also some announcements that we found interesting but need to digest a little longer. Look for a follow up from us on these.

EKS: Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that makes it easy to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes clusters. Even at last year’s AWS re:Invent we heard people wondering where the support for Kubernetes was, particularly since it has become the de facto industry standard over the past several years.

GuardDuty: AWS has added a cloud-native tool to the security toolbox. AWS GuardDuty uses machine learning for anomaly detection: it monitors traffic flow and API logs for your accounts, establishes a baseline for “normal” behavior on your infrastructure, and then watches for security anomalies. Findings are reported with a severity rating, and remediation for certain types of events can be automated using existing AWS tools. We will be evaluating the best ways to implement this new tool.
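
Getting started is deliberately simple: enable a detector per account and region, then poll for findings. A minimal boto3 sketch:

    import boto3

    guardduty = boto3.client("guardduty", region_name="us-east-1")

    # Turn the detector on for this account/region.
    detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

    # Pull the first page of findings, if any.
    finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
    if finding_ids:
        findings = guardduty.get_findings(
            DetectorId=detector_id, FindingIds=finding_ids
        )["Findings"]
        for f in findings:
            print(f["Severity"], f["Type"], f["Title"])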

Fargate: Run Amazon EKS and ECS without having to manage servers or clusters.

Finally, a shameless plug: If compliance is on your mind, watch this AWS re:Invent breakout session from our product and engineering experts.

AWS re:invent 2017: Continuous Compliance on AWS at Scale (SID313)

Speakers:
Peter Meister, Director of Product Management, 2nd Watch
Lars Cromley, Director of Engineering, 2nd Watch

In cloud migrations, the cloud’s elastic nature is often touted as a critical capability in delivering on key business initiatives. However, you must account for it in your security and compliance plans or face some real challenges. Always counting on a virtual host to be running, for example, causes issues when that host is rebooted or retired. Managing security and compliance in the cloud is continuous, requiring forethought and automation. Learn how a leading, next generation managed cloud provider uses automation and cloud expertise to manage security and compliance at scale in an ever-changing environment. Through code examples and live demos, we show tools and automation to provide continuous compliance of your cloud infrastructure.
Obviously, there was a lot more going on and it will take some time to go through it. We will keep you up to date with our thoughts.

–David Nettles, Solutions Architect, 2nd Watch
–Kevin Dillon, Director, Solutions Architecture, 2nd Watch


Well-Architected Framework Reviews

“Whatever you do in life, surround yourself with smart people who argue with you.” – John Wooden

Many AWS customers and practitioners have leveraged the Well-Architected Framework methodology when building new applications or migrating existing ones. But once a build or migration is complete, how many companies implement Well-Architected Framework reviews and perform them regularly? We have found that many companies today do not conduct regular Well-Architected Framework reviews and, as a result, potentially face a multitude of risks.

What is the Well-Architected Framework?

The Well-Architected Framework is a methodology designed to provide high-level guidance on best practices when using AWS products and services. Whether you are building new workloads or migrating existing ones, security, reliability, performance, cost optimization, and operational excellence are vital to the integrity of the workload and can even be critical to the success of the company. Regular reviews of your architecture are especially important given the rate at which Cloud Service Providers (CSPs) create and release new products and services.

2nd Watch Well-Architected Framework Reviews

At 2nd Watch, we provide Well-Architected Framework reviews for our existing and prospective clients. The review process allows customers to make informed architecture decisions, understand the potential impact those decisions have on their business, and weigh the tradeoffs they are making. 2nd Watch offers its clients free Well-Architected Framework reviews—conducted on a regular basis—for mission-critical workloads that could have a negative business impact upon failure.

Examples of issues we have uncovered and remediated through Well-Architected Reviews:

  • Security: Not protecting data in transit and at rest through encryption (one automated check for this is sketched below)
  • Cost: Low utilization and inability to map cost to business units
  • Reliability: Single points of failure where recovery processes have not been tested
  • Performance: A lack of benchmarking or proactive selection of services and sizing
  • Operations: Not tracking changes to configuration management on your workload
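
As an example of what an automated check behind such a review can look like, here is a minimal boto3 sketch for the encryption item above. It flags buckets with no default at-rest encryption configured (reviewing in-transit policies and KMS key usage would still follow):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    # Flag buckets that have no default (at-rest) encryption configured.
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code == "ServerSideEncryptionConfigurationNotFoundError":
                print("unencrypted by default:", name)
            else:
                raise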

Using a standards-based methodology, 2nd Watch will work closely with your team to thoroughly review the workload and will produce a detailed report outlining actionable items and timeframes, along with prescriptive guidance in each of the key architectural pillars.

In reviewing your workload and architecture, 2nd Watch will identify areas for improvement and deliver a detailed report of our findings. A separate paid engagement is available to clients and prospects who want our AWS Certified Solutions Architects and AWS Certified DevOps Engineer Professionals to remediate those findings. Download the 2nd Watch Well-Architected Framework Review Datasheet to learn more. To schedule your free Well-Architected Framework review, contact 2nd Watch today.

 

— Chris Resch, EVP Cloud Solutions, 2nd Watch


AWS re:Invent Keynote Recap – Thursday

Thursday’s General Session keynote kicked off with Amazon CTO Werner Vogels taking the stage to deliver additional product and services announcements along with deeper technical content. Revisiting his vision for 21st-century architectures from the first re:Invent in 2012, Werner focused on what he sees as key guiding principles for next-gen workloads.

  1. Voice represents the next major disruption in computing. Stressing this point, Werner announced the general availability of Alexa for Business to help improve productivity by introducing voice automation into your business.
  2. Use automation to make experimentation easier.
  3. Encryption is the ‘key’ to controlling access to your data. As such, encrypting data (at rest and in transit) should be a default behavior (a sketch follows this list).
  4. All the code you should ever write is business logic.
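
As a small illustration of principle 3, here is a minimal boto3 sketch that makes at-rest encryption the default for new objects in an S3 bucket (the bucket name is a placeholder; swap in aws:kms and a key ARN if you manage your own keys):

    import boto3

    s3 = boto3.client("s3")

    # Make server-side encryption the default for new objects.
    s3.put_bucket_encryption(
        Bucket="my-example-bucket",  # placeholder bucket name
        ServerSideEncryptionConfiguration={
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "AES256"
                    }
                }
            ]
        },
    )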

 

Werner also highlighted the fact that AWS has released over 3,951 new services since 2012. These services were not built for today but for the workloads of the future. The goal for AWS, Werner says, is to be your partner for the future.

One of the highlights of the keynote was when Abby Fuller, evangelist for containers at AWS, came on stage to talk about the future of containers at AWS. She demoed Fargate, AWS’s fully managed container service. Think of Fargate as Elastic Beanstalk, but for containers. Per AWS documentation: “It’s a technology that allows you to use containers as a fundamental compute primitive without having to manage the underlying instances. All you need to do is build your container image, specify the CPU and memory requirements, define your networking and IAM policies, and launch. With Fargate, you have flexible options to closely match your application needs and you’re billed with per-second granularity.”
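
To make that concrete, here is a hedged boto3 sketch of launching a task on Fargate. The cluster, task definition, subnet, and security group are placeholders, and the task definition is assumed to declare its CPU/memory and use the awsvpc network mode:

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # Launch a task on Fargate: no container instances to manage.
    ecs.run_task(
        cluster="demo-cluster",          # placeholder
        taskDefinition="demo-task:1",    # placeholder
        launchType="FARGATE",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
                "assignPublicIp": "ENABLED",
            }
        },
    )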

The Cloud9 acquisition was also a highlight of the keynote. Cloud9 is a browser-based IDE for developers that is completely integrated with AWS: you can create cloud environments, develop code, and push that code to your cloud environment all from within the tool. It is going to be especially useful for writing and debugging Lambda functions for developers who have gone all-in on serverless technologies.

New Announcements

AWS Lambda Function Execution Activity Logging – Log all execution activity for your Lambda functions. Previously you could only log management events; this adds data events (individual function invocations) with additional detail.
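
Under the hood this is CloudTrail data-event logging for Lambda. A minimal boto3 sketch that adds Lambda data events to an existing trail (the trail name is a placeholder; the bare arn:aws:lambda value means all functions):

    import boto3

    cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

    # Add Lambda data events (invocations) to an existing trail.
    cloudtrail.put_event_selectors(
        TrailName="my-trail",  # placeholder trail name
        EventSelectors=[
            {
                "ReadWriteType": "All",
                "IncludeManagementEvents": True,
                "DataResources": [
                    {
                        "Type": "AWS::Lambda::Function",
                        "Values": ["arn:aws:lambda"],  # all functions
                    }
                ],
            }
        ],
    )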

AWS Lambda Doubles Maximum Memory Capacity for Lambda Functions – You can now allocate 3008MB of memory to your AWS Lambda functions.

AWS Cloud9 – Cloud9 is a cloud-based IDE for writing, running, and debugging your code.

API Gateway now supports endpoint integrations with Private VPCs –  You can now provide access to HTTP(S) resources within your Amazon Virtual Private Cloud (VPC) without exposing them directly to the public Internet.
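
This works through a VPC Link pointed at a Network Load Balancer in your VPC. A minimal boto3 sketch (the NLB ARN is a placeholder); API methods can then use the link for private integrations instead of a public endpoint:

    import boto3

    apigw = boto3.client("apigateway", region_name="us-east-1")

    # Create a VPC Link to an NLB fronting your private resources.
    link = apigw.create_vpc_link(
        name="demo-private-link",
        targetArns=[
            "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
            "loadbalancer/net/demo-nlb/0123456789abcdef"  # placeholder
        ],
    )
    print(link["id"], link["status"])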

AWS Serverless Application Repository –   The Serverless Application Repository is a collection of serverless applications published by developers, companies, and partners in the serverless community.

We expect AWS to announce many more awesome features and services before the day ends, so stay tuned for our AWS re:Invent 2017 Products & Services Review and 2017 Conference Recap blog posts for a summary of all the announcements delivered at AWS re:Invent 2017.

 

— Brent Clements, Sr. Cloud Consultant, 2nd Watch


AWS re:Invent Keynote Recap – Wednesday

I have been looking forward to Andy Jassy’s keynote since I arrived in Las Vegas. Like the rest of the nearly 50k cloud-geeks in attendance, I couldn’t wait to learn about all the cool new services and feature enhancements to be unleashed, ones that can solve problems for our clients or inspire us to challenge convention in new ways.

Ok, I’ll admit it. I also look forward to the drama of the now obligatory jabs at Oracle, too!

Andy’s 2017 keynote was no exception to the legacy of previous re:Invents on those counts, but my takeaway from this year is that AWS has been able to parlay their flywheel momentum of growth in IaaS to build a wide range of higher-level managed services. The thrill I once got from new EC2 instance type releases has given way to my excitement for Lambda and event-based computing, edge computing and IoT, and of course AI/ML!

AWS Knows AI/ML

Of all the topics covered in the keynote, the theme that continues to resonate throughout this conference for me is that AWS wants people to know that they are the leader in AI and machine learning. As an attendee, I received an online survey from Amazon prior to the conference asking for my opinion on AWS’s position as a leader in the AI/ML space. While I have no doubts that Amazon has unmatched compute and storage capacity, and certainly has access to a wealth of information to train models, how does one actually measure a cloud provider’s AI/ML competency? Am I even qualified to answer without an advanced math degree?

That survey sure makes a lot more sense to me following the keynote as I now have a better idea of what “heavy lifting” a cloud provider can offload from the traditional process.

Amazon has introduced SageMaker, a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at any scale. It integrates with S3, and with RDS, DynamoDB, and Redshift by way of AWS Glue. It provides managed Jupyter notebooks and even comes supercharged with several common ML algorithms that have been tuned for “10x” performance!
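
To give a feel for the API, here is a heavily hedged boto3 sketch of submitting a training job. The training image URI, role ARN, and S3 paths are all placeholders; in practice you would use the region-specific image for a built-in algorithm or your own container:

    import boto3

    sm = boto3.client("sagemaker", region_name="us-east-1")

    # A minimal training-job sketch; all ARNs/URIs are placeholders.
    sm.create_training_job(
        TrainingJobName="demo-training-job",
        AlgorithmSpecification={
            "TrainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo:latest",
            "TrainingInputMode": "File",
        },
        RoleArn="arn:aws:iam::123456789012:role/DemoSageMakerRole",
        InputDataConfig=[
            {
                "ChannelName": "train",
                "DataSource": {
                    "S3DataSource": {
                        "S3DataType": "S3Prefix",
                        "S3Uri": "s3://demo-bucket/train/",
                    }
                },
            }
        ],
        OutputDataConfig={"S3OutputPath": "s3://demo-bucket/output/"},
        ResourceConfig={
            "InstanceType": "ml.m4.xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 10,
        },
        StoppingCondition={"MaxRuntimeInSeconds": 3600},
    )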

In addition to SageMaker, we were introduced to Amazon Comprehend, a natural language processing (NLP) service that uses machine learning to analyze text. I personally am excited to integrate this into future chatbot projects, but the applications I see for this service are numerous.
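
Comprehend is refreshingly simple to call. A minimal boto3 sketch that pulls sentiment and named entities from a snippet of text:

    import boto3

    comprehend = boto3.client("comprehend", region_name="us-east-1")

    text = "I could not be happier with how re:Invent went this year."

    # Sentiment plus named entities for a short piece of text.
    sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
    print(sentiment["Sentiment"], sentiment["SentimentScore"])

    entities = comprehend.detect_entities(Text=text, LanguageCode="en")
    for ent in entities["Entities"]:
        print(ent["Type"], ent["Text"], round(ent["Score"], 3))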

After you’ve built and trained your models, you can run them in the cloud, or with the help of AWS Greengrass and its new machine learning inference feature, you can bring those beauties to the edge!

What is a practical application for running ML inference at the edge you might ask?

Dr. Matt Wood demoed a new hardware device called DeepLens for the audience that does just that! DeepLens is a deep-learning enabled wireless video camera specifically designed to help developers of all skill levels grow their machine learning skills through hands-on computer vision tutorials. Not only is this an incredibly cool device to get to hack around with, but it signals Amazon’s dedication to raising the bar when it comes to AI and machine learning by focusing on the wet-ware: hungry minds looking to take their first steps.

Andy’s keynote included much more than just AI/ML, but to me, the latest AI/ML services announced on Tuesday signal Amazon’s future of higher-level services, a future that should keep them the dominant cloud provider for years to come.

 

–Joe Conlin, Solutions Architect, 2nd Watch
