Originally, I thought I’d give a deep dive into the mechanics of some of our automated workflows at 2nd Watch, but I really think it’s best to start at the very beginning. We need to understand why we need to automate our delivery. In enterprise organizations, delivery is usually set up in a very waterfall way: the artifact is handed off to another team to push to the different environments and to QA to test. Sometimes it works, but usually not so much. In the “not so much” case, it’s handed back to DEV, which interrupts their current work.
That back and forth between teams is known as waste in the Lean/Agile world, also known as “throwing it over the wall” or “handoffs.” Eliminating it is a primary aim of any Agile process and is what led to the DevOps movement.
Now, DevOps is a loaded term, much like “Agile” and “SCRUM.” It has its ultimate meaning, but most companies go partway and then declare victory. The changes that get the biggest positive effects are cultural, but many look at DevOps and see the shiny automation as the point of it all. Automation helps, but the core of your automation should be driven by a culture of quality over all. Just keep that in mind as you read through this article, which is specifically about all that yummy automation.
There’s a Process
Baby steps here. Each step builds on the one before it, so there are a series of things that need to happen before you make it to a fully automated process.
Before that though, we need to look at what a deployment is and what the components of a deployment are.
First and foremost, so long as you have your separate environments, development has no impact on the customer and therefore no impact on the business at large. There is, essentially, no risk while a feature is in development. However, the business assumes ALL the risk when there is a deployment. Once you cross that line, the customer will interact with it and either love it, hate it, or ignore it. From a development standpoint, you work to minimize that risk before you cross the deployment line: different environments, testing, a release process, etc. These are all things that can be automated, but only when that risk has been sufficiently minimized.
Step I: Automated testing
You can’t do CI or CD without testing, and it’s the logical first step. In order to help minimize the deployment risk, you should automate ALL of your testing. This will greatly increase your confidence that changes introduced will not impact the product in ways that you may not know about BEFORE you cross that risk point in deployment. The closer an error occurs to the time at which it’s implemented the better. Automated testing greatly reduces this gap by providing feedback to the implementer faster while being able to provide repeatable results.
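To make this concrete, here is a minimal sketch of the kind of automated test that would run on every change. The pricing function and its tests are invented for illustration, not taken from any 2nd Watch project; the point is that the checks run without human intervention, produce repeatable results, and fail loudly the moment a change breaks an expectation.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, rejecting invalid input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    """Runs automatically in the pipeline; a failure here stops the
    change long before it reaches a customer."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

In a CI job you would run such suites with something like `python -m unittest`; the key property is repeatability, since the same change always produces the same verdict, delivered minutes after the implementer pushes it.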
Step II: Continuous Integration
Your next step is to automate your integration (and integration tests, right?), which should further build your confidence in the changes that you and your peers have introduced. Just as with testing, the smaller the gap between integrations, the better, as you’ll provide feedback to the implementers faster. This means you can work on any problems while your changes are fresh in your mind. Utilizing multiple build strategies for the same product can help as well, for instance running integration on every push to your SCM (source control management) system, as well as nightly builds.
Remember, this is shrinking that risk factor before deployment.
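As a sketch of that fail-fast idea, a CI pipeline is essentially an ordered list of checks that stops at the first failure so the implementer hears about the problem immediately. The stage names and checks below are placeholders; in a real pipeline each stage would invoke your actual build tool or test runner.

```python
def run_pipeline(stages):
    """Run (name, check) stages in order; stop at the first failure so
    feedback reaches the implementer as close to the change as possible."""
    for name, check in stages:
        if not check():
            print(f"pipeline FAILED at stage: {name}")
            return False
        print(f"stage passed: {name}")
    return True

# Placeholder stages -- substitute calls to your real build/test tooling.
stages = [
    ("compile", lambda: True),
    ("unit tests", lambda: True),
    ("integration tests", lambda: True),
]
```

Triggering this both on every SCM push and on a nightly schedule gives you the two build strategies mentioned above.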
Step III: Continuous Deployment
With Continuous Deployment you take traditional Continuous Delivery a step further by automatically pushing the artifacts created by a Continuous Delivery process into production. This automation of deployments is the final step and is another important process in mitigating that risk for when you push to production. Deploying to each environment and then running that environment’s specific set of tests is your final check before you are able to say with confidence that a change did not introduce a fault. Remember, you can automate the environments as well by using infrastructure as code tooling around virtual technology (i.e. The Cloud).
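A sketch of that promotion flow: deploy to each environment in order, run that environment’s own test suite, and halt the moment anything fails. The environment names and stubbed deploy/test functions here are illustrative, not any particular tool’s API.

```python
ENVIRONMENTS = ["dev", "staging", "production"]

def deploy(artifact, env):
    # Stub: in practice this calls your deployment tooling
    # (CloudFormation, Terraform, a deploy script, ...).
    print(f"deploying {artifact} to {env}")

def env_tests_pass(env):
    # Stub: each environment runs its own specific test suite
    # (smoke tests in dev, full regression in staging, ...).
    return True

def promote(artifact):
    """Push the artifact through each environment in order, halting
    the moment an environment's tests fail."""
    for env in ENVIRONMENTS:
        deploy(artifact, env)
        if not env_tests_pass(env):
            return f"halted in {env}"
    return "released to production"
```

Because the environments themselves can be built from infrastructure as code, this whole chain, environments included, can be created and torn down automatically.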
Continuous Deployment is the ultimate goal, as a change introduced into the system triggers all of your confidence-building tools to minimize the risk to your customers once it’s been deployed to the production system. Automating it all not only improves the quality, but shortens the feedback loop to the implementer, increasing efficiency as well.
I hope that’s a good introduction! In our next post, we’ll take a more technical look at the tooling and automation we use in one of our workflows.
-Craig Monson, Sr Automation Architect (Primary)
-Lars Cromley, Director of Engineering (Editor)
-Ryan Kennedy, Principal Cloud Automation Architect (Editor)
Tech leaders are increasingly turning to the cloud for cost savings and convenience, but getting to the cloud is not so easy. Here are the top 5 pitfalls to avoid when migrating to the cloud.
1. Choosing the wrong migration approach
There are 6 migration strategies, and getting to these 6 takes a considerable amount of work. Jumping into a cloud migration without due diligence, analysis, grouping, and risk ranking is ill-advised. Organizations need to conduct in-depth application analyses to determine the correct migration approach. Not all applications are cloud ready, and those that are may need some tuning once they get there. Take the time to really understand HOW your application works, how it will work in the cloud, and what needs to be done to migrate it there successfully. 2nd Watch approaches all cloud migrations using our Cloud Factory Model, which includes the following phases: discovery, design and requirement gathering, application analysis, migration design, migration planning, and migration(s).
These 6 migration strategies include:
Retain – Leaving it as is. It could be a mistake to move the application to the cloud.
Rehost “aka” Lift and Shift – Migrating the application as-is into the cloud.
Replatform – Migrating the application with a handful of cloud optimizations, such as moving to a managed database or container service, without changing its core architecture. Sometimes called “lift, tinker, and shift.”
Repurchase – Replacing the existing application with a different product, most commonly by switching over to a SaaS offering altogether.
Retire – No migration target and/or application host decommission on source.
Re-architect/Refactor – Migration of the current application to use the cloud in the most efficient, thorough way possible, incorporating the best features to modernize the application. This is the most complex migration method as it often involves rewriting of code to decouple the application to fully support all the major benefits the cloud provides. The redesign and re-engineering of the application and infrastructure architecture are also key in this type of migration.
From a complexity standpoint, replatform and re-architect/refactor are the most complicated migration approaches. However, it depends on the application and how far you take the changes (moving to a SaaS product may be a very simple transition; rebuilding your application on Lambda and DynamoDB, not so much).
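As a toy illustration of how such an analysis might be triaged (the standard six strategies are often called the “6 R’s”: rehost, replatform, repurchase, refactor, retire, retain), consider a first-pass rule of thumb. The attribute names below are invented, and a real assessment weighs far more factors, including dependencies, licensing, compliance, cost, and team skills.

```python
def suggest_strategy(app):
    """Naive first-pass triage of one application; illustrative only."""
    if app.get("end_of_life"):
        return "Retire"
    if app.get("must_stay_on_prem"):
        return "Retain"
    if app.get("saas_alternative_fits"):
        return "Repurchase"
    if app.get("needs_cloud_native_redesign"):
        return "Re-architect/Refactor"
    if app.get("benefits_from_managed_services"):
        return "Replatform"
    return "Rehost"  # default: lift and shift as-is
```

For example, `suggest_strategy({"end_of_life": True})` returns `"Retire"`, while an application with no flags set falls through to `"Rehost"`.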
2. Big Bang Migration
Some organizations are under the impression that they must move everything at once. This could not be further from the truth. The reality is that organizations run in hybrid models (on-prem and cloud) for a long time, because some workloads are very hard to move.
It is key to come up with a migration design and plan which includes a strategic portfolio analysis or Cloud Readiness Assessment that assesses each application’s cloud readiness, identifies dependencies between applications, ranks applications by complexity and importance, and identifies the ideal migration path.
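One mechanical piece of that planning, ordering applications so that nothing moves before the things it depends on, can be sketched with a topological sort. The portfolio below is hypothetical; a real dependency map comes out of the discovery and application-analysis phases.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical portfolio: each app maps to the apps it depends on.
dependencies = {
    "web-frontend": {"orders-api", "auth-service"},
    "orders-api": {"orders-db"},
    "auth-service": set(),
    "orders-db": set(),
}

def migration_waves(deps):
    """Group applications into migration waves so that an application
    never moves before its dependencies have moved."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    waves = []
    while ts.is_active():
        ready = sorted(ts.get_ready())  # everything movable right now
        waves.append(ready)
        ts.done(*ready)
    return waves
```

Here `migration_waves(dependencies)` yields three waves: the databases and auth service first, then the API, then the frontend, which is exactly the kind of sequencing a big-bang migration skips.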
3. Underestimating Work Involved and Integration
Migrating to the cloud is not a walk in the park. You must have the knowledge, skill, and a solid migration design to successfully migrate workloads to the cloud. When businesses hear the words “lift and shift,” they are mistakenly under the impression that all one has to do is press a button (migrate) and then it’s “in the cloud.” This is a misconception that needs to be corrected. Underestimating integration is one of the largest causes of failure.
For all of the cheerleading about the benefits of moving to the cloud, deploying to the cloud adds a layer of complexity, especially when organizations are layering cloud solutions on top of legacy systems and software. It is key to ensure that the migration solution chosen can be integrated with your existing systems. Moving workloads to the cloud requires integration, and an investment in it as well. Organizations need a solid overall architectural design and must determine what’s owned, what’s being accessed, and ultimately what’s being leveraged.
Lastly, the code changes required to make the move are also often underestimated. Organizations need to remember it is not just about moving virtual machines. The code may not work the same way running in the cloud, which means the subsequent changes required may be deep and wide.
4. Poor Business Case
Determine the value of a cloud migration before jumping into one. What does this mean? Determine what your company expects to gain after you migrate. Is it cost savings from exiting the data center? Will this create new business opportunities? Faster time to market? Organizations need to quantify the benefits before they move.
I have seen some companies experience buyer’s remorse because their business case was not multifaceted. It was myopic: exiting the data center. Put more focus on the benefits your organization will receive from the agility and the ability to enter new markets faster using cloud technologies. Yes, the CapEx savings are great, but the long-lasting business impacts carry a lot of weight as well, because you might find that, once you get to the cloud, you don’t save much on infrastructure costs.
5. Not trusting Project Management
An experienced, well-versed, and savvy project manager needs to lead the cloud migration in concert with the CIO. While the project manager oversees and implements the migration plan and leads the migration process and technical teams, the CIO is educating the business decision makers at the same time. This “team” approach does a number of things. First, it allows the CIO to act as the advisor and consultant to the business – helping them select the right kind of services to meet their needs. Second, it leaves project management to a professional. And lastly, by allowing the project manager to manage, the CIO can evaluate and monitor how the business uses the service to make sure it’s providing the best return on investment.
-Yvette Schmitter, Sr Manager, Project Portfolio Management
Sometimes stories that explode in the media fade just as quickly – tempests in a teapot. But this week’s revelation about two critical flaws in nearly every processor made in the last 20 years is most assuredly not a tempest in a teapot. The tech community will be assessing the implications of these vulnerabilities, dubbed Meltdown and Spectre, for the foreseeable future. And this is especially true for the cloud community.
Most modern CPUs, including those from Intel, AMD, and ARM, increase performance through a technique called “speculative execution.” Flaws in processor hardware allow Meltdown and Spectre to take advantage of this technique to access privileged memory — including kernel memory — from a less-privileged user process. There are any number of excellent technical write-ups, including https://arstechnica.com/gadgets/2018/01/meltdown-and-spectre-heres-what-intel-apple-microsoft-others-are-doing-about-it/, with more detail. In short, Meltdown breaks the isolation between the application and the operating system, while Spectre breaks the isolation between applications. Both hardware flaws allow malicious programs to steal data that is being processed in computer memory, including sensitive or secret information such as credentials, cryptographic keys, data being processed by any running program, or opened files.
Of the two vulnerabilities, Meltdown is the more immediate threat with proof-of-concept exploits already available. However, Spectre is much deeper and harder to mitigate, potentially leading to ongoing, subtle exploits for years to come. Worse yet, these hardware flaws can be exploited on any modern operating system including Windows, Linux, macOS, containerization solutions such as Docker, and even some classes of hypervisors.
Much of the press has concentrated on the impact to personal and mobile devices – PCs, tablets, smartphones – but cloud environments, whose very foundation is based on resource isolation, are especially impacted. Since the cloud industry is centered in the Puget Sound, we might say “Seattle, we have a problem.”
Even with hypervisor-centric fixes, it is still critical to update the operating systems running on instances, and thereby improve these operating systems’ abilities to isolate software running within the same instance. All the major CSPs have already installed patches so that all new instances will have the latest version, but existing instances must still be updated. Please note that all AWS instances running Lambda functions have already been patched and no action is required.
If you are a 2nd Watch Managed Cloud customer whose service plan includes patch management, please contact your Technical Account Manager to discuss patch availability and scheduling. These patches are considered high priority. If you are not currently in a service tier in which 2nd Watch manages patching on your behalf, it is urgent that you patch all your operating systems as soon as possible. If you need assistance in doing so, or if you would like to learn more about how we can proactively manage these issues for you, please contact us.
For some IT organizations, the cloud computing paradigm poses critical existential questions: How does my IT organization stay relevant in a cloud environment? How does IT still provide value to the business? What can be done to improve the business’ perception of IT’s contribution to the company? Without a clear approach to tackling these and other related questions, IT organizations stumble into a partially thought-out cloud computing strategy and miss out on capturing the short- and long-term financial ROI and transformational benefits of a cloud-first strategy.
Several key concepts and principles from ITIL’s Service Strategy lifecycle stage lend themselves to defining and guiding a strategic approach to adopting and implementing a cloud-first strategy. In this article, we’ll highlight and define some of these key principles and outline a tactical approach to implementing a cloud-first strategy.
One of the key concepts leveraged in ITIL’s Service Strategy is the Run-Grow-Transform framework from Gartner. From an executive management perspective, the IT organization’s contribution to the company’s goals and objectives can be framed along the Run-Grow-Transform model – specifically around how IT can help the company (1) Run-The-Business, (2) Grow-The-Business, and (3) Transform-The-Business.
The CIO’s value is both objectively and subjectively measured by answering:
1 – How can IT reduce the cost of current IT operations, thus improving the bottom line?
2 – How can IT help the business expand and gain greater market share with our current business offerings?
3 – How can IT empower the business to venture out into new opportunities and/or develop new competitive business advantage?
We’ll take a close look at each model area, highlight key characteristics, and give examples of how a cloud-first policy can enable a CIO to contribute to the company’s goals and objectives and not only remain relevant to the organization but enable business innovation.
Run-the-Business and Cloud-First Strategy
Run the Business (RTB) is about supporting essential business operations and processes. This usually translates to typical IT services and operations such as email-messaging systems, HR services, Payroll and Financial systems. The core functionality these IT services provide is necessary and essential but not differentiating to the business. These are generally viewed as basic core commodity services, required IT costs for keeping the business operational.
The CIO’s objective is to minimize the cost of RTB activities without any compromise to the quality of service. A cloud-first policy can achieve these outcomes. It can reduce costs by moving low value-add IT activities (sometimes referred to as ‘non-differentiating work’) to a cloud provider that excels at performing the same work with hyper efficiency. Add in the ability of a cloud provider to leverage economies of scale, and you have a source of reliable, highly cost-optimized IT services that cannot be matched by any traditional data center or hosting provider (see AWS’s James Hamilton discuss data center architecture at scale). Case studies from GE, Covanta, and Conde Nast bear out the benefit of moving to AWS and enabling their respective CIOs to improve their business’ bottom line.
Grow-the-Business and Cloud First Strategy
Grow the Business (GTB) activities are marked by enabling the business to successfully increase market share and overall revenue in existing markets. If a company doubles its customer base, then the IT organization responds with timely and flexible capacity to support such growth. Generally, an increase in GTB spending should be tied to an increase in business revenue.
Cloud computing providers such as AWS are uniquely capable of supporting GTB initiatives. AWS’ rapid elasticity drastically alters the traditional management of IT demand and capacity. A classic case in point is the “Black Friday” phenomenon. If the IT organization does not have sufficient IT resources to accommodate the projected increase in business volume, the company risks missing out on revenue capture and may experience a negative brand impact. If the IT organization overprovisions its IT resources, then unnecessary costs are incurred, which adversely affects the company’s profits. Other similar business phenomena include “Cyber Monday,” Super Bowl ads, and product launches. Without a highly available and elastic cloud computing environment, IT will struggle to support GTB activities (see the AWS whitepaper “Infrastructure Event Readiness” for a similar perspective).
The cloud’s elasticity solves both ends of the spectrum, not only ramping up quickly in response to increased business demand but also scaling down when demand subsides. Additionally, AWS’ pay-for-what-you-use model is a powerful differentiating feature. Some key use cases include Crate & Barrel and Coca-Cola. Through a cloud-first strategy, a CIO is able to respond to GTB initiatives and activities in a cost-optimized manner.
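A back-of-the-envelope model shows why elasticity matters for spiky demand. The demand curve and hourly rate below are invented numbers purely for illustration.

```python
# Hypothetical day of demand (instances needed per hour), with a
# "Black Friday"-style evening spike.
hourly_demand = [2] * 8 + [4] * 8 + [20] * 4 + [4] * 4
RATE = 0.10  # hypothetical cost per instance-hour

# On-prem style: provision for the peak, around the clock.
fixed_cost = max(hourly_demand) * len(hourly_demand) * RATE

# Elastic style: pay only for what each hour actually needs.
elastic_cost = sum(hourly_demand) * RATE
```

With these made-up numbers, provisioning for the peak all day costs roughly three times the pay-as-you-go total while covering exactly the same spike; scaled to real fleets, that gap is the GTB argument in miniature.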
Transform-the-Business and Cloud Computing
Transform the Business (TTB) represents opportunities for a company to make high risk but high reward investments. This usually entails moving into a new market segment with a new business or product offering. Innovation is the key success factor in TTB initiatives. Traditionally this is high risk to the business because of the upfront investment required to support new business initiatives. But in order to innovate, IT and business leaders need to experiment, to prototype and test new ideas.
With a cloud-first policy, the IT organization can mitigate the high-risk investment, yet still obtain the high rewards by enabling a ‘fail early, fail fast’ strategy in a cloud environment. Boxever is a case study in fail fast prototyping. Alan Giles, CTO of Boxever, credits AWS with the ability to know within days “if our design and assumptions [are] valid. The time and cost savings of this approach are nearly incalculable, but are definitely significant in terms of time to market, resourcing, and cash flow.” This cloud-based fail-fast approach can be applied to all market-segments, including government agencies. The hidden value in a cloud-based fail fast strategy is that failure is affordable and OK, making it easier to experiment and innovate. As Richard Harshman, Head of ASEAN for Amazon Web Services, puts it, “Don’t be afraid to experiment. The cloud allows you to fail fast and fail cheap. If and when you succeed, it allows you to scale infinitely and go global in minutes”.
So what does a cloud-first strategy look like?
While this is a rudimentary, back-of-the-envelope style outline, it provides a high-level, practical methodology for implementing a cloud-first based policy.
For RTB initiatives: Move undifferentiated shared services and supporting services to the cloud, either through Infrastructure-as-a-Service (IaaS) or Software-as-a-Service (SaaS) based solutions.
For GTB initiatives: Move customer-facing services to the cloud to leverage dynamic supply and demand capacity.
For TTB initiatives: Set up and teardown cloud environments to test and prototype new ideas and business offerings at minimal cost.
In addition to the Run-Grow-Transform framework, the ITIL Service Strategy lifecycle stage provides additional guidance from its Service Portfolio Management, Demand Management, and Financial Management process domains that can be leveraged to guide a cloud-first based strategy. These principles, coupled with other related guidance such as AWS Cloud Adoption Framework, provide a meaningful blueprint for IT organizations to quickly embrace a cloud-first strategy in a structured and methodical manner.
By aggressively embracing a cloud-first strategy, CIOs can demonstrate their business relevance through RTB and GTB initiatives. Through TTB initiatives IT can facilitate business innovation and transformation, yielding greater value to their customers. We are here to help our customers, so if you need help developing a cloud-first strategy, contact us here.
In cloud migrations, the elastic nature of the cloud is often touted as a critical capability in delivering on a business’ key initiatives. However, if not accounted for in your Security and Compliance plans, you could be facing some real challenges. Always counting on a virtual host to be running, for example, will cause issues when that host is rebooted or retired. This is why managing Security and Compliance in the cloud is a continuous action requiring both forethought and automation.
At AWS re:Invent 2017, 2nd Watch hosted a breakout session titled “Continuous Compliance on AWS at Scale” where attendees learned how a leading, next generation, Managed Cloud Provider uses automation and cloud expertise to successfully manage Security and Compliance at scale in an ever-changing environment. This journey starts with account creation, goes through deployment of infrastructure and code and never ends.
Through code examples and live demos, presenters Peter Meister and Lars Cromley demonstrated the tools and automation you can use to provide continuous compliance of your cloud infrastructure from inception to ongoing management. In case you missed the session or simply wish to get a refresher on the content that was presented, you can now view the breakout session recording below.
While AWS re:Invent 2017 is still fresh in our minds, here are some of the highlights of the most significant announcements.
Aurora Multi-Master/Multi-Region: This is a big deal! The concept of geographically distributed databases with multiple masters has been a long-desired solution. Why is this important?
Having additional masters allows for database writes, not just reads like the traditional read replicas that have been available. This feature enables a true multi-region, highly available solution that eliminates a single point of failure and achieves optimum performance. Previously, third-party tools like Oracle GoldenGate and various log shipping approaches were required to accomplish proper disaster recovery and high availability. This will greatly simplify architectures for some that want to go active-active across regions and not just availability zones. Additionally, it will enable pilot light (and more advanced) DR scenarios for customers that are not going to be using active-active configurations.
Aurora Serverless: Aurora Serverless is an on-demand, auto-scaling configuration for the Aurora MySQL- and PostgreSQL-compatible database service, where the database will automatically start up and scale up or down based on your application’s capacity needs. It will shut down when required, basically scaling down to zero when not being used. Traditionally, Aurora RDS required changing the underlying instance type to scale for database demand. This is a large benefit and cost saver for development, testing, and QA environments. Even more importantly, if your workload has large spikes in demand, then auto-scaling is a game changer in the same way that EC2 Auto Scaling enabled automated compute flexibility.
T2 Unlimited: T2 is one of the most popular instance types used by 2nd Watch and AWS customers, accounting for around 50% of all instances under 2nd Watch Managed Cloud Services. In the case of frequent, small, and inconsistent workloads, T2 offers the best price and performance. However, one of the most common reasons customers do not heavily leverage T2 is concern that a sustained spike in load will deplete burstable credits and result in unrecoverable performance degradation. T2 Unlimited solves this problem by essentially allowing unlimited surges over the former limits. We expect more customers to adopt T2 for those inconsistent workloads as a cost-effective solution, and we will watch to see if this shift is reflected in the instance type data for accounts being managed by 2nd Watch.
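The credit mechanics can be sketched with a toy model. One CPU credit is one vCPU-minute at 100% utilization; the earn rate, starting balance, and cap below loosely resemble a t2.micro but are illustrative defaults, not official figures.

```python
def simulate_credits(hourly_cpu_pct, earn_per_hour=6.0, start=30.0, cap=144.0):
    """Track a burstable instance's credit balance hour by hour.
    An hour at u% CPU costs u/100 * 60 credits; each hour also earns
    a fixed number of credits, up to a cap."""
    balance = start
    history = []
    for u in hourly_cpu_pct:
        balance = balance + earn_per_hour - (u / 100.0) * 60.0
        balance = max(0.0, min(balance, cap))  # pre-T2-Unlimited behavior
        history.append(balance)
    return history
```

At 10% CPU the instance spends exactly what it earns, so the balance holds steady; a sustained spike drains it to zero, which is the performance cliff T2 Unlimited removes by letting you pay for surplus usage instead of stalling at the baseline.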
Spot Capacity: Spot instances are normally used as pools of compute that run standard AMIs and work on datasets located outside of EC2. This is because the instances are terminated when the spot price increases beyond your bid, and all data is lost. Now, when AWS reclaims the capacity, the instance can essentially hibernate, preserving the operating system and data, and startup again when the spot pricing is favorable. This removes another impediment in the use of spot capacity, and will be a large cost saver for environments that only need to be temporarily available.
M5 Instance Type: Given the large increase in performance of the newer processor generations, one can see large cost savings and performance improvements by migrating to a smaller size of the latest instance type that meets your application’s needs. Newer instance types can also offer higher network bandwidth, so don’t put off adopting the latest products if possible.
Inter-Region VPC Peering: It’s always been possible to establish peering relationships between VPCs in the same region. Now the same feature is available across regions: Inter-Region VPC Peering carries traffic between VPCs in different regions over AWS private links, never transiting the open internet, which eliminates the need for VPNs and the networking infrastructure to support them (which, of course, also needs monitoring, patching, and other maintenance). This makes multi-region designs cleaner and easier to implement. It was also announced that users of Direct Connect can now route traffic to almost every AWS region from a single Direct Connect circuit.
There were also some announcements that we found interesting but need to digest a little longer. Look for a follow up from us on these.
EKS: Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes clusters. Even at last year’s AWS re:Invent we heard people wondering where the support for Kubernetes was, particularly since it has become the de facto industry standard over the past several years.
GuardDuty: AWS has now added a cloud-native tool to the security toolbox. This tool utilizes “machine learning” for anomaly detection. AWS GuardDuty monitors traffic flow and API logs for your accounts, letting you establish a baseline for “normal” behavior on your infrastructure, and then watches for security anomalies. These are reported with a severity rating, and remediation for certain types of events can be automated using existing AWS tools. We will be considering the best methods of implementation of this new tool.
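The baseline-then-alert idea can be sketched in a few lines. GuardDuty’s actual models are far more sophisticated, and the call counts below are invented; this just shows the shape of flagging deviations from established normal behavior.

```python
from statistics import mean, stdev

# Hypothetical baseline: API calls per hour during "normal" operation.
baseline = [100, 102, 98, 101, 99, 100, 103, 97]

def find_anomalies(normal, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the baseline mean."""
    mu, sigma = mean(normal), stdev(normal)
    return [x for x in observed if abs(x - mu) > threshold * sigma]
```

With this baseline (mean 100, sample standard deviation 2), an hour with 250 API calls is flagged while 101 passes unnoticed; a real detector would also assign severity and could feed automated remediation using existing AWS tools.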
Fargate: Run Amazon EKS and ECS without having to manage servers or clusters.
Finally, a shameless plug: If compliance is on your mind, watch this AWS re:Invent breakout session from our product and engineering experts.
Peter Meister, Director of Product Management, 2nd Watch
Lars Cromley, Director of Engineering, 2nd Watch
Obviously, there was a lot more going on and it will take some time to go through it. We will keep you up to date with our thoughts.
“Whatever you do in life, surround yourself with smart people who argue with you.” – John Wooden
Many AWS customers and practitioners have leveraged the Well-Architected Framework methodology in building new applications or migrating existing applications. Once a build or migration is complete, however, how many companies implement Well-Architected Framework reviews and perform them regularly? We have found that many companies today do not conduct regular Well-Architected Framework reviews and, as a result, potentially face a multitude of risks.
What is a Well-Architected Framework?
The Well-Architected Framework is a methodology designed to provide high-level guidance on best practices when using AWS products and services. Whether building new workloads or migrating existing ones, security, reliability, performance, cost optimization, and operational excellence are vital to the integrity of the workload and can even be critical to the success of the company. A review of your architecture is especially critical given the rate at which Cloud Service Providers (CSPs) create and release new products and services.
2nd Watch Well-Architected Framework Reviews
At 2nd Watch, we provide Well-Architected Framework reviews for our existing and prospective clients. The review process allows customers to make informed decisions about architecture decisions, the potential impact those decisions have on their business, and tradeoffs they are making. 2nd Watch offers its clients free Well-Architected Framework reviews—conducted on a regular basis—for mission-critical workloads that could have a negative business impact upon failure.
Examples of issues we have uncovered and remediated through Well-Architected Reviews:
Security: Not protecting data in transit and at rest through encryption
Cost: Low utilization and inability to map cost to business units
Reliability: Single points of failure where recovery processes have not been tested
Performance: A lack of benchmarking or proactive selection of services and sizing
Operations: Not tracking changes to configuration management on your workload
Using a standard based methodology, 2nd Watch will work closely with your team to thoroughly review the workload and will produce a detailed report outlining actionable items, timeframes, as well as provide prescriptive guidance in each of the key architectural pillars.
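As a trivial sketch of how review output becomes actionable, findings can be grouped by pillar and the pillars with open items surfaced first. The findings below are hypothetical examples of the issue types listed above, not output from a real review.

```python
# Hypothetical findings from a review, keyed by architectural pillar.
findings = {
    "security": ["data at rest not encrypted"],
    "cost": [],
    "reliability": ["single point of failure; recovery untested"],
    "performance": [],
    "operational excellence": [],
}

def pillars_needing_action(review):
    """Return the pillars with at least one open finding."""
    return sorted(pillar for pillar, items in review.items() if items)
```

Re-running the review on a regular cadence and watching this list shrink is one simple way to track remediation progress over time.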
Thursday’s General Session Keynote kicked off with Amazon CTO Werner Vogels taking the stage to deliver additional product and service announcements along with deeper technical content. Revisiting his vision for 21st Century Architectures from the first re:Invent in 2012, Werner focused on what he sees as the key guiding principles for next-gen workloads.
Voice represents the next major disruption in computing. Stressing this point, Werner announced the general availability of Alexa for Business to help improve productivity by introducing voice automation into your business.
Use automation to make experimentation easier
Encryption is the ‘key’ to controlling access to your data. As such, encrypting data (at rest and in transit) should be a default behavior.
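One concrete way to make encryption at rest a default rather than an afterthought is S3’s default bucket encryption, which AWS released just ahead of re:Invent 2017. The sketch below shows the general shape of the call; the bucket name is a placeholder, and you would run this with your own credentials and bucket.

```shell
# Sketch: make server-side encryption the default for every new object
# in a bucket ("my-example-bucket" is a placeholder name).
aws s3api put-bucket-encryption \
  --bucket my-example-bucket \
  --server-side-encryption-configuration \
  '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
```

With this in place, objects uploaded without an explicit encryption header are still encrypted at rest, which is exactly the “default behavior” Werner was advocating. Encryption in transit is handled separately, typically by requiring TLS (e.g., via a bucket policy denying non-HTTPS requests).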
All the code you should ever write is business logic.
Werner also highlighted the fact that AWS has released 3,951 new services and features since 2012. These services were not built for today but for the workloads of the future. The goal for AWS, Werner says, is to be your partner for the future.
One of the highlights of the keynote was when Abby Fuller, evangelist for containers at AWS, came on stage to talk about the future of containers at AWS. She demoed Fargate, AWS’s fully managed container service. Think of Fargate as Elastic Beanstalk, but for containers. Per the AWS documentation: “It’s a technology that allows you to use containers as a fundamental compute primitive without having to manage the underlying instances. All you need to do is build your container image, specify the CPU and memory requirements, define your networking and IAM policies, and launch. With Fargate, you have flexible options to closely match your application needs and you’re billed with per-second granularity.”
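To make the “specify the CPU and memory requirements, define your networking and IAM policies, and launch” workflow more concrete, here is an illustrative sketch of an ECS task definition for Fargate. All names, ARNs, and the image are placeholders, not real resources; treat this as the general shape rather than a copy-paste recipe.

```json
{
  "family": "demo-web-task",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/DemoTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-web:latest",
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
    }
  ]
}
```

Note that with Fargate the CPU and memory are declared at the task level and the network mode is `awsvpc`, because there are no underlying instances for you to size or manage.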
The Cloud9 acquisition was also a highlight of the keynote. Cloud9 is a browser-based IDE for developers. It is completely integrated with AWS: you can create cloud environments, develop code, and push that code to your cloud environment all from within the tool. It’s going to be especially useful for writing and debugging Lambda functions for developers who have gone all in on serverless technologies.
AWS Lambda Function Execution Activity Logging – Log all execution activity for your Lambda functions. Previously you could only log management events, but this allows you to log data events and get additional execution details.
AWS Lambda Doubles Maximum Memory Capacity for Lambda Functions – You can now allocate 3008MB of memory to your AWS Lambda functions.
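Taking advantage of the new ceiling is a one-line configuration change. The function name below is a placeholder; a nice side effect is that Lambda allocates CPU proportionally to memory, so raising the memory setting also buys your function more compute.

```shell
# Sketch: raise an existing function to the new 3008MB maximum
# ("my-function" is a placeholder name).
aws lambda update-function-configuration \
  --function-name my-function \
  --memory-size 3008
```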
AWS Cloud9 – Cloud9 is a cloud-based IDE for writing, running, and debugging your code.
API Gateway now supports endpoint integrations with Private VPCs – You can now provide access to HTTP(S) resources within your Amazon Virtual Private Cloud (VPC) without exposing them directly to the public Internet.
AWS Serverless Application Repository – The Serverless Application Repository is a collection of serverless applications published by developers, companies, and partners in the serverless community.
We expect AWS to announce many more awesome features and services before the day ends so stay tuned for our AWS re:Invent 2017 Products & Services Review and 2017 Conference Recap blog posts for a summary of all of the announcements that are being delivered at AWS re:Invent 2017.
I have been looking forward to Andy Jassy’s keynote since I arrived in Las Vegas. Like the rest of the nearly 50k cloud-geeks in attendance, I couldn’t wait to learn about all the cool new services and feature enhancements that would be unleashed, ones that can solve problems for our clients or inspire us to challenge convention in new ways.
Ok, I’ll admit it. I also look forward to the drama of the now obligatory jabs at Oracle, too!
Andy’s 2017 keynote was no exception to the legacy of previous re:Invents on those counts, but my takeaway from this year is that AWS has been able to parlay their flywheel momentum of growth in IaaS to build a wide range of higher-level managed services. The thrill I once got from new EC2 instance type releases has given way to my excitement for Lambda and event-based computing, edge computing and IoT, and of course AI/ML!
AWS Knows AI/ML
Of all the topics covered in the keynote, the theme that continues to resonate throughout this conference for me is that AWS wants people to know that they are the leader in AI and machine learning. As an attendee, I received an online survey from Amazon prior to the conference asking for my opinion on AWS’s position as a leader in the AI/ML space. While I have no doubts that Amazon has unmatched compute and storage capacity, and certainly has access to a wealth of information to train models, how does one actually measure a cloud provider’s AI/ML competency? Am I even qualified to answer without an advanced math degree?
That survey sure makes a lot more sense to me following the keynote as I now have a better idea of what “heavy lifting” a cloud provider can offload from the traditional process.
Amazon has introduced SageMaker, a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at any scale. It integrates with S3, and with RDS, DynamoDB, and Redshift by way of AWS Glue. It provides managed Jupyter notebooks and even comes supercharged with several common ML algorithms that have been tuned for “10x” performance!
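For a sense of what the “heavy lifting” SageMaker absorbs looks like in practice, here is a minimal sketch of assembling the parameters for its `CreateTrainingJob` API. Every ARN, bucket, image URI, and name below is an illustrative placeholder, and the actual boto3 call is left commented out since it requires AWS credentials and real resources.

```python
# Sketch: building a request for SageMaker's CreateTrainingJob API.
# All ARNs, bucket names, and the image URI are illustrative placeholders.

def build_training_job_request(job_name, role_arn, image_uri,
                               input_s3_uri, output_s3_uri):
    """Return a request dict shaped for sagemaker.create_training_job()."""
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,      # algorithm container to run
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,                 # IAM role SageMaker assumes
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": input_s3_uri,       # where the training data lives
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": output_s3_uri},
        "ResourceConfig": {                  # the managed training cluster
            "InstanceType": "ml.m4.xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 10,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

request = build_training_job_request(
    "demo-job",
    "arn:aws:iam::123456789012:role/DemoSageMakerRole",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-algo:latest",
    "s3://demo-bucket/train/",
    "s3://demo-bucket/output/",
)
# boto3.client("sagemaker").create_training_job(**request)  # needs credentials
```

The point is what you do not see here: no cluster provisioning, no scheduler, no teardown. You describe the job, and SageMaker runs it and writes the model artifacts back to S3.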
In addition to SageMaker, we were introduced to Amazon Comprehend, a natural language processing (NLP) service that uses machine learning to analyze text. I personally am excited to integrate this into future chatbot projects, but the applications I see for this service are numerous.
After you’ve built and trained your models, you can run them in the cloud, or with the help of AWS Greengrass and its new machine learning inference feature, you can bring those beauties to the edge!
What is a practical application for running ML inference at the edge you might ask?
Dr. Matt Wood demoed a new hardware device called DeepLens for the audience that does just that! DeepLens is a deep-learning enabled wireless video camera specifically designed to help developers of all skill levels grow their machine learning skills through hands-on computer vision tutorials. Not only is this an incredibly cool device to get to hack around with, but it signals Amazon’s dedication to raising the bar when it comes to AI and machine learning by focusing on the wet-ware: hungry minds looking to take their first steps.
Andy’s keynote included much more than just AI/ML, but to me, the latest AI/ML services that were announced on Tuesday represent the signal of Amazon’s future of higher-level services which will keep them the dominant cloud provider into the future.
The sixth annual AWS re:Invent is less than a week away, taking place November 27-December 1 in Las Vegas, Nevada. Designed for AWS customers, enthusiasts, and even cloud computing newcomers, the nearly week-long conference is a great source of information and education for attendees of all skill levels. AWS re:Invent is THE place to connect, engage, and discuss current AWS products and services via breakout sessions ranging from introductory and advanced to expert, as well as to hear news and announcements from key AWS executives, partners, and customers. This year’s agenda offers a full additional day of content, boot camps, hands-on labs, workshops, new Alexa Hack Your Office and Smart Cities hackathons, a Robocar Rally, and the first ever Deep Learning Summit. Designed for developers to learn about the latest in deep learning research and emerging trends, the Deep Learning Summit will feature members of the academic and venture capital communities sharing their perspectives in a series of thirty-minute lightning talks. To offer all of its great educational content, networking opportunities, and recreational activities, AWS is practically taking over the Las Vegas strip, offering an expanded campus with a larger re:Invent footprint and more venues (not to mention a shuttle service!).
2nd Watch is proud to be a 2017 Platinum Sponsor and attending AWS re:Invent for the sixth consecutive year. With every re:Invent conference we attend, we continue to gain unique insight into what attendees can expect. Similar to last year, our seasoned re:Invent alumni have compiled a list of The Top 7 Things to Avoid at re:Invent 2017, and we hope you find the following information useful as you prepare to attend AWS re:Invent next week.
1. Avoid the long lines at Registration (and at the Swag Counter!)
The re:Invent Registration Desk will open early again this year starting Sunday afternoon from 1pm-10pm, giving attendees a few extra hours to check in and secure their conference badges. Registration Desks are located in four locations this year—Aria, MGM Grand, Mirage, and The Venetian—so no matter where your hotel room is along the strip, you’re sure to find a Registration Desk close by. This is particularly helpful so that you don’t have to schlepp around all that conference swag you will receive upon check in. As always, you can’t attend any part of re:Invent until you have your conference badge so be sure you check into Registration as early as possible. This will also ensure that you get the size shirt you want from the Swag Counter!
Expert Tip: Like last year, AWS has added an additional level of security and will be printing each attendee’s photograph onto their badge. Avoid creating a backlog at the registration line because you have to have your photo taken on site. Take a few minutes to upload your photo prior to re:Invent here. BONUS: By uploading your own photo, you make sure to put your best face forward for the week.
2. Avoid Arriving Without a Plan:
The worst thing you can do at re:Invent is show up without a plan for how you will spend your week in Vegas—that includes the breakout sessions you want to attend. With expanded venues and a total of over 1,000 sessions (twice as many as 2016), more hands-on labs, boot camps and one-on-one engagement opportunities, AWS re:Invent 2017 offers more breadth and depth and more chances to learn from the experts than ever before.
If you haven’t already done so, be sure to check out the AWS Event Catalogue and start selecting the sessions that matter most to you. While you’re building your session schedule, might I recommend adding 2nd Watch’s breakout session—Continuous Compliance on AWS at Scale—to your list of must attend sessions? This session will be led by cloud security experts Peter Meister and Lars Cromley and will focus on the need for continuous security and compliance in cloud migrations. Attendees will learn how a managed cloud provider can use automation and cloud expertise to successfully control these issues at scale in a constantly changing cloud environment. Find it in the Event Catalog by searching for SID313 and then add it to your session agenda. Or, click here to skip the search and go directly to the session page.
Expert Tip: Be sure to download the AWS re:Invent Mobile App. Leveraging the mobile app is like having your own, personal re:Invent assistant for the week and will hold your re:Invent schedule, maps from venue to venue, all other activities and reminders, providing a super helpful resource as you navigate the conference. Android users click here to download. Apple users click here to download.
3. Avoid Avoiding the Waitlist
AWS re:Invent 2017 is SOLD OUT, and we anticipate nearly 50,000 people will be in attendance this year. That means, if you haven’t already built your session agenda for the week, you’re likely to find that the *ONE SESSION* you needed to attend is already at capacity. Avoid missing out on sessions by adding yourself to the waitlist for any sessions that you really want to attend. You will be surprised by the number of people that “no-show” to sessions they have registered for, so don’t be afraid to stand in line for that all-too-important session.
4. Avoid Not Knowing Where to Go
As mentioned previously, the re:Invent campus has expanded yet again this year, and there are a few more venues to note when preparing your event schedule. Spanning the length of the Las Vegas strip, events will occur at the MGM Grand, Aria, Mirage, Venetian, Palazzo, Sands Expo Hall, the Linq Parking Lot, and the Encore. Each venue will host tracks devoted to specific topics so, to help you get organized—and map out your week, literally—here’s what you can expect to find at each venue:
MGM Grand: Business Apps, Enterprise, Security, Compliance, Identity, and Windows.
Aria: Analytics & Big Data, Alexa, Container, IoT, AI & Machine Learning, and Serverless.
Mirage: Bootcamps, Certifications, and Certification Exams.
Venetian / Palazzo / Sands Expo Hall: Architecture, AWS Marketplace & Service Catalog, Compute, Content Delivery, Database, DevOps, Mobile, Networking, and Storage.
Linq Lot: Alexa Hackathons, Gameday, Jam Sessions, re:Play Party, and Speaker Meet & Greets.
Encore: Bookable meeting space.
Once you’ve nailed down where you need to be, be sure to allow enough time to get from session to session. While there are breaks between sessions, some venues can be a bit of a hike from others so be sure to plan accordingly. You’ll want to factor in the time it takes to walk between venues as well as the number of people that will be doing the same. As re:Invent continues to grow in size, you can certainly expect that escalators, elevators, hallways, sidewalks and lengthy shuttle lines are going to be difficult to navigate. To help you get a sense of standard walking times between venues, AWS has put together a nifty chart that details all the travel information you might need (minus any stops on casino floors or crowds of folks clogging your path).
This year, AWS is offering a shuttle service between venues if you don’t want to walk or need to get to your next destination quickly.
AWS recommends allowing yourself 30 minutes to travel between venues and is providing the following shuttle schedule to help you get from Point A to Point B:
Sunday, November 26: 12PM-1:30AM
Monday, November 27: 6AM-12:30AM
Tuesday, November 28: 6AM-10PM
Wednesday, November 29: 6AM-12:30AM
Thursday, November 30: 6AM-12:30AM
Friday, December 1: 6AM-3PM
NOTE: Bellagio shuttles run only during AM and PM peak hours (Sunday 10PM-1:30AM, Monday-Thursday 6AM-10AM & 4PM-7:30PM, Friday 6AM-10AM).
Expert Tip: If you need to get from The Palazzo to The Venetian and want to avoid navigating the casino floors, restaurant row and the crowds around the entrance to The Sands Convention Center, head to the Canyon Ranch Spa from either hotel. From the Palazzo, the spa is located on the 4th floor and from the Venetian it is located on the 3rd floor. The spa connects both venues through a series of long, colorful and rarely traveled corridors making the trip quick and easy for those who don’t mind taking the road less traveled. Not to mention, this route can also offer a moment of peaceful sanity!
5. Avoid Sleeping In, Being Late, or Skipping Out Entirely
With so many learning and networking opportunities, it’s easy to get caught up in exciting—yet exhaustive—days full of breakout sessions, hands-on labs, training sessions, and of course, after-hours activities and parties. Only you know how to make the most of your time at re:Invent, but if we can offer some advice…be sure to get plenty of sleep and avoid sleeping in, getting to sessions late or worse…skipping out on morning sessions entirely. Especially when it comes to the keynote sessions on Wednesday and Thursday morning!
AWS CEO Andy Jassy will present the Wednesday morning keynote, while Amazon CTO Werner Vogels will present on Thursday morning. Both keynotes will be full of exciting product announcements, enhancements, and feature additions, as well as cool technical content and enterprise customer success stories. Don’t be the last to know because you inadvertently overslept and/or partied a little too hard the night before!
Customers don’t need to reserve a seat for either keynote; however, there is a cap on the number of attendees who can watch from the keynote hall. Keynotes are offered on a first come, first served basis, so be sure to get there early.
Expert Tip: If you don’t want to wait in line to sit in the keynote hall, AWS will have many options for watching the keynote in overflow rooms. If you’re still groggy from the previous night’s events, the overflow rooms are an ideal place where you can watch the keynote with a bloody mary, mimosa, or coffee.
6. Avoid Being Anti-Social
AWS re:Invent is one of the best locations to network and connect with like-minded peers and cloud experts, discover new partner offerings and, of course, let loose at the quirky after-hours experiences, attendee parties, and partner-sponsored events.
Avoid being anti-social by taking advantage of the many opportunities to network with others and meet new people. AWS has some great activities planned for conference goers. To help you play hard while working harder, here is a list of all the fun activities that are planned for re:Invent 2017:
When: Sunday, November 26, 12PM-6PM
Where: The Venetian
Robocar Rally Mixer
When: Sunday, November 26, 6PM-10PM
Non-Profit Hackathon Mixer
When: Sunday, November 26, 7PM-9PM
Where: The Venetian
When: Sunday, November 26, 10:30PM-1AM
Where: The Venetian
AWS re:Invent 4K
When: Tuesday, November 28, 6AM-8AM
Where: The Mirage
When: Tuesday, November 28, 5PM-7PM
Where: The Venetian & The Linq Lot
When: Tuesday, November 28, 5PM-7PM
Where: The Linq Lot
Cyclenation Spin Challenge
When: Wednesday, November 29, (three timeslots) 7AM, 8AM, 5PM
Where: The Mirage
When: Wednesday, November 29, 5:30PM-7:30PM
Where: MGM Grand & The Venetian
When: Wednesday, November 29, 5:30PM-7:30PM
Where: MGM Grand & The Venetian
2nd Watch After Party
When: Wednesday, November 29, 9PM-12AM
Where: Rockhouse at the Palazzo. Click here to be added to the 2nd Watch After Party Waitlist (see “What to Avoid” #3 above if you’re hesitant to be waitlisted)!
When: Thursday, November 30, (three timeslots) 7AM, 8AM, 5PM
Where: The Mirage
When: Thursday, November 30, 8PM-12AM
Where: The Park at the Linq Lot
Expert Tip: Don’t forget to bring plenty of business cards. With so many people to meet, opportunities to connect with peers and experts, and after-hours parties to attend, you’ll want to make sure to pack extra cards to avoid running out early in the week. When you receive a business card from someone else, try to immediately take a photo of it with your smartphone and save it to a photo album dedicated solely to networking. This will ensure that you have the details stored somewhere should you happen to misplace an important contact’s business card.
7. Avoid Forgetting to Pack That All-Too-Important Item
Whether you’re staying at The Venetian, Mirage, Encore, or other property, your hotel room will be your home away from home for nearly an entire week. Of course, every hotel will have in-room amenities and travel essentials, but inevitably, we all will forget that one important item that we won’t be able to live without, especially for an entire week. Our experts have pulled together a check list to help you pack for your trip and ensure you have all the comforts of home and office during your week in Vegas.
Your Favorite Toiletries:
Not everyone is in love with the in-room toiletries that hotels have to offer in each of their suites. If you have a favorite, be sure to bring it. Here is a quick list to ensure you don’t forget something:
Hair Styling Products (if that’s your thing)
Contact Case & Solution
Spare Pair of Contacts
Medications & First Aid:
Whether your headache or hangover cure calls for Aspirin, Ibuprofen, or something stronger, it’s a good idea to pack your preferred treatment along with any other first aid remedies and prescription medications you might need. Band-Aids, blister protectors, and antihistamines are also recommended.
Chapstick & Lotion:
It is the desert, after all, and with dry air circulating throughout the venues, your skin (including your lips) is bound to dry out. We recommend bringing medicated Chapstick and fragrance-free lotion (fragrances in most lotions can often dry out your skin even more!) and keeping a spare with you at all times.
Breath Mints and/or Mint-flavored Gum:
No explanation necessary.
Business Cards:
This is a repeat from one of our other tips but an important one to remember, so we don’t mind mentioning it again.
Chargers & Battery Packs:
Nothing is worse than being in between sessions with a 10% cell phone or laptop battery and realizing you left your chargers back in your room. We recommend bringing at least two phone chargers and two laptop chargers: One for your room and one for the backpack or briefcase you’ll be carrying throughout the conference. Additionally, while there will be several charging stations throughout re:Invent (and outlets on most every wall), it’s a good idea to bring a battery pack with several hours of charging time just in case you can’t find an open spot to plug in.
Water Bottle:
You will definitely want to stay hydrated throughout the week, and the tiny cups offered at the water stations just won’t quench your thirst quite the way you will need them to. It’s a good idea to pack a water bottle (we recommend one that can hold 17 oz) so that you avoid having to refill often and have plenty of thirst-quenching liquid to keep you hydrated throughout the day.
Comfortable Shoes:
Your shoes will be your saving grace by the end of each day, so be sure to bring a pair or two that you feel comfortable walking several thousands of steps in.
Business Casual Attire:
While business casual attire is often recommended at re:Invent, there can be many interpretations of what is appropriate. Our advice is to pack clothing that you would feel confident wearing should you run into your boss or someone you wish to impress. Jeans are perfectly acceptable in either case, but make sure to use good judgment overall when selecting your attire for sessions, dinners and parties you plan to attend.
Extra Cash:
In addition to needing cash for meals on the go, bar tabs or that faux diamond-encrusted figurine you’ve been eyeing in the gift shop, you’ll want to bring a little extra cash if you plan to try your luck at the casinos. There are ATMs on the casino floors, but they typically charge a service fee in the amount of $3-$5 in addition to your bank’s own service fees.
Notebook & Pen/Pencil:
It’s always a good idea to bring a good ol’-fashioned notebook with you to your sessions. Not only is it a fail-proof way to capture the handy tips and tricks you’ll be learning, it’s also the quietest way to track those notable items that you don’t want to forget. Think about it – if 100 people in your breakout session were all taking notes on a laptop, it would be pretty distracting. Be bold. Be respectful. Be the guy/gal that uses paper and a pen.
A Few Final Thoughts
Whether this is your first trip to AWS re:Invent or you’re a seasoned re:Invent pro, you’re sure to walk away with an increased knowledge of how cloud computing can better help your business, tips and tricks for navigating new AWS products and features, and a week’s worth of memories that will last a lifetime. We hope you make the most of your re:Invent 2017 experience and take advantage of the incredible education and networking opportunities that AWS has in store this year.
Last but certainly not least, we hope you take a moment during your busy week to visit 2nd Watch in booth #1104 of the Expo Hall where we will be showcasing our customers’ successes. You can explore 2nd Watch’s Managed Cloud Solutions, pick up a coveted 2nd Watch t-shirt and find out how you can win one of our daily contest giveaways—a totally custom, totally rad 2nd Watch skateboard!
Expert Tip: Make sure you get time with one of 2nd Watch’s Cloud Journey Masters while at re:Invent. Plan ahead and schedule a meeting with one of 2nd Watch’s AWS Professional Certified Architects, DevOps, or Engineers. Last but not least, 2nd Watch will be hosting its annual re:Invent after party on Wednesday, November 29. If you haven’t RSVP’d for THE AWS re:Invent Partner Party, click here to request to be added to our waitlist. We look forward to seeing you at AWS re:Invent 2017!