By now you’ve likely heard of VMware Cloud on AWS (VMC), either from the first announcement of the offering or more recently, as activity in the space has heated up now that the product has reached maturity. On-premises, we loved what VMware could do for us in terms of ease of management and full utilization of hardware resources. In the cloud, however, the push for native services is ever present, and a common first reaction to VMC is “Why would you do that?” That question is the elephant in the room whenever the topic arises. Previous experience with manually deployed VMware in the AWS cloud required nested virtualization and nearly the same care and feeding as on-premises, which only reinforces that first reaction. Common sense would dictate, however, that if two 800-pound gorillas team up, they just might be able to take on that elephant! As features have been added to the product and customer feedback implemented, the offering has become more and more compelling for VMware’s enormous installed base.
Some of the most attractive features of the cloud are the managed services, which reduce the administrative overhead normally required to maintain reliable and secure operations. Let’s say you want to use SQL Server in AWS. Moving to the RDS service, where there is no maintenance, configuration, or patching of the underlying server, is an easy decision. After some time, the thought of configuring a server and installing and maintaining an RDBMS seems archaic and troublesome, and your DBA can focus on the business value the database provides. VMware Cloud on AWS is no different. The underlying software and physical hardware are no longer a concern. You are always on the optimal version of the platform with no effort, and additional hardware can be added to a cluster at the press of a button. So, what software or service helps manage and control the entirety of your IT estate? There are many third-party software solutions, managed service providers, and up-and-coming native services like Simple Systems Manager. Now imagine a cloud-based managed service that works for both on-premises and cloud resources and has an existing, mature ecosystem where nearly everyone in enterprise IT has basic to advanced knowledge. Sounds attractive, doesn’t it? That is the idea behind VMC.
The architecture of VMC is based on dedicated bare-metal systems physically located in AWS datacenters. VMware Cloud on AWS Software-Defined Datacenters (SDDCs) are deployed with a fully configured vSAN running on NVMe flash storage local to the cluster, which can currently expand up to 32 nodes. You are free to provision the hosts any way you see fit. This arrangement also allows full access to AWS services and keeps resources in the same low-latency network. There is also a connector between the customer’s AWS account and the VMC SDDC, allowing direct low-latency access to existing AWS resources in a client VPC. For management, Hybrid Linked Mode gives a single logical view spanning both the on-premises and VMC vCenter servers, allowing control of the complete hybrid environment with vCenter and the familiar web console.
Figure 1. VMware Cloud on AWS Overview
Below are some selected capabilities, benefits, and general information:
There is no immediate requirement for refactoring of existing applications, but access to AWS services allows for future modernization.
Very little retraining of personnel is required. Existing scripts, tools and workflows are reusable.
Easy expansion of resource footprint without deploying more physical infrastructure.
Easy migration of VMs across specific geographies or between cloud/premises for compliance and latency reasons.
VMware native resiliency and availability features are fully supported, including DRS for workload distribution, shared storage for clustered application support, and automatic VM restart after node failure.
DR as a service with Site Recovery is supported, including the creation of stretched clusters, which can provide zero RPO between AZs within an AWS region. This service takes advantage of AWS infrastructure that is already designed with high availability in mind.
VMware Horizon 7 is fully supported. This can extend on-premises desktop services without buying additional hardware and enables placement of virtual desktops near latency-sensitive applications in the cloud.
The service has GDPR, HIPAA, ISO, and SOC attestations to enable the creation of compliant solutions.
Region expansion is underway and two new regions have recently come online in Europe.
Discounts are available based on existing product consumption and licensing.
Integration with CloudFormation for automated deployment is available.
Figure 2: VMware Cloud on AWS Target use cases
So for those currently using VMware and considering a move to the cloud and/or a hybrid architecture, VMC offers the most straightforward gateway into this space. The service brings the hundreds of services in the AWS ecosystem into play, along with a consistent operational model and the ability to retain familiar VMware tools, policies, management practices, and investments in third-party products. So instead of planning and executing your next hardware refresh and VMware version upgrade, consider migrating to VMware Cloud on AWS! For help getting started, contact us.
In my previous blog I gave a fairly high-level overview of what automated AWS account management could (or rather should) entail. This blog will drill deeper into the processes and give you some real-world code samples of what this looks like.
AWS Organizations and Linked Account Creation:
As mentioned in my last blog, AWS recently announced the general availability of AWS Organizations, allowing you to create linked or nested AWS accounts under a master account and apply policy-based management under the umbrella of the root account. It also allows for hierarchical management (up to five levels deep) of linked accounts by Organizational Units (OUs). Policies can be applied at the global level, the OU level, and the individual account level. It is important to note that conflicting policies always defer to the parent entity’s permission set: an IAM user/role in an account may have permission to perform some action, but if the account, OU, or global settings at the Organizations level deny that action, the result for the IAM resource is a deny. Likewise, the effective permissions for a resource are the intersection of the resource’s direct permissions assigned in IAM and the permissions allowed by Organizations. This means you can lock linked accounts down to do things like “only manage Route53 DNS resources” or “only manage S3 resources” using Organizations policies. A pretty nice way of segmenting off security and reducing the potential blast radius.
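As an illustration, a service control policy that limits an account to S3 could look like the following sketch. The policy name, description, and file name are my own placeholders, and the create-policy call is shown commented out because it requires master-account admin credentials:

```shell
# Hypothetical SCP allowing only S3 actions in any account it is attached to.
# (With the default FullAWSAccess policy detached, an allow-list SCP like this
# means everything outside S3 is implicitly denied.)
cat > s3-only-scp.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "s3:*", "Resource": "*" }
  ]
}
EOF

# Create the SCP in the organization (run with master-account credentials):
# aws organizations create-policy --name AllowS3Only \
#   --type SERVICE_CONTROL_POLICY \
#   --description "Limit member account to S3" \
#   --content file://s3-only-scp.json
```

Once created, the policy can be attached to an account or OU with attach-policy.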
I am going to pick the lowest common denominator for the following examples: the AWS CLI. Though I rarely use it for actual automation code, I figure most folks are familiar with it, and it has a pretty intuitive syntax.
Step 1: Enable Organizations on your root account
Ensure that your AWS profile environment variable is set to the root-account AWS profile that has the necessary permissions to work with AWS Organizations. Alternatively, if you don’t want to use an environment variable, you can either make sure the default AWS profile is the one with permissions on your root account, or specify the --profile argument when typing your AWS CLI commands. I’m going to use the AWS_DEFAULT_PROFILE environment variable in my examples here (output redacted).
> export AWS_DEFAULT_PROFILE=myrootacctadmin
This of course assumes you have a profile set up under your HOME dir in the .aws/credentials file named myrootacctadmin.
Now that we have our environment set we can get on with running the AWS CLI commands to create our organization.
Let’s be safe and make sure we don’t already have an organization created under our root account:
$ aws organizations list-roots
An error occurred (AWSOrganizationsNotInUseException) when calling the ListRoots operation: Your account is not a member of an organization.
As the error message indicates, this account is not currently part of any organization and will need to be configured to use Organizations if we want to use it as our master account and create linked accounts underneath it.
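Enabling Organizations is a single API call. A minimal sketch, wrapped in a function for reuse in automation scripts (the ALL feature set enables both consolidated billing and policy-based controls; run with the myrootacctadmin profile active):

```shell
# Enable Organizations on the current account, making it the master,
# then list the root entry to confirm.
enable_org() {
  aws organizations create-organization --feature-set ALL
  aws organizations list-roots
}
```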
Once Organizations has been enabled and we run list-roots again, our myrootacctadmin account is indeed listed as the root (i.e., master) of the entire organization. This is exactly what we wanted. Now let’s see which AWS accounts are identified as part of this organization…

Step 2: Create a New Linked Account
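Listing the member accounts is a matter of aws organizations list-accounts; at this point only the master account appears. Creating a linked account can then be sketched as follows (the email address and account name below are hypothetical placeholders):

```shell
# Create a linked account under the organization. create-account returns a
# CreateAccountStatus structure; we extract its request Id for later polling.
create_linked_account() {
  local email="$1" name="$2"
  aws organizations create-account \
    --email "$email" \
    --account-name "$name" \
    --query 'CreateAccountStatus.Id' --output text
}

# Example (placeholders): create_linked_account "aws+linked1@example.com" "linked-account-1"
```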
The actual creation of the account is not instantaneous, and the API responds to the create-account call before the new account is fully provisioned. While creation is usually quick to complete, if we run additional automation against the account before it is ready, the API may return an error indicating the account is not yet available. So prior to performing additional configuration on the new account, we need to ensure the State has reached SUCCEEDED. In your automation code you will generally just loop until State equals SUCCEEDED before moving on to the next step. It is also a good idea to catch failures (e.g., State == “FAILED”) and handle them gracefully. The account creation status can be checked as follows:
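A minimal polling sketch, assuming the request Id returned by the create-account call above (the five-second sleep interval is an arbitrary choice):

```shell
# Poll describe-create-account-status until the request reaches a final state.
wait_for_account() {
  local request_id="$1" state
  while true; do
    state=$(aws organizations describe-create-account-status \
      --create-account-request-id "$request_id" \
      --query 'CreateAccountStatus.State' --output text)
    case "$state" in
      SUCCEEDED) echo "Account ready"; return 0 ;;
      FAILED)    echo "Account creation failed" >&2; return 1 ;;
      *)         sleep 5 ;;   # IN_PROGRESS: wait and retry
    esac
  done
}
```

Usage: wait_for_account car-examplerequestid (the Id from the create-account response), then proceed with any follow-on configuration.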
Congratulations! You’ve just enabled AWS Organizations and created your first linked account!
At this point you should have a couple of emails from AWS in the inbox of the email address used to create the new account. They are standard boilerplate emails: one is a “Welcome to Amazon Web Services” email, and the other tells you that your account is ready, with some “getting started” links.
Step 3: Reset New Linked Account Root Password
Now that your linked account has been created, you will need to go through the AWS root account password reset workflow to make the new account accessible from either the AWS Web Console or the AWS APIs. The recommended approach here is to reset the root account password, enable MFA, create an IAM user with administrator privileges, store the root account secrets in a VERY secure place, and use them only as a last resort for account access.
Here’s a shortened URL that will take you directly to the root account password reset page: http://amzn.pw/45Nxe
Step 4: (Optionally) Create Organizational Units
Let’s go through a couple of examples of Organizational Units.
OU for only allowing S3 services
OU for only allowing services in us-west-2 and us-east-1 regions
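A sketch of the second example: create an OU and attach an SCP that denies actions outside us-west-2 and us-east-1. The parent root Id, OU name, policy Id, and file name are hypothetical placeholders, and real-world versions of this policy typically exempt global services such as IAM from the region condition:

```shell
# Hypothetical SCP denying any action requested outside the two approved regions.
cat > region-restrict-scp.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": ["us-west-2", "us-east-1"]
        }
      }
    }
  ]
}
EOF

# Create the OU and attach the policy (run with master-account credentials):
# aws organizations create-organizational-unit --parent-id r-examplerootid --name "USOnlyOU"
# aws organizations create-policy --name DenyOutsideUSRegions \
#   --type SERVICE_CONTROL_POLICY --content file://region-restrict-scp.json
# aws organizations attach-policy --policy-id p-examplepolicyid --target-id ou-exampleouid
```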
“What if I want to bring my existing accounts under the umbrella of Organizations?” you ask.
Good news! You can invite existing AWS accounts to join your organization. Using the API you can issue an invitation to an existing account by Account ID, Email, or Organization. For the sake of simplicity, let’s use an Account ID (222222222222) for the following example (again, using the root/master account AWS profile):
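The invitation itself can be sketched as follows, run with the master-account profile (the wrapper function is just for illustration):

```shell
# Invite an existing account into the organization by its 12-digit Account Id.
# The API returns a Handshake structure containing the handshake Id.
invite_account() {
  aws organizations invite-account-to-organization \
    --target "Id=$1,Type=ACCOUNT"
}

# Example: invite_account 222222222222
```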
A couple of things of note: the handshake Id is what will be required to accept the invitation on the linked-account side. Also notice the difference between the RequestedTimestamp (epoch 1524610827.55) and the ExpirationTimestamp (epoch 1525906827.55): 1,296,000 seconds. Divide that by the 86,400 seconds in a day and we get 15 days.
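The window arithmetic checks out:

```shell
# Difference between the two epoch timestamps, converted to days.
echo $(( (1525906827 - 1524610827) / 86400 ))   # prints 15
```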
At this point you have 15 days to issue an acceptance of the invitation (aka the handshake) from the target AWS account. You could simply log in to the AWS Web Console, navigate to Organizations, and accept the invitation, but that’s not what this article is about now, is it? We’re talking automation here! And, as all good DevOpsers know, we utilize security entities that employ PoLP (Principle of Least Privilege) to perform process-specific tasks.
This means we aren’t going to do something ludicrous like adding AWS access keys to our root account login (please don’t ever do this). Nor are we going to create an IAM user with Administrator access for this very specific task. You can create either a User or a Role in the target account to accept the handshake, although creating a Role will require you to assume that Role using STS, which might be overkill. On the other hand, you might use a Lambda function to automate the handshake, in which case you most certainly would utilize an IAM Role. Either way, the following IAM policy document will provide the User/Role with the required permissions to accept (or delete) the invitation:
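A sketch of such a policy, reconstructed under the assumption that the User/Role only handles invitations (the action names come from the Organizations API; iam:CreateServiceLinkedRole is included because accepting a handshake creates the Organizations service-linked role in the joining account; the handshake Id in the commented command is a placeholder):

```shell
# Least-privilege policy for listing and accepting/declining/deleting the invitation.
cat > accept-handshake-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "organizations:ListHandshakesForAccount",
        "organizations:AcceptHandshake",
        "organizations:DeclineHandshake",
        "organizations:DeleteHandshake",
        "iam:CreateServiceLinkedRole"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# With a User/Role carrying that policy in the invited account, accept the handshake:
# aws organizations accept-handshake --handshake-id h-examplehandshakeid
```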
For some IT organizations, the cloud computing paradigm poses critical existential questions: How does my IT organization stay relevant in a cloud environment? How does IT still provide value to the business? What can be done to improve the business’ perception of IT’s contribution to the company? Without a clear approach to these and other related questions, IT organizations stumble into a partially thought-out cloud computing strategy and miss out on capturing the short- and long-term financial ROI and transformational benefits of a cloud-first strategy.
Several key concepts and principles from ITIL’s Service Strategy lifecycle stage lend themselves to defining and guiding a strategic approach to adopting and implementing a cloud-first strategy. In this article, we’ll highlight and define some of these key principles and outline a tactical approach to implementing a cloud-first strategy.
One of the key concepts leveraged in ITIL’s Service Strategy is the Run-Grow-Transform framework from Gartner. From an executive management perspective, the IT organization’s contribution to the company’s goals and objectives can be framed along the Run-Grow-Transform model – specifically around how IT can help the company (1) Run-The-Business, (2) Grow-The-Business, and (3) Transform-The-Business.
The CIO’s value is both objectively and subjectively measured by answering:
1 – How can IT reduce the cost of current IT operations, thus improving the bottom line?
2 – How can IT help the business expand and gain greater market share with our current business offerings?
3 – How can IT empower the business to venture out into new opportunities and/or develop new competitive business advantage?
We’ll take a close look at each model area, highlight key characteristics, and give examples of how a cloud-first policy can enable a CIO to contribute to the company’s goals and objectives, not only remaining relevant to the organization but enabling business innovation.
Run-the-Business and Cloud-First Strategy
Run the Business (RTB) is about supporting essential business operations and processes. This usually translates to typical IT services and operations such as email-messaging systems, HR services, Payroll and Financial systems. The core functionality these IT services provide is necessary and essential but not differentiating to the business. These are generally viewed as basic core commodity services, required IT costs for keeping the business operational.
The CIO’s objective is to minimize the cost of RTB activities without any compromise to the quality of service. A cloud-first policy can achieve these outcomes. It can reduce costs by moving low value-add IT activities (sometimes referred to as ‘non-differentiating work’) to a cloud provider that excels at performing the same work with hyper efficiency. Add in the ability of a cloud provider to leverage economies of scale, and you have a source of reliable, highly cost-optimized IT services that cannot be matched by any traditional data center or hosting provider (see AWS’s James Hamilton discuss data center architecture at scale). Case studies from GE, Covanta, and Conde Nast bear out the benefits of moving to AWS and enabling their respective CIOs to improve their business’ bottom line.
Grow-the-Business and Cloud First Strategy
Grow the Business (GTB) activities are marked by enabling the business to successfully increase market share and overall revenue in existing markets. If a company doubles its customer base, then the IT organization responds with timely and flexible capacity to support such growth. Generally, an increase in GTB spending should be tied to an increase in business revenue.
Cloud computing providers such as AWS are uniquely capable of supporting GTB initiatives. AWS’ rapid elasticity drastically alters the traditional management of IT demand and capacity. A classic case in point is the “Black Friday” phenomenon. If the IT organization does not have sufficient IT resources to accommodate the projected increase in business volume, the company risks missing out on revenue capture and may experience negative brand impact. If the IT organization overprovisions its IT resources, unnecessary costs are incurred, adversely affecting the company’s profits. Other similar business phenomena include “Cyber Monday,” Super Bowl ads, and product launches. Without a highly available and elastic cloud computing environment, IT will struggle to support GTB activities (see the AWS whitepaper “Infrastructure Event Readiness” for a similar perspective).
A cloud’s elasticity addresses both ends of the spectrum: it can not only ramp up quickly in response to increased business demand but also scale down when demand subsides. Additionally, AWS’ pay-for-what-you-use model is a powerful differentiating feature. Some key use cases include Crate & Barrel and Coca-Cola. Through a cloud-first strategy, a CIO is able to respond to GTB initiatives and activities in a cost-optimized manner.
Transform-the-Business and Cloud Computing
Transform the Business (TTB) represents opportunities for a company to make high-risk but high-reward investments. This usually entails moving into a new market segment with a new business or product offering. Innovation is the key success factor in TTB initiatives. Traditionally this is high risk to the business because of the upfront investment required to support new business initiatives. But in order to innovate, IT and business leaders need to experiment, to prototype and test new ideas.
With a cloud-first policy, the IT organization can mitigate the high-risk investment, yet still obtain the high rewards by enabling a ‘fail early, fail fast’ strategy in a cloud environment. Boxever is a case study in fail fast prototyping. Alan Giles, CTO of Boxever, credits AWS with the ability to know within days “if our design and assumptions [are] valid. The time and cost savings of this approach are nearly incalculable, but are definitely significant in terms of time to market, resourcing, and cash flow.” This cloud-based fail-fast approach can be applied to all market-segments, including government agencies. The hidden value in a cloud-based fail fast strategy is that failure is affordable and OK, making it easier to experiment and innovate. As Richard Harshman, Head of ASEAN for Amazon Web Services, puts it, “Don’t be afraid to experiment. The cloud allows you to fail fast and fail cheap. If and when you succeed, it allows you to scale infinitely and go global in minutes”.
So what does a cloud-first strategy look like?
While this is a rudimentary, back-of-the-envelope style outline, it provides a high-level, practical methodology for implementing a cloud-first based policy.
For RTB initiatives: Move undifferentiated shared services and supporting services to the cloud, either through Infrastructure-as-a-Service (IaaS) or Software-as-a-Service (SaaS) based solutions.
For GTB initiatives: Move customer-facing services to the cloud to leverage dynamic supply and demand capacity.
For TTB initiatives: Set up and teardown cloud environments to test and prototype new ideas and business offerings at minimal cost.
In addition to the Run-Grow-Transform framework, the ITIL Service Strategy lifecycle stage provides additional guidance from its Service Portfolio Management, Demand Management, and Financial Management process domains that can be leveraged to guide a cloud-first based strategy. These principles, coupled with other related guidance such as AWS Cloud Adoption Framework, provide a meaningful blueprint for IT organizations to quickly embrace a cloud-first strategy in a structured and methodical manner.
By aggressively embracing a cloud-first strategy, CIOs can demonstrate their business relevance through RTB and GTB initiatives. Through TTB initiatives IT can facilitate business innovation and transformation, yielding greater value to their customers. We are here to help our customers, so if you need help developing a cloud-first strategy, contact us here.
Covanta Energy and 2nd Watch talk with SiliconANGLE Media at AWS re:Invent 2016. Find out why Covanta decided to go all-in on Amazon Web Services and how 2nd Watch helped them make the transition in only 16 weeks.
Every enterprise knows by now that it can save money by simply lifting and shifting workloads to the cloud, but many are missing the larger opportunity to also make money by moving. While quick cost savings are good for the bottom line, they do little to move the top-line numbers. To achieve both savings and earnings, corporate thinking about technology must change in order to enable faster processes leveraged enterprise-wide.
In this AWS re:Invent 2016 breakout session, we explored multiple customer success stories in which customers evolved from leveraging basic compute and storage products (EC2 and S3) to integrating new services into operations with Lambda, DynamoDB, CodeDeploy, and more. Once this is achieved, enterprises can manage and deploy code rapidly in a programmatic, elastic, and secure network, ensuring governance and security standards across the globe. We also looked at the migration process trusted by hundreds of clients, as well as how to cope with the process and people components that are so important to enabling agility, while focusing heavily on the technology.
Dive deep into the technology that allows the world’s largest beverage manufacturer to manage hundreds of AWS accounts, hundreds of workloads, thousands of instances, and hundreds of business partners around the globe. The company’s configuration management system has Puppet at the core and relies on over a dozen core and emerging AWS products across accounts, availability zones, and regions. This complex and globally available system ensures all of the company’s workloads in AWS meet corporate policies while also allowing rapid scale of both consumer and enterprise workloads.