What does that even mean?
What I am talking about here is the automation of the following:
- AWS Linked Account Creation (the creation of secondary accounts under a single master account)
- Account Initialization and Configuration
- Continuous Compliance
It is commonplace for organizations to manage their AWS assets/resources across a wide range of different AWS accounts. This is nothing new, and we’ve seen some of our customers scale this into the hundreds. This has some pretty obvious implications from an operational, security, and accounting standpoint.
AWS Linked Account Creation
First there is the creation of the linked account itself, which can be a time-consuming and arduous (if one-time) process. Even if you have a rigid process for this, it is inevitable that human error will introduce some drift or inconsistency at some point. It's not a matter of if, just a matter of when. There is also the tracking of the root account credentials and everything that goes along with that. Looks like another process that is ripe for some sweet, sweet automation. Until very recently there was no API available for this, but AWS released a beta API to create linked accounts around a year ago, and it has recently gone to general availability. So score one for automation!
Account Initialization and Configuration
Now you've got your shiny new linked account, but for every account you manage you have to ensure that all of your base settings and resources are properly set up (e.g. AWS CloudTrail, AWS Config, IAM password policies, SAML federation with your central AD, on and on). Not only set up, but set up in a consistent way so that you don't have drift between accounts. Ok, so you could put together a nice CloudFormation template (CFT), manage it in Terraform, or possibly use a homegrown set of scripts (bash + AWS CLI, Python, Ruby, etc.). Those are all a great start, but you still need to be able to audit those resources to ensure they are what they are supposed to be. Also, you need to support the ability to push changes to those resources.
A few examples…
- IT AD Admin: The ADFS servers are updating their XML metadata doc, so we need you to go update the ADFS SAML Federation for our 37 linked accounts.
- IT Security Admin: We need to actively manage our set of IAM Roles that map to ADFS groups and their respective permissions on a regular and ongoing basis. How are we going to quickly and consistently do that across our 37 linked accounts?
- IT Security Admin: Hey, our email address for AWS CloudTrail notifications (SNS subscription) needs to be updated to use a new email address. I need you to get that updated on all of our 37 linked accounts ASAP!
And on and on it goes. Suffice it to say, there is a never-ending need to be able to make modifications across one, several, or all of your linked AWS accounts. You need an approach for handling what would normally be an unwieldy and tedious bit of guaranteed work. The more human intervention required to manage these things, the more likely we are to see inconsistencies, errors, and misses. And we've seen enough cautionary tales of failed security practices in the news in the past few years that I don't need to stress the importance of getting this stuff right. Every time. All the time.
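Requests like the ones above all reduce to the same pattern: assume a cross-account admin role in every linked account and apply one change. Here is a minimal sketch of that fan-out. The role name ("OrgAdminRole") is a placeholder assumption, not a real AWS default; substitute whatever cross-account role your organization provisions. The injectable STS client exists only so the loop can be exercised without live credentials.

```python
def run_in_each_account(account_ids, role_name, task, sts_client=None):
    """Assume `role_name` in each linked account and call task(account_id, creds).

    Returns a dict mapping account id to the task's return value.
    """
    if sts_client is None:
        import boto3  # deferred import so the helper can be tested with a stub
        sts_client = boto3.client("sts")
    results = {}
    for acct in account_ids:
        # Temporary credentials scoped to the cross-account role in `acct`
        creds = sts_client.assume_role(
            RoleArn="arn:aws:iam::{}:role/{}".format(acct, role_name),
            RoleSessionName="bulk-update",
        )["Credentials"]
        results[acct] = task(acct, creds)
    return results
```

For the ADFS example, the `task` callable would build an IAM client from the credentials and update the SAML provider's metadata document in that account; for the SNS example, it would update the subscription endpoint.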
Once you have these things configured, you really need a way to continually audit those resources and settings and, ideally, automatically respond to drift events. This one is a bit trickier than the others because, while you can use tools like CloudFormation or Terraform to set up your initial settings and configurations, the resources they create can be modified afterwards outside of the tool they were created/configured with in the first place. Tools like AWS CloudTrail and AWS Config provide valuable tracking information for auditing resources but alone don't solve this puzzle, especially if you are talking about managing this across a few dozen accounts. Something more robust must be employed to collect that data and do something intelligent with it.
How do I escape this sort of multi-account management nightmare you are describing?!!
I'll take a deeper dive into this in my next blog post, but here is a high-level overview of the architecture and accompanying tools and technologies you can put in place to pull it off.
AWS Linked Account Creation
With the somewhat recent release of the Organizations API, this has become a reality. As per the CreateAccount API documentation, you will need to ensure that AWS Organizations is enabled in the master account. But fear not! It probably already is. Specifically, if you are already running multiple accounts under a master account, then it most certainly is. I won't bore you with details; AWS already has a very nice article detailing the process required to use Organizations and the API to automate account creation. Pretty spiffy!
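As a rough sketch, the CreateAccount call is asynchronous, so automation needs to poll the request status until it resolves. This assumes Organizations is enabled in the master account; the injectable client and zero-delay option exist only so the polling logic can be checked offline.

```python
import time

def create_linked_account(name, email, org_client=None, poll_seconds=5):
    """Start linked-account creation and poll until the request finishes.

    Returns the final CreateAccountStatus dict (State is SUCCEEDED or
    FAILED; AccountId is present on success).
    """
    if org_client is None:
        import boto3  # deferred import so the sketch is testable with a stub
        org_client = boto3.client("organizations")
    status = org_client.create_account(
        AccountName=name, Email=email
    )["CreateAccountStatus"]
    while status["State"] == "IN_PROGRESS":  # creation is asynchronous
        time.sleep(poll_seconds)
        status = org_client.describe_create_account_status(
            CreateAccountRequestId=status["Id"]
        )["CreateAccountStatus"]
    return status
```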
Account Initialization and Configuration
Once you have created the linked account using the CreateAccount API the next step is to apply any and all org-specific initialization and configuration to the new account to get it all ready for action. This step and the Continuous Compliance step can also be managed by the same tool if that is how you decide to architect it.
The key is that this is where we initialize the account with its base configuration. Whether you do that with custom scripting/code, CloudFormation, Terraform, or some amalgamation of those and/or other tools/services is not of paramount importance. What is important is having a way to track those resources and their state. Make sure you keep that in mind when architecting a solution. One nice thing about CloudFormation is that state tracking is built right into the service itself. You can easily list all resources within a CloudFormation stack, and you can include stack Outputs to track any custom data you may generate or derive during the stack launch.
You could do something similar with Terraform through the use of their state files, but it (non-enterprise Terraform) lacks the same API queryability that CloudFormation has built in. Also, it is less transparent to the casual onlooker in the AWS console where resources are originating from. Of course, once you query the resources you will still require a method for determining and tracking the state of those resources. But now we’re getting ahead of ourselves.
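The CloudFormation queryability mentioned above looks roughly like this: pull a stack's resource list and its Outputs straight from the API. A minimal sketch, with an injectable client so the pagination handling can be verified without an AWS account.

```python
def stack_inventory(stack_name, cfn_client=None):
    """Return (resource_summaries, outputs) for one CloudFormation stack."""
    if cfn_client is None:
        import boto3  # deferred import so the sketch is testable with a stub
        cfn_client = boto3.client("cloudformation")
    resources, token = [], None
    while True:  # list_stack_resources pages results via NextToken
        kwargs = {"StackName": stack_name}
        if token:
            kwargs["NextToken"] = token
        page = cfn_client.list_stack_resources(**kwargs)
        resources.extend(page["StackResourceSummaries"])
        token = page.get("NextToken")
        if not token:
            break
    stack = cfn_client.describe_stacks(StackName=stack_name)["Stacks"][0]
    return resources, stack.get("Outputs", [])
```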
This is going to require a service that will allow you to:
- Track the state of resources we care about
- Audit the state of those resources automatically on an ongoing basis
- Report on any configuration drift
- Optionally, automatically remediate drift
Using the AWS CloudTrail and AWS Config services gives us the ability to track changes in real time and tie those changes to a specific user/role. But what about services that are not yet supported by AWS Config? In that case you may want to (as we have done) build a suite of services to handle these tasks. Resources and configurations are registered with a service that tracks their known desired state. Another service is responsible for querying the current state of those items and raising a flag if there is drift. Potentially another service could report on those flagged out-of-compliance resources/settings. Optionally, you could deploy a service that remediates drift in your desired configuration state on all out-of-compliance resources, or possibly just a subset.
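The core of such a drift-check service is the comparison between the registered desired state and whatever the audit pass observes. A hedged sketch of that comparison, deliberately pure data-in/data-out (the dict shapes are illustrative, not a real AWS schema):

```python
def find_drift(desired, actual):
    """Compare registered desired state against observed state.

    Both arguments are {resource_id: {setting: value}} dicts. Returns a list
    of (resource_id, setting, expected, observed) tuples; a resource missing
    from `actual` entirely is reported with setting and observed of None.
    """
    drift = []
    for rid, settings in desired.items():
        observed = actual.get(rid)
        if observed is None:
            drift.append((rid, None, settings, None))  # resource gone missing
            continue
        for key, want in settings.items():
            if observed.get(key) != want:
                drift.append((rid, key, want, observed.get(key)))
    return drift
```

A reporting service would consume these tuples to raise flags, and a remediation service would push the `expected` value back onto the resource.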
At 2nd Watch we’ve actually architected and built out our own Managed Cloud specific implementation of Automated Account Creation and Continuous Compliance. If you would rather focus your energy on your business’s core competencies and not on building foundation cloud management tooling, why not come on board and let us empower you to deliver your product and drive shareholder value in the most secure, stable, and cost-effective way possible? We’ve got the tools and the people to make it happen! Contact us to learn more.
–Ryan Kennedy, Principal Cloud Automation Architect, 2nd Watch
-Craig Monson, Sr Automation Architect
-Lars Cromley, Director of Engineering
As an AWS Premier Partner and audited, authorized APN Managed Service Provider (MSP), 2nd Watch offers comprehensive services to help customers accelerate their journey to the cloud. For many of our customers we not only provide robust Managed Cloud Services, we also resell Amazon products and services. What are the advantages for customers who purchase from a value-added reseller? Why would a customer do this? Who owns the account? These are all great questions and the subject of today's blog post.
I am going to take these questions in reverse order and deal with the ownership issue first, as it is the most commonly misconstrued part of the arrangement. Let me be clear – when 2nd Watch resells Amazon Web Services, our customer “owns” the account. At 2nd Watch we work hard every day to earn our customers’ trust and confidence and thereby, their business. Our pricing model for Managed Cloud Services is designed to leverage the advantages of cloud computing’s consumption model – pay for what you use. 2nd Watch customers who purchase AWS through us have the right to move their account to another MSP or purchase direct from AWS if they are unhappy with our services.
I put the word "own" in quotes above because I think it is worth digressing for a minute on how different audiences interpret that word. Some people see the ownership issue as a vendor lock-in issue, some as an intellectual property concern, and still others as a liability and security requirement. For all of these reasons it is important that we are specific and precise with our language.
With 2nd Watch’s Managed Cloud Services consumption model you are not locked-in to 2nd Watch as your reseller or MSP. AWS Accounts and usage purchased through us belong to the customer, not 2nd Watch, and therefore any intellectual property contained therein is the responsibility and property of the customer. Additionally, as the account owner, a customer’s AWS accounts use a shared responsibility model. With regards to liability and security, however, our role as an MSP can be a major benefit.
Often MSPs "govern or manage" the IAM credentials for an AWS account to ensure consistency, security, and governance. I use the words govern or manage, and not "own," because I want to be clear that the customer still has the right to take back the credentials and overall responsibility for managing each account, which is the opposite of lock-in. So why would a customer want their MSP to manage their credentials? The reason is pretty simple: similar to a managed data center or colocation facility, you own the infrastructure, but you hire experts to handle the day-to-day management for increased limits of liability, security, and enhanced SLAs.
Simply put, if you, as a customer, want your MSP to carry the responsibility for your AWS account and provide service level agreements (complete with financial repercussions), you are going to want to make sure administrative access to the environment is limited with regards to who can make changes that may impact stability or performance. As a 2nd Watch Managed Cloud Services customer, allowing us to manage IAM credentials also comes with the benefit of our secure SOC 2 Type 2 (audited) compliant systems and processes. Often our security controls exceed the capabilities of our customers.
Also worth noting – as we on-board a Managed Cloud Services customer, we often will audit their environment and provide best practice recommendations. These recommendations are aligned with the excellent AWS Well Architected framework and help customers achieve greater stability, performance, security and cost optimization. Our customers have the option of completing the remediation or having 2nd Watch perform the remediation. Implementing best practices for managing user access along with leveraging cutting edge technology results in a streamlined journey to the cloud.
So now we have addressed the question of who owns the account, but we haven’t addressed why a customer would want to procure AWS through a partner. First, see my earlier blog post regarding Cloud Cost Complexity for some background. Second, buying AWS through 2nd Watch comes with several immediate advantages:
- All services are provided at AWS market rates or better.
- Pass through all AWS volume tier discounts and pricing
- Pass through AWS Enterprise Agreement terms, if one exists
- Solution-based and enhanced SLAs (above and beyond what AWS provides) shaped around your business requirements
- Familiarity with your account – our two U.S.-based NOCs are staffed 24x7x365 and have access to a comprehensive history of your account and governance policies.
- Access to enterprise-class support, including 2nd Watch's multiple dedicated AWS Technical Account Managers, with Managed Cloud Services agreements
- Consolidate usage across many AWS accounts (see AWS volume discount tiers above)
- Consolidated billing for both Managed Cloud Services and AWS Capacity
- Access to our Cloud Management Platform, a web-based console that greatly simplifies the management and analysis of AWS usage
- Ability to support complex show-back or charge-back bills for different business units or departments as well as enterprise-wide roll-ups for a global view
- Ability to allocate Volume and Reserved Instance discounts to business units per your requirements
- Set budgets with alerts, trend analysis, tag reporting, etc.
- Ability to provide Reserved Instance recommendations and management services
- Helps improve utilization and prevent spoilage
- You can select the level of services for Managed Cloud Services on any or all accounts – you can consolidate your purchasing without requiring services you don’t need.
- Assistance with AWS Account provisioning and governance – we adhere to your corporate standards (and make pro-active recommendations).
In short, buying your AWS capacity through 2nd Watch as your MSP is an excellent value that will help you accelerate your cloud adoption. We provide the best of AWS with our own services layered on top to enhance the overall offering. Please contact us for more information about our Managed Cloud Services including Managed AWS Capacity.
-Marc Kagan, Director, Account Management
When people first hear about the cloud, they typically envision some nebulous server in the sky. Moving apps to the cloud should be a piece of cake, they think. Simply pick them up, stick them in the cloud, and you’re done.
Reality, of course, is quite different. True, for simple, monolithic applications, you could provision a single cloud instance and simply port the code over.
The problem is, today’s applications are far from simple and rarely monolithic. Even a simple web app has multiple pieces, ranging from front-end web server code interacting with application code on the middle tier, which in turn talks to the database underneath.
However, in the enterprise context, even these multi-tier web apps are more the exception than the rule. Older enterprise applications like ERP run on multiple servers, leveraging various data sources and user interfaces, communicating via some type of middleware.
Migrating such an application to the cloud is a multifaceted, complex task that goes well beyond picking it up and putting it in the cloud. In practice, some components typically remain on premises while others may move to the cloud, creating a hybrid cloud scenario.
Furthermore, quite often developers must rewrite those elements that move to the cloud in order to leverage its advantages. After all, the cloud promises to provide horizontal scalability, elasticity, and automated recovery from failure, among other benefits. It’s essential to architect and build applications appropriately to take advantage of these characteristics.
However, not all enterprise cloud challenges necessarily involve migrating older applications to cloud environments. For many organizations, digital transformation is the driving force, as customer preferences and behavior drive their technology decisions – and thus digital often begins with the customer interface.
When digital is the priority, enterprises cannot simply build a web site and call it a day, as they may have done in the 1990s. Even adding mobile interfaces doesn’t address customer digital demands. Instead, digital represents an end-to-end rethink of what it means to put an application into the hands of customers.
Today’s modern digital application typically includes multiple third-party applications, from the widgets, plugins, and tags that all modern enterprise web pages include, to the diversity of third-party SaaS cloud apps that support the fabric of modern IT.
With this dynamic complexity of today’s applications, the boundaries of the cloud itself are becoming unclear. Code may change at any time. And there is no central, automated command and control that encompasses the full breadth of such applications.
Instead, management of modern cloud-based, digital applications involves a never-ending, adaptive approach to management that maintains the performance and security of these complex enterprise applications.
Without such proactive, adaptive management, the customer experience will suffer – and with it the bottom line. Furthermore, security and compliance breaches become increasingly likely as the complexity of these applications grows.
It’s easy to spot the irony here. The cloud promised greater automation of the operational environment, and with increased automation we expected simpler management. But instead, complexity exploded, thus leading to the need for more sophisticated, adaptive management. But in the end, we’re able to deliver greater customer value – as long as we properly manage today’s end-to-end, cloud-centric digital applications.
-Jason Bloomberg, President, Intellyx
Copyright © Intellyx LLC. 2nd Watch is an Intellyx client. Intellyx retains final editorial control of this article.
With the New Year comes the resolutions. When the clock struck midnight on January 1st, 2015, many people turned the page on 2014 and made a promise to do an act of self-improvement. Oftentimes it's eating healthier or going to the gym more regularly. With the New Year, I thought I could put a spin on a typical New Year's resolution and make it about AWS.
How could you improve on your AWS environment? Without getting too overzealous, let’s focus on the fundamental AWS network infrastructure, specifically an AWS Virtual Private Cloud (VPC). An AWS VPC is a logically isolated, user controlled, piece of the AWS Cloud where you can launch and use other AWS resources. You can think of it as your own slice of AWS network infrastructure that you can fully customize and tailor to your needs. So let’s talk about VPCs and how you can improve on yours.
- Make sure you're using VPCs! The simple act of implementing a VPC can put you way ahead of the game. VPCs provide a ton of customization options, from defining your own VPC size via IP addressing, to controlling subnets, route tables, and gateways for directing network flow between your resources, to defining fine-grained security using security groups and network ACLs. With VPCs you can control things that simply can't be done when using EC2-Classic.
- Are you using multiple Availability Zones (AZs)? An AZ is a distinct, isolated location engineered to be insulated from failures in other AZs. Make sure you take advantage of multiple AZs within your VPC. Oftentimes instances are just launched into a VPC with no rhyme or reason. It is great practice to use the low-latency nature and engineered isolation of AZs to facilitate high availability or disaster recovery scenarios.
- Are you using VPC security groups? "Of course I am." Are you using network ACLs? "I know they are available, but I don't use them." Are you using AWS Identity and Access Management (IAM) to secure access to your VPCs? "Huh, what's an IAM?!" Don't fret; most environments don't take advantage of all the tools available for securing a VPC. However, now is the time to reevaluate your VPC and see if you can, or even should, use these security options. Security groups are ingress and egress firewall rules you place on individual AWS resources in your VPC and one of the fundamental building blocks of an environment. Now may be a good time to audit your security groups to make sure you're following the principle of least privilege, i.e., not allowing any access or rules that are not absolutely needed. Network ACLs work at the subnet level and may be useful in some cases. In larger environments, IAM may be a good idea if you want more control over how resources interact with your VPC. In any case, there is never a bad time to reevaluate the security of your environment, particularly your VPC.
- Clean up your VPC! One of the most common issues in AWS environments is resources that are not being used. Now may be a good time to audit your VPC, take note of what instances you have out there, and make sure you don't have resources racking up unnecessary charges. It's a good idea to account for all instances, leftover EBS volumes, and even old AMIs that may be sitting in your account. There are also things like extra EIPs, security groups, and subnets that can be cleaned up. One great tool to use is AWS Trusted Advisor. Per the AWS service page, "Trusted Advisor inspects your AWS environment and finds opportunities to save money, improve system performance and reliability, or help close security gaps."
- Bring your VPC home. AWS, being a public cloud provider, allows you to create VPCs that are isolated from everything, including your on-premises LAN or datacenter. Because of this isolation, all network activity between the user and their VPC happens over the internet. One of the great things about VPCs is the many types of connectivity options they provide. Now is the time to reevaluate how you use VPCs in conjunction with your local LAN environment. Maybe it is time to set up a VPN and turn your environment into a hybrid cloud and physical environment, allowing all communication to pass over a private network. You can even take it one step further by incorporating AWS Direct Connect, a service that allows you to establish private connectivity between AWS and your datacenter, office, or colocation environment. This can help reduce your network costs, increase bandwidth throughput, and provide a more consistent overall network experience.
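The multi-AZ advice above can be baked into your provisioning scripts. A hedged sketch: create one subnet per Availability Zone at VPC build time. The CIDR blocks and AZ names are illustrative assumptions; the injectable client is there so the pairing logic can be checked without an AWS account.

```python
def subnet_per_az(vpc_id, cidr_blocks, azs, ec2_client=None):
    """Create one subnet per (CIDR block, AZ) pair; return the new subnet ids."""
    if ec2_client is None:
        import boto3  # deferred import so the sketch is testable with a stub
        ec2_client = boto3.client("ec2")
    subnet_ids = []
    for cidr, az in zip(cidr_blocks, azs):
        # Each subnet is pinned to a distinct AZ for fault isolation
        resp = ec2_client.create_subnet(
            VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az
        )
        subnet_ids.append(resp["Subnet"]["SubnetId"])
    return subnet_ids
```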
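The least-privilege audit of security groups suggested above is easy to start scripting. A minimal sketch that scans data shaped like the EC2 `describe_security_groups` response and flags any ingress rule open to the entire internet; a real audit would of course apply more policy than just this one check.

```python
def open_ingress_rules(security_groups):
    """Return (group_id, from_port, to_port) for ingress rules open to 0.0.0.0/0."""
    findings = []
    for sg in security_groups:
        for perm in sg.get("IpPermissions", []):  # ingress permissions
            for rng in perm.get("IpRanges", []):
                if rng.get("CidrIp") == "0.0.0.0/0":  # world-open rule
                    findings.append(
                        (sg["GroupId"], perm.get("FromPort"), perm.get("ToPort"))
                    )
    return findings
```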
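For the cleanup step above, a short script can surface two common sources of waste: unattached EBS volumes and unassociated Elastic IPs. A hedged sketch (the injectable client is for offline testing; note EC2-Classic addresses lack an AllocationId, so this assumes VPC-style EIPs):

```python
def unused_resources(ec2_client=None):
    """Return ([unattached volume ids], [idle EIP allocation ids])."""
    if ec2_client is None:
        import boto3  # deferred import so the sketch is testable with a stub
        ec2_client = boto3.client("ec2")
    # "available" status means the volume is not attached to any instance
    volumes = ec2_client.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )["Volumes"]
    addresses = ec2_client.describe_addresses()["Addresses"]
    idle_eips = [a["AllocationId"] for a in addresses
                 if "AssociationId" not in a and "InstanceId" not in a]
    return [v["VolumeId"] for v in volumes], idle_eips
```

The same pattern extends to old AMIs, empty security groups, and stray subnets.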
These are just a few things you can do when reevaluating your AWS VPC for the New Year. By following these guidelines you can gain efficiencies in your environment you didn’t have before and can rest assured your environment is in the best shape possible for all your new AWS goals of 2015.
-Derek Baltazar, Senior Cloud Engineer
With every new year comes a new beginning. The holidays give us a chance to reflect on our achievements from the previous year, as well as give us a benchmark for what we want to accomplish in the following year. For most individuals, weight loss, quitting a bad habit, or even saving money tops the list of resolutions. For companies, the goals are a little more straightforward and quantitative. Revenue goals are set, quotas are established, and business objectives are defined. The success of a company is entrenched in these goals and will determine, positively or negatively, how a company is valued.
Today's businesses are more complex than ever, and we can thank new technologies, emerging markets, and the ease of globalization for driving these new trends. One of the most impactful and fastest-adopted technologies helping businesses in 2015 is the public cloud.
What's amazing, though, is how unclear public cloud adoption planning still is for most businesses. Common questions such as "Is my team staffed accordingly to handle this technology change?" or "How do I know if I'm architecting correctly for my business?" come up often. These questions are extremely common with new technologies, but adoption doesn't have to be difficult if you take a few simple steps.
- Plan Ahead: Guide your leadership to see that now is the time to review the current technology inventory being utilized by the company and strategically outline what it will take to help the company become more agile, cost effective, and leverage the most robust technologies in the New Year.
- Over communicate: By talking with all the necessary parties, you will turn an unknown topic into water cooler conversation. Involve as many people as possible and ask for feedback constantly. This way, if there is anyone that is not on-board with this technology shift, you will have advocates across the organization helping your cause.
- Track your progress: Keep an active log of your adoption process, pitfalls, to-dos, and active contributors. Establish a weekly cadence to review past success and upcoming agendas. Focus on small wins, and after a while you will see amazing results for your achievements.
- Handle problems with positivity: Technology changes are never easy for an organization, but take each problem as an opportunity to learn. If something isn’t working, it’s probably for good reason. Review what went wrong, learn from the mistakes, and make sure they don’t repeat themselves. Each problem should be welcomed, addressed and reviewed accordingly.
- Stay diligent: Rome wasn’t built in a day, and neither will your new public-cloud datacenter be. Review your plan, do constant check points against your cloud strategy, follow your roadmap and address problems as soon as they come up. By staying focused and tenacious you will be successful in your endeavors.
Happy 2015, and let’s make it a great year.
-Blake Diers, Alliance Manager
Public cloud is growing. Private cloud is not. Big Data and Internet of Things is hot. Virtualization is not. These are just a few of the findings of the 2nd Watch enterprise cloud trends survey, just released. More than 400 IT managers and executives from large companies participated, and 64% of them said that they will spend at least 15% more in 2015 on public cloud infrastructure. All signs point to the fact that the public cloud is continuing to grow. Q3 earnings statements from both Amazon and Microsoft for their respective cloud services, AWS and Azure, were robust.
Companies are going to need some help though. As always, IT skills are at a premium. In our own conversations with customers, supported by the survey, CIOs and CTOs are looking for bright engineers who know how to manage and optimize workloads in the cloud. As well, the ability to natively design applications for the public cloud will be a critical competitive advantage in the coming year. The opportunity is there for any company – regardless of your size or industry. Large consumer goods are innovating with mobile apps that require not just savvy developers, but an IT organization that can leverage public cloud services to mash-up data and deliver cool new services that drive brand loyalty.
The trick is that each provider, such as AWS, operates differently. CIOs need specialists, and when they're hard to find, using third-party experts can reduce risk and deliver faster ROI. 2nd Watch has years of diversified experience across many different project types, along with regimented training and continued-learning programs for its employees, and can supplement your IT staff.
A parallel challenge is that few large companies are ready to migrate their entire data center to the cloud just yet. With legacy applications and customer requirements, large companies typically still require or desire some systems to be hosted in their own data centers. Thus, cloud providers and technologies that can integrate with on-premises data centers will see ample demand next year. Hybrid cloud terminology will still be popular with enterprise IT in 2015, according to our survey. However, this is not an end state but a state of transition: maintaining physical data centers while migrating to the public cloud.
The recent news that the AWS OpsWorks application management service (based on Chef) is now available for managing public cloud and on-premise servers is one sign of the growing flexibility that CIOs will have in managing workloads across their environments. Companies want to see more industrial-strength management tools that can bridge internal data centers and public cloud data centers and deliver a unified picture of the entire infrastructure.
IT executives are also looking for more help on the security front. The major public cloud providers are already investing heavily in this area, particularly AWS, but startups will play a significant role in bringing new endpoint security solutions to market in 2015. Survey participants said that security tools and services are the category most underinvested in by cloud technology firms. I believe they will say differently in a year's time. Software companies also have opportunities in modern IT management, with many companies demanding more automated options for performance monitoring, system management, and change management in the cloud.
If you are interested in learning more about the best-in-class cloud management tools that are available today, schedule your free 2nd Watch Workshop now*.
*Applies to Enterprise Customers new to 2nd Watch with a specific use case to build the workshop around.
Download the full Infographic for more trends to watch for in 2015.
Read more on Enterprise Cloud Trends for 2015 in 2nd Watch CTO, Kris Bliesner’s, article in Data Center Knowledge – Planning for the Future: Enterprise Cloud Trends in 2015.
-Jeff Aden, EVP of Marketing & Strategic Business Development