
AWS a Leader Again in 2016 Gartner Magic Quadrant

Gartner’s 2016 Magic Quadrant for Cloud Infrastructure as a Service, Worldwide was released today, evaluating 10 vendors on ‘completeness of vision’ and ‘ability to execute.’ Although AWS continues to face more competition, it remains a market share leader in the industry. Gartner placed AWS furthest in completeness of vision and highest in ability to execute.

Because the market for cloud IaaS is in a state of upheaval, with many service providers shifting their strategies after failing to gain enough market traction, 2nd Watch recommends utilizing Gartner’s Magic Quadrant to help eliminate the confusion around the various providers in the Infrastructure as a Service sector.

Access the Gartner Report

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

-Nicole Maus, Marketing Manager


And the top AWS products of Q1 2016 are…

AWS is an innovation lab. The world’s top cloud provider releases hundreds of updates and dozens of major services every year. So, which products are companies loving right now?

We analyzed data from our customers, across a combined 100,000+ instances running monthly, for Q1 of 2016. The most popular AWS products, represented by the percentage of 2nd Watch customers deploying them in the first quarter, include Amazon S3 for storage and Data Transfer (100% each), EC2 (99%), SNS or Simple Notification Service (89%) and Key Management Service for encryption (87%). These services are standard in most AWS deployments, and have been consistent in the last year or so – no surprises here.

Perhaps less predictable was the use of other AWS products, such as Redshift, the data warehouse service introduced in 2012 as a low-cost alternative to systems from HP, Oracle and IBM. The fact that 17% of our customers are using Redshift demonstrates how quickly innovative cloud technology can carve a strong position in a legacy software market. Enterprises are starting to move away from legacy systems to Redshift because it can handle massive quantities of data with exceptional response times.

Other relatively new AWS products making rapid progress with AWS users include the high-performing NoSQL database service DynamoDB (27%); Lambda, an automated compute management platform (21%); and WorkSpaces, a secure virtual desktop service (19%).

Just three years ago, enterprises were primarily using the core compute and storage services on AWS. As companies become more comfortable moving business-critical IT assets into the cloud, they’re more likely to leverage the broader AWS portfolio. We expect growth in areas such as database, desktop and management tools to continue in the coming months.

Download the Top 30 AWS Products infographic to find out which others made the list.

-Jeff Aden, EVP Strategic Business Development & Marketing


Database Migration Service for RDS

For database administrators and engineers, migrating a database can be a major headache. It’s such a headache that it actually prohibits some teams from migrating to AWS’ Relational Database Service (RDS), even though doing so would save them time and money in the long run.

Imagine you’re a DBA for Small Business Y and you want to manage your data in RDS, but you have three terabytes of data with a ton of tables, foreign keys and many dependencies. Back in the day, migrating your SQL Server database to RDS might have looked something like this:

  1. Coordinate with Product Leads to find a time when your business can handle up to a 24-hour outage of the source database.
  2. Dump all the existing data into a backup.
  3. Restore the data on an intermediary EC2 SQL Server instance.
  4. Connect to the new RDS instance.
  5. Generate metadata script from the source or intermediary instance.
  6. Execute metadata script on target RDS instance.
  7. Migrate the data to the RDS instance using SQL Server’s Import tool.
  8. Optional: Encounter complications such as import failures, loss of metadata integrity, loss of data, and extremely slow import speeds.
  9. Cry a lot and then sleep for days.

Enter AWS Database Migration Service. This new tool from AWS allows DBAs to complete migrations to RDS, or even to a database instance on EC2, with minimal steps and minimal downtime to the source database. What does that mean for you? No 2AM migrations and no tears.

Migrations with AWS DMS have three simple steps (a scripted sketch follows the list):

  1. Provision a replication instance
  2. Define source and target endpoints
  3. Create one or more tasks to migrate data between source and target
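
For those who would rather script these steps than click through the console, the same workflow is available via the API. Below is a minimal sketch using Python and boto3; the instance class, endpoint identifiers, hostnames and credentials are placeholders you would replace with your own, and in practice you would wait for the replication instance to become available before creating the task.

    import boto3

    dms = boto3.client('dms', region_name='us-east-1')

    # Step 1: Provision a replication instance to do the heavy lifting.
    instance = dms.create_replication_instance(
        ReplicationInstanceIdentifier='my-migration-instance',  # placeholder
        ReplicationInstanceClass='dms.t2.medium',
        AllocatedStorage=100,
    )

    # Step 2: Define the source and target endpoints (SQL Server to RDS here).
    source = dms.create_endpoint(
        EndpointIdentifier='source-sqlserver', EndpointType='source',
        EngineName='sqlserver', ServerName='source-db.example.com',
        Port=1433, Username='admin', Password='REPLACE_ME',
        DatabaseName='mydb',
    )
    target = dms.create_endpoint(
        EndpointIdentifier='target-rds', EndpointType='target',
        EngineName='sqlserver',
        ServerName='mydb.xxxxxxxx.us-east-1.rds.amazonaws.com',
        Port=1433, Username='admin', Password='REPLACE_ME',
        DatabaseName='mydb',
    )

    # Step 3: Create a task that loads the existing data, then keeps
    # replicating changes until you are ready to cut over.
    dms.create_replication_task(
        ReplicationTaskIdentifier='my-migration-task',
        SourceEndpointArn=source['Endpoint']['EndpointArn'],
        TargetEndpointArn=target['Endpoint']['EndpointArn'],
        ReplicationInstanceArn=instance['ReplicationInstance']['ReplicationInstanceArn'],
        MigrationType='full-load-and-cdc',
        TableMappings='{"rules": [{"rule-type": "selection", "rule-id": "1", "rule-name": "1", "object-locator": {"schema-name": "%", "table-name": "%"}, "rule-action": "include"}]}',
    )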

The service automates table-by-table migration into your target database without requiring backups, dump files, or manual administration of an intermediary database server. In addition, you can use the service to continue replicating changes from the source environment until you are ready to cut over. That means your application’s downtime will be just a few minutes instead of several hours or even days.

Another great feature of DMS is that you can watch the progress of each table’s migration in the AWS console. The dashboard allows users to see, in real time, how the migration is going and see exactly where any breakages occur.

[Screenshots: AWS Database Migration Service console showing per-table migration progress]

If you are planning to use the AWS Database Migration Service soon, here are a few tips to streamline your process:

  • In your target instance, set up your table structure prior to migrating the data; the service will not automatically create your foreign keys and indexes. Set up your metadata using the AWS Schema Conversion Tool (or a simple mysqldump), and make sure to use the truncate option during schema import so that the tables you create aren’t wiped.
  • If you think you may have extra-large LOBs, use Full LOB Mode to avoid data truncation (a settings sketch follows this list).
  • Additional best practices for using DMS can be found here.
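
As an illustration of the Full LOB Mode tip above, the switch lives in the task settings JSON that can be supplied when the replication task is created. A sketch of the relevant fragment in Python, with an example chunk size:

    import json

    # Fragment of the DMS replication task settings. With FullLobMode
    # enabled, large objects are migrated in LobChunkSize (KB) pieces
    # instead of being truncated at the limited-mode maximum size.
    task_settings = {
        "TargetMetadata": {
            "FullLobMode": True,
            "LobChunkSize": 64,  # example value, in KB
        }
    }

    # Passed as a JSON string via the ReplicationTaskSettings
    # parameter of create_replication_task.
    settings_json = json.dumps(task_settings)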

Enjoy!

-Trish Clark, Oracle DBA

Facebooktwittergoogle_pluslinkedinmailrss

Enter AWS Lambda

AWS has managed to transform the traditional datacenter model into a feature-rich platform and has been constantly adding new services to meet business and consumer needs. Just as virtualization changed the way infrastructure is built and managed, the ‘serverless’ execution model has become a viable method of reducing costs and simplifying management. A few years ago, hosting a typical application or service required the setup and management of physical hardware, operating systems and application code. AWS’ offerings have grown to include services such as RDS, SES, DynamoDB and ElastiCache, which provide a subset of functionality without requiring you to manage the entire underlying infrastructure on which those services actually run.

Enter AWS Lambda.

Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can use AWS Lambda to extend other AWS services with custom logic, or create your own back-end services that operate at AWS scale, performance, and security.

In a nutshell, Lambda provides a service that executes custom code without having to manage the underlying infrastructure on which that code is executed. The administration of the underlying compute resources (server and operating system maintenance, capacity provisioning, automatic scaling, code monitoring, logging, and code and security patch deployment) is eliminated. With AWS Lambda, you pay only for what you use, and are charged based on the number of requests for your functions and the time your code executes. This eliminates the overhead of paying for instances (by the hour or reserved) and their administration. Why build an entire house if all you need is a kitchen so you can cook dinner? In addition, the service automatically scales to meet capacity requirements. Again, less complexity and overhead than managing EC2 Auto Scaling groups.

Lambda’s compute service currently supports Node.js (JavaScript), Python, and Java (Java 8 compatible). Your code can include existing libraries, even native ones. Code is executed as what is referred to as a Function.
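
To make the Function concept concrete, here is about the smallest Lambda function you can write in Python. The handler signature, an event payload plus a context object, is the same no matter which event source invokes it:

    # A minimal Lambda Function (Python 2.7, the version Lambda
    # supported at the time). Lambda invokes the handler with the
    # triggering event (a dict) and a context object exposing runtime
    # details such as the remaining execution time.
    def lambda_handler(event, context):
        print('Received event: {}'.format(event))  # goes to CloudWatch Logs
        return {'status': 'ok'}

When creating the function, you point Lambda at this handler by name, e.g. filename.lambda_handler.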

Here’s AWS’ Jeff Barr’s simple description of the service and how it works:

You upload your code and then specify context information to AWS Lambda to create a function. The context information specifies the execution environment (language, memory requirements, a timeout period, and IAM role) and also points to the function you’d like to invoke within your code. The code and the metadata are durably stored in AWS and can later be referred to by name or by ARN (Amazon Resource Name). You can also include any necessary third-party libraries in the upload (which takes the form of a single ZIP file per function).

After uploading, you associate your function with specific AWS resources (a particular S3 bucket, DynamoDB table, or Kinesis stream). Lambda will then arrange to route events (generally signifying that the resource has changed) to your function.

When a resource changes, Lambda will execute any functions that are associated with it. It will launch and manage compute resources as needed in order to keep up with incoming requests. You don’t need to worry about this; Lambda will manage the resources for you and will shut them down if they are no longer needed.

Lambda Functions can be invoked by triggers from changes in state or data in services such as S3, DynamoDB, Kinesis, SNS and CloudTrail, after which the output can be sent back to those same services (though it does not have to be). Lambda handles listening, polling, queuing and auto-scaling, and spins up as many workers as needed to match the rate of change of the source data.

A few common use cases include:

  • S3 + Lambda (Dynamic data ingestion) – Image re-sizing, Video Transcoding, Indexing, Log Processing, as sketched after this list
  • Direct Call + Lambda (Serverless backend) – Microservices, Mobile backends, IoT backends
  • Kinesis + Lambda (Live Stream Processing) – Transaction Processing, Stream analysis, Telemetry and Metering
  • SNS + Lambda (Custom Messages) – Automating alarm responses, IT Auditing, Text to Email Push
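
For the S3 case in particular, the event handed to the Function contains one or more records identifying the objects that changed. A hedged sketch of a handler that walks those records, with the real work left as a placeholder for whatever your use case calls for:

    import urllib  # Python 2.7; S3 object keys arrive URL-encoded

    def lambda_handler(event, context):
        # Each record describes a single S3 object-created event.
        for record in event['Records']:
            bucket = record['s3']['bucket']['name']
            key = urllib.unquote_plus(record['s3']['object']['key'])
            # Placeholder for the actual work: resize the image,
            # transcode the video, index or parse the object, etc.
            print('Processing s3://{}/{}'.format(bucket, key))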

Additionally, data can be sent in parallel to separate Functions to decrease the amount of time required for data that must be processed or manipulated multiple times. This could theoretically be used to perform real-time analytics and data aggregation from a source such as Kinesis.

Function overview

  • Memory can be set from 128MB to 1GB, in 64MB increments. Disk, network and compute resources are provisioned based on the memory footprint. Lambda tells you how much memory is used, so this setting can be tuned.
  • Functions can be invoked on demand via the CLI and AWS Console, or subscribed to one or multiple event sources (e.g. S3, SNS), and the same Function can be reused across those event sources.
  • Granular permissions can be applied via IAM roles. At a minimum, logging to CloudWatch is recommended.
  • Limits on resource allocation, such as 512MB of /tmp space, 1,024 file descriptors and a 50MB deployment package size, can be found at http://docs.aws.amazon.com/lambda/latest/dg/limits.html.
  • Multiple deployment options exist, including direct authoring via the AWS Console, packaging code as a zip, and 3rd-party plugins (Grunt, Jenkins); a scripted example follows this list.
  • Functions are stateless, so persistence depends on another service such as S3 or DynamoDB.
  • Monitoring and debugging can be accomplished using the Console Dashboard to view CloudWatch metrics such as requests, errors, latency and throttling.
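
Deployment can also be scripted. A minimal boto3 sketch that uploads a zipped handler and sets the memory and timeout knobs mentioned above; the function name, role ARN and file names are placeholders:

    import boto3

    lambda_client = boto3.client('lambda', region_name='us-east-1')

    # function.zip contains handler.py defining lambda_handler, as in
    # the earlier sketch.
    with open('function.zip', 'rb') as f:
        zipped_code = f.read()

    lambda_client.create_function(
        FunctionName='my-function',  # placeholder
        Runtime='python2.7',         # Node.js and Java are also supported
        Role='arn:aws:iam::123456789012:role/lambda-exec',  # placeholder ARN
        Handler='handler.lambda_handler',
        Code={'ZipFile': zipped_code},
        MemorySize=128,  # tunable in 64MB increments, as noted above
        Timeout=10,      # seconds
    )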

Invoking Lambda functions can be achieved using Push or Pull methods. In the event of a Push from S3 or SNS, retries occur automatically three times and invocations are unordered; one event equals one Function invocation. Pull, on the other hand (Kinesis and DynamoDB), is ordered and will retry indefinitely until the data expires. Resource policies (used in the Push model) can be defined per Function and allow for cross-account access. IAM roles (used for Pull) give the execution role permission to read data from a particular stream.

Pricing

Lambda uses a fine-grained pricing model based on both the number of requests made and the execution time of those requests. Each month, the first 1 million requests are free, with a $0.20 charge per 1 million requests thereafter. Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100ms, and the charge takes into account the amount of memory allocated to the function: the execution cost is $0.00001667 for every GB-second used.
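
As a worked example of that model (the workload numbers here are invented for illustration, and this sketch also assumes the monthly free tier of 400,000 GB-seconds of compute time that AWS lists alongside the free requests):

    # Hypothetical workload: 5 million requests a month, each running
    # 200ms with 512MB of memory allocated.
    requests = 5000000
    duration_s = 0.2   # already a multiple of the 100ms rounding unit
    memory_gb = 0.5

    # Requests: first 1M free, then $0.20 per million.
    request_cost = max(requests - 1000000, 0) / 1000000.0 * 0.20  # $0.80

    # Duration: GB-seconds used, less the 400,000 GB-s free tier,
    # at $0.00001667 per GB-second.
    gb_seconds = requests * duration_s * memory_gb            # 500,000 GB-s
    duration_cost = max(gb_seconds - 400000, 0) * 0.00001667  # ~$1.67

    print('Estimated monthly cost: $%.2f' % (request_cost + duration_cost))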

Additional details regarding the service can be found at https://aws.amazon.com/lambda/. If you need help getting started with the service, contact us at 2nd Watch.

-Ryan Manikowski, Cloud Consultant

AWS Launches Visual App for CloudFormation

The new visualization tool is the first native visualization for AWS infrastructure, and a groundbreaking development in the adoption of infrastructure as code.

Cloud developers and architects use AWS CloudFormation to design, launch and update an AWS application or service architecture stack for repeatable workloads. CloudFormation provides templates, written in JSON, for creating an entire environment, such as a website. Developers don’t need to figure out the order for provisioning AWS services or worry about dependencies, as the free tool handles all of the behind-the-scenes configuration.

This is a huge help when you have workloads that must launch over and over again, saving time on provisioning and configuration. You can actually launch “with the click of a button.” On the other hand, CloudFormation files typically consist of thousands of lines of code, which doesn’t make them easy to modify or share.
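
That “click of a button” can just as easily be an API call. Below is a minimal sketch using Python and boto3 that launches a stack from a toy inline template creating a single S3 bucket; real templates are far larger, which is exactly the readability problem Designer tackles:

    import boto3

    cfn = boto3.client('cloudformation', region_name='us-east-1')

    # A toy template; production templates often run to thousands of lines.
    template = '''{
      "AWSTemplateFormatVersion": "2010-09-09",
      "Resources": {
        "WebsiteBucket": {"Type": "AWS::S3::Bucket"}
      }
    }'''

    cfn.create_stack(
        StackName='my-website-stack',  # placeholder name
        TemplateBody=template,
    )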

Now, with the new AWS CloudFormation Designer, customers can view the details behind an AWS environment through a simpler, graphical view. You don’t need JSON expertise to collaborate on design and planning. The visual drag-and-drop interface is remarkably easy to use, compared with how cloud developers have been working thus far. Instead of writing several lines of code, you can draw a line on the screen connecting one resource to another.

The big picture

AWS CloudFormation Designer will expand the universe of IT people who can write AWS scripts and manage workloads, since they won’t need to know JSON. That means more hands on deck for new projects. After you create the visualization in the tool, you can launch the environment right then and there. We’ve seen that CloudFormation Designer reduces the time to create and launch a new workload from hours to minutes.

What’s also intriguing about this new feature is that it further affirms the infrastructure-as-code mindset. Consider the impact that IDEs such as Visual Studio and Borland’s tools have had on Microsoft .NET and Java developers: they can write code faster and with fewer errors. Similarly, AWS has in effect created an integrated development environment (IDE)-type tool for the cloud. This is a radical departure for the public cloud leader; AWS is known for its boxes and pipes, not sophisticated visual tools.

AWS CloudFormation Designer is a refreshing development for those who believe public cloud infrastructure will take over the world. One of the challenges with cloud computing adoption is that infrastructure as a service remains a new language to many corporate IT departments. AWS has made a smart move here by introducing more transparency to its platform, and opening the door for IT generalists to get involved. We’re excited that our customers and other companies will benefit from this new view of AWS.

-Kris Bliesner, Founder and CTO


AWS Device Farm Simplifies Mobile App Testing

Customized digital marketing on mobile devices gets a lot easier

When marketers think digital, they think mobile, but the best way to reach people on their smartphones is an app, not a website. Still, mobile apps are a double-edged sword for companies. They deliver more users with higher engagement but are also harder and more costly to develop and test. Given that mobile devices are inherently connected, the first cloud services emerged to simplify app development. Mobile backends and SDKs like Facebook Parse, Kumulos or AWS Mobile Services tackled the backend services: data management, synchronization, notification and analytics. Real-world testing is the latest such service, courtesy of AWS Device Farm, which provides virtual access to myriad mobile devices and operating environments. Device Farm, released in July, allows developers to easily test apps on hundreds of combinations of hardware and OS (with a constantly growing list) using either custom test scripts or a standard AWS compatibility suite. Although the service launched targeting the most acute problem, testing on fragmented Android, it now supports iOS as well. But the cloud service isn’t just able to provide instant access to a multitude of devices for hardware-specific tests; it also allows testing on multiple devices in parallel, which greatly cuts test time.

Bootstrapping mobile development with cloud services can yield huge dividends for organizations wanting to better connect with customers, employees and partners. Not only are there more mobile than desktop users, but their usage is heavier. The average adult in the US spends almost three hours per day consuming digital content on a mobile device, 11% more than just last year. This means that businesses without a mobile strategy don’t have a digital strategy.

[Chart: mobile engagement growth (Mary Meeker)]

The problem is that providing a richer, customized, differentiated experience requires building a custom mobile app – a task that’s made more daunting by the cornucopia of devices in use. It means supporting multiple versions of two operating systems and countless hardware variations. Although Apple users generally upgrade to the latest iOS release within months, the latest Android development stats show four versions with at least 13% usage. Worse yet, a 2015 OpenSignal survey of hundreds of thousands of Android devices found more than 24,000 distinct device types. Such diversity makes developing and thoroughly testing mobile apps vastly more complex than a website or PC application. One mobile app developer does QA testing on 400 different Android devices for every app – a testing nightmare that’s even worse when you consider that the mobile app release cycle is measured in weeks, not months. If ever a problem was in need of a virtualized cloud service, this is it; and AWS has delivered.

[Chart: Android OS version usage, August 2015]

Device Farm takes an app archive (.apk file for Android or .ipa for iOS) and tests it against either custom scripts or an AWS compatibility suite using a fuzz test of random events. Test projects comprise the actual test suite (Device Farm supports five scripting languages), a device pool (specific hardware and OS versions) and any predefined device state, such as other installed apps, required local data and device location. Aggregate results are presented on a summary screen, with details (including any screenshots, performance data and log file output) available for each device.

[Screenshots: Device Farm test project configuration]

Device Farm doesn’t replace the need for in-field beta testing and mobile app instrumentation to measure real-world usage, performance and failures. However, with thorough, well-crafted test suites and a diverse mix of device types, it promises to dramatically improve the end-user experience by eliminating problems that only manifest when running on actual hardware instead of an IDE simulator.

Developers can automate and schedule tests using the Device Farm API or via Jenkins using the AWS plugin. Like every AWS service, pricing is usage based; the metric is total test time per device, at $0.17 per device minute. By judiciously selecting the device pool, however, it’s much cheaper than buying and configuring the actual hardware.
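
As a rough illustration of that automation, here is a boto3 sketch that registers an app binary and schedules the built-in fuzz test; the project and device pool ARNs are placeholders, and in practice you upload the .apk/.ipa to the pre-signed URL returned by create_upload and wait for processing before scheduling the run:

    import boto3

    # Device Farm lives in us-west-2.
    df = boto3.client('devicefarm', region_name='us-west-2')

    # Register the app binary; the response includes a pre-signed URL
    # for uploading the actual .apk file.
    upload = df.create_upload(
        projectArn='arn:aws:devicefarm:us-west-2:123456789012:project:EXAMPLE',
        name='my-app.apk',
        type='ANDROID_APP',
    )

    # Schedule the built-in fuzz test across the chosen device pool.
    run = df.schedule_run(
        projectArn='arn:aws:devicefarm:us-west-2:123456789012:project:EXAMPLE',
        appArn=upload['upload']['arn'],
        devicePoolArn='arn:aws:devicefarm:us-west-2:123456789012:devicepool:EXAMPLE',
        name='nightly-fuzz-run',
        test={'type': 'BUILTIN_FUZZ'},
    )
    print(run['run']['arn'])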

Along with Mobile Services for backend infrastructure, Device Farm makes a compelling mobile app development platform, particularly for organizations already using AWS for website and app development.

To learn more about AWS Device Farm or to get started on your Digital Marketing initiatives, contact us.

-2nd Watch blog by Kurt Marko
