
The Top 5 Things to Avoid at AWS re:Invent 2016

The fifth annual AWS re:Invent is right around the corner, taking place November 28-December 2 in Las Vegas, Nevada. Designed for AWS customers, enthusiasts and even cloud computing newcomers, the nearly week-long conference is a great source of information and education for attendees of all skill levels. AWS re:Invent is THE place to connect, engage and discuss current AWS products and services via breakout sessions ranging from introductory and advanced to expert-level, and to hear the latest news and announcements from key AWS executives, partners and customers. This year’s agenda offers a full additional day of content, boot camps, hands-on labs, workshops and the re:Source Mini Con—a full-day technical deep dive into topics such as Big Data, Containers, IoT, Machine Learning, Security Services, and Serverless Computing.

2nd Watch is proud to be a 2016 Platinum Sponsor and attend AWS re:Invent for the fifth consecutive year. As an AWS Premier Partner with four re:Invent conferences under our belts, we have a unique insight into what attendees can expect at re:Invent 2016.  With that, our seasoned re:Invent alumni have compiled a list of the Top Five Things to Avoid at re:Invent 2016.

1. Avoid the long lines at Registration (and at the Swag Counter!)

In previous years, the re:Invent Registration Desk was open at the Venetian on Monday and Tuesday only. This year, registration opens Sunday evening from 6pm-8pm, giving attendees a couple of extra hours to check in and secure their conference badges. In addition to the extra two hours on Sunday evening, AWS is opening registration to attendees at two new locations—The Mirage and Encore—and it will remain open all week.  Of course, you can’t attend any part of re:Invent until you have your conference badge, so be sure to check in at Registration as early as possible.  This will also ensure that you get the size shirt you want from the Swag Counter!

Expert Tip:  This year, AWS has added an additional level of security and will be printing each attendee’s photograph onto their badge.  Don’t be “that guy or gal” that holds up the registration line because you have to have your photo taken on site.  Take a few minutes to upload your photo prior to re:Invent here.  BONUS: By uploading your own photo, you make sure to put your best face forward for the week.

2. Avoid Arriving Without a Plan:

The worst thing you can do at re:Invent is show up without a plan for how you will spend your week in Vegas—that includes the breakout sessions you want to attend.  With two new venues and a total of over 400 sessions (twice as many as 2015), more hands-on labs, boot camps and one-on-one engagement opportunities, AWS re:Invent 2016 offers more breadth and depth and more chances to learn from the experts than ever before.

If you haven’t already done so, be sure to check out the AWS Session Planner and start selecting the sessions that matter most to you.  For the first time, AWS is offering reserved seating for breakout sessions, which means you have the chance to build a fully-customized conference calendar.  Take advantage of this benefit by adding sessions to your re:Invent calendar now.  By doing so, you will automatically have a seat reserved in your name and you can rest easy knowing you get to attend the sessions that are most important to you.

If you need some scheduling inspiration, 2nd Watch is presenting “Lift and Evolve: Saving Money in the Cloud is Easy, Making Money Takes Help” on Tuesday, 11/29. Part of the Enterprise & Migration track, this one-hour breakout session will dive deep into the technology that allows The Coca-Cola Company to manage hundreds of AWS Accounts, hundreds of workloads, thousands of instances, and hundreds of business partners around the globe. Find it in the session catalogue by searching for “ENT206,” then be sure to add it to your AWS re:Invent calendar to reserve your seat.

Expert Tip: Be sure to download the AWS re:Invent Mobile App. The mobile app will be available approximately two weeks prior to re:Invent for iOS and Android only. For Windows phone users, there will be a mobile web version available.

The mobile app will hold your re:Invent schedule, maps from venue to venue, all other activities and reminders and will definitely help with your overall conference experience.

Lastly, when it comes to session planning, be sure to keep in mind that breakout sessions are being offered at all three venues—The Venetian, The Mirage, and Encore.  Allowing enough time to get from session to session will be key.  This year, AWS has added 30-minute breaks between sessions, but with nearly 30,000 people expected at re:Invent this year, escalators, elevators and hallways are most certainly going to be difficult to navigate. As long as you don’t stop at the casino floor to test your luck, you should still make it to the next session in ample time. Here are the estimated walking times between venues to help you plan accordingly:

  • Venetian to Mirage: 20-minute walk
  • Venetian to Encore: 15-minute walk
  • Encore to Mirage: 20-minute walk

For additional information about the AWS re:Invent agenda or to customize your attendee experience, visit https://reinvent.awsevents.com.

3. Avoid Sleeping In, Being Late or Skipping Out Entirely

With so many learning and networking opportunities, it’s easy to get caught up in exciting—yet exhausting—days full of breakout sessions, hands-on labs, training sessions and, of course, after-hours activities and parties.  Only you know how to make the most of your time at re:Invent, but if we can offer some advice…be sure to get plenty of sleep and avoid sleeping in, getting to sessions late or worse…skipping out on morning sessions entirely.  Especially when it comes to the keynote sessions on Wednesday and Thursday morning!

AWS CEO Andy Jassy will present the Wednesday morning keynote, while Amazon CTO Werner Vogels will present on Thursday morning.  Both keynotes will be full of exciting product announcements, enhancements and feature additions as well as cool technical content and enterprise customer success stories.  Don’t be the last to know because you inadvertently overslept and/or partied a little too hard the night before!

Customers don’t need to reserve a seat in either of the keynotes; however, there is a cap on the number of attendees who can watch the keynote in the keynote hall. Keynotes are offered on a first come, first served basis, so be sure to get there early.

Expert Tip: If you don’t want to wait in line to sit in the keynote hall, AWS will have many options for watching the keynote in overflow rooms. If you’re still groggy from the previous night’s events, the overflow rooms are an ideal place where you can watch the keynote with a bloody mary, mimosa or coffee.

4. Avoid Being Anti-Social

AWS re:Invent is one of the best locations to network and connect with like-minded peers and cloud experts, discover new partner offerings and, of course, let loose at the quirky after-hours experiences, attendee parties, and partner-sponsored events.

Avoid being anti-social by taking advantage of the many opportunities to network with others and meet new people. AWS has some great activities planned for conference goers, including the famed Tatonka wing-eating contest, Welcome Reception, Partner Pub Crawl, Harley Ride, re:Play Party and, new this year, the re:Invent 5K Run.

Expert Tip: Don’t forget to bring plenty of business cards.  With so many people to meet, opportunities to connect with peers and experts, and after-hours parties to attend, you’ll want to make sure to pack extra cards to avoid running out early in the week.  When you receive a business card from someone else, try to immediately take a photo of it with your smartphone and save it to a photo album dedicated solely to networking.  This will ensure that you have the details stored somewhere should you happen to misplace an important contact’s business card.

5. Avoid Forgetting to Pack That All-Too-Important Item

Whether you’re staying at The Venetian, Mirage, Encore or another property, your hotel room will be your home away from home for nearly an entire week.  Of course, every hotel will have in-room amenities and travel essentials, but inevitably, we all forget that one important item we can’t live without, especially for an entire week.  Our experts have pulled together a checklist to help you pack for your trip and ensure you have all the comforts of home and office during your week in Vegas.

Your Favorite Toiletries:

Not everyone is in love with the in-room toiletries that hotels have to offer in each of their suites. If you have a favorite, be sure to bring it. Here is a quick list to ensure you don’t forget something:

  • Shampoo
  • Conditioner
  • Soap
  • Shave Cream
  • After Shave
  • Razor
  • Deodorant
  • Lotion
  • Toothbrush
  • Toothpaste
  • Mouthwash
  • Floss
  • Hair Styling Products (if that’s your thing)
  • Contact Case & Solution
  • Spare Pair of Contacts
  • Cologne/Perfume/Body Spray

First Aid:

Whether your headache or hangover cure calls for aspirin, ibuprofen or something stronger, it’s a good idea to pack your preferred treatment along with any other first aid remedies and prescription medications you might need. Band-Aids, blister protectors and antihistamines are also recommended.

Chapstick & Lotion:

It is the desert, after all, and with dry air circulating throughout the venues, your skin (including your lips) is bound to dry out.  We recommend bringing medicated chapstick and fragrance-free lotion (fragrances in most lotions can often dry out your skin even more!) and keeping a spare with you at all times.

Breath Mints and/or Mint-flavored Gum:

No explanation necessary.

Business cards:

This is a repeat from one of our other tips but an important one to remember, so we don’t mind mentioning it again.

Chargers & Battery Packs:

Nothing is worse than being in between sessions with a 10% cell phone or laptop battery and realizing you left your chargers back in your room. We recommend bringing at least two phone chargers and two laptop chargers: One for your room and one for the backpack or briefcase you’ll be carrying throughout the conference.  Additionally, while there will be several charging stations throughout re:Invent (and outlets on most every wall), it’s a good idea to bring a battery pack with several hours of charging time just in case you can’t find an open spot to plug in.

Water Bottle:

You will definitely want to stay hydrated throughout the week, and the tiny cups offered at the water stations just won’t quench your thirst quite the way you will need them to.  It’s a good idea to pack a water bottle (we recommend one that can hold 17 oz) so that you avoid having to refill often and have plenty of thirst-quenching liquid to keep you hydrated throughout the day.

Comfortable shoes:

Your shoes will be your saving grace by the end of each day, so be sure to bring a pair or two that you feel comfortable walking several thousand steps in.

Appropriate Attire:

While business casual attire is often recommended at re:Invent, there can be many interpretations of what is appropriate.  Our advice is to pack clothing that you would feel confident wearing should you run into your boss or someone you wish to impress.  Jeans are perfectly acceptable in either case, but make sure to use good judgement overall when selecting your attire for sessions, dinners and parties you plan to attend.

Cash:

In addition to needing cash for meals on the go, bar tabs or that faux diamond-encrusted figurine you’ve been eyeing in the gift shop, you’ll want to bring a little extra cash if you plan to test your luck at any of the casinos.  There are ATMs on the casino floors, but they typically charge a $3-$5 service fee in addition to your bank’s own fees.

Notebook & Pen/Pencil:

It’s always a good idea to bring a good ol’-fashioned notebook with you to your sessions.  Not only is it a fail-proof way to capture the handy tips and tricks you’ll be learning, it’s also the quietest option.  Think about it – if 100 people in your breakout session were all taking notes on a laptop, it would be pretty distracting.  Be bold. Be respectful. Be the guy/gal that uses paper and a pen.

A Few Final Thoughts

Whether this is your first trip to AWS re:Invent or you’re a seasoned re:Invent pro, you’re sure to walk away with an increased knowledge of how cloud computing can better help your business, tips and tricks for navigating new AWS products and features, and a week’s worth of memories that will last a lifetime.  We hope you make the most of your re:Invent 2016 experience and take advantage of the incredible education and networking opportunities that AWS has in store this year.

Last but certainly not least, we hope you take a moment during your busy week to visit the 2nd Watch Cloud Solutions Center located at booth #825 in the Expo Hall where you can explore 2nd Watch’s Managed Cloud Services, pick up a cool 2nd Watch t-shirt and find out how you can win one of two Segway Mini Pros.  We’re excited to attend our fifth AWS re:Invent and look forward to seeing you there!

-Katie Ellis, Marketing Manager


Benchmarking Amazon Aurora

Amazon Web Services, Microsoft Azure and Google Cloud Platform are the clear leaders in the Cloud Service Provider (CSP) space. The competition between these players drives innovation and results in customers receiving more benefits and better business outcomes than they would from older technologies. The database as a service space is a massive opportunity for customers to gain more in terms of performance, scalability, and high availability without the overhead or long-term contracts of traditional solutions.

At 2nd Watch we see a lot of customers who want to utilize database as a service to get better performance without the overhead and hassle of traditional database products.  Accordingly, we watch all three Cloud Service Providers and their services very closely and run benchmark comparisons on their products. In our opinion, Amazon Aurora is probably the best-performing database as a service platform we’ve seen to date, while still being very cost-effective for the performance benefits.

Amazon Aurora is AWS’s fully-managed MySQL-compatible relational database service. Aurora was built to provide commercial-grade performance and availability, while being as easy to use and as cost-effective as an open source engine. Naturally, a lot of people are interested in Aurora’s performance, and a number of people have tried to benchmark it. Benchmarking accurately is not always easy to do, and it’s why you see different results from various benchmarks.  We’ve helped many of our customers benchmark Aurora to understand how it would perform to meet their needs, and in many cases we have migrated, or are actively working on migrating, customers to Aurora.

Recently, Google benchmarked their own Cloud SQL data against Amazon Aurora. Since a number of our customers use Aurora, we were interested in digging in to take a deeper look at their findings.

The chart below shows the benchmark results published by Google.

Graph 1

Original Source: Google. See: https://cloudplatform.googleblog.com/2016/08/Cloud-SQL-Second-Generation-performance-and-feature-deep-dive.html

In the blog post about this comparison, Google claims that 1) Cloud SQL performs better at low thread counts and 2) customers do not use a lot of threads, so the results at higher thread counts do not matter. Our experience is that the latter claim is not true, especially among our enterprise customers.  The findings also made us question our own earlier benchmarking, so we decided to benchmark again to see whether our results would fall in line with Google’s. Below you can see an overlay of our data on top of their original graph. Since we do not have the raw results from Google’s tests, an overlay is the best we can do for comparison.

To perform our own benchmarks, we used Terraform to stand up a VPC, our Aurora instances, and the hosts from which we’d run the sysbench tests, all within the same Availability Zone.  We used the same settings and commands as outlined in the Google blog post so that our results could be compared against theirs.  We shared our results with the Aurora team at AWS, and they confirmed that they saw very similar results when they tested Aurora with sysbench in a configuration similar to ours.

Full data from our tests, the R scripts, and the automation to stand up the infrastructure can be found on GitHub.

Here is a summary of our observations on this study:

1. The data for Aurora’s performance is incorrect

This data does not match the Aurora performance we see when we run the tests ourselves. Without any special tuning, we saw higher performance results for Aurora. Without the original data from the tests Google conducted, it is difficult to say why their results differ. All we can say is that, from our testing, Aurora does appear to be more performant and is clearly the database of choice, especially when dealing with higher thread counts.

2nd Watch overlay in purple:

Graph 2

Results from our tests on Aurora:

Both studies agree that Aurora has a significant performance advantage above a certain thread count. Google’s test shows the previous release of Aurora having an advantage at 16 threads or more (Cloud SQL starts higher but then drops below Aurora). Our test shows the latest version of Aurora having an advantage at 8 threads or more, with results between the two databases being fairly comparable at 4 or fewer threads.

2. The study claims that customers don’t need many threads

Google is suggesting that you should only focus on the results with lower thread counts. Their blog post says that Cloud SQL “performed better than Aurora when active thread count is low, as is typical for many web applications.” The claim that low thread counts are typical for web applications (and thus what people should focus on) is inaccurate. There are a large number of AWS customers who run applications using Aurora, and it is very common for them to run with hundreds of threads (and many customers choose to run with thousands). As we mentioned previously, both our study and Google’s study show that Aurora has an advantage at higher thread counts.

There are a number of areas where Amazon improved on the performance of MySQL when developing Aurora. One of them was taking advantage of high thread counts to deliver better performance. With Aurora, customers with higher thread counts will typically see a large performance advantage over MySQL and Cloud SQL.

3. Aurora’s largest instance outperforms Cloud SQL’s largest instance by 3X

Google’s benchmark was done on an r3.4XL, which is only half the size of Aurora’s largest instance (the r3.8XL). We understand that this study used Cloud SQL’s largest machine and that Google was trying to compare against a comparable Aurora instance size. But a customer is more likely to run a workload like the one in Google’s benchmark on Aurora’s largest machine, the r3.8XL, because the 8XL’s larger cache would improve performance. We ran the test with Aurora’s r3.8XL instance, and the results were more than 3X higher than Cloud SQL’s performance. It was a struggle to fit Aurora’s r3.8XL performance on the same scale.

Aurora on r3.8xlarge (scale adjusted):

Graph 3

A few more things to consider

We’ve already pointed out Aurora’s performance advantage, which is evident in both studies. Aurora has a number of other advantages over Google Cloud SQL. One example is scalability. We showed earlier that Aurora’s largest instance exceeds the performance of Cloud SQL’s largest instance by 3X in this benchmark. Aurora can scale up to handle more storage (64 TB for Aurora vs 10 TB for Cloud SQL). Aurora supports up to 15 low-latency read replicas that have very low replica lag. Aurora replica lag is typically only a few milliseconds, whereas traditional binlog-based replication, which is used by MySQL and Cloud SQL, can result in replica lag on the order of seconds or even minutes. Aurora allows up to 15 replicas that can act as failover targets without impacting the performance of the primary instance, whereas Cloud SQL allows only one failover target. Another advantage is that Aurora has extremely fast crash recovery, as its log-structured storage system does not require the replay of redo logs after a database crash.

Benchmarking Aurora

If you’re interested in benchmarking Aurora, check out AWS’s Aurora benchmarking guide here.

If you’re thinking about benchmarking Aurora yourself, there are two common errors to watch out for. First, make sure you put the database and the client in the same Availability Zone or you will pay a latency and throughput penalty. This is what customers actually do when they use Aurora in production. Second, make sure that the client instance driving traffic to Aurora has enhanced networking enabled. The benchmarking guide mentioned above has instructions for how to do that.
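If you’d like to sanity-check both of those conditions before kicking off a run, the sketch below shows one way to do it with boto3 (Python). The instance ID, DB instance identifier, and region are hypothetical placeholders; adjust them to match your environment.

```python
# Minimal sketch: verify the benchmark client and the Aurora instance share an
# Availability Zone and that the client has enhanced networking (ENA) enabled.
# Assumes boto3 and AWS credentials are configured; the identifiers below are
# hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
rds = boto3.client("rds", region_name="us-east-1")

client_instance_id = "i-0123456789abcdef0"   # hypothetical sysbench client instance
aurora_instance_id = "my-aurora-instance"    # hypothetical Aurora DB instance identifier

instance = ec2.describe_instances(InstanceIds=[client_instance_id])["Reservations"][0]["Instances"][0]
client_az = instance["Placement"]["AvailabilityZone"]
ena_enabled = instance.get("EnaSupport", False)

db_az = rds.describe_db_instances(
    DBInstanceIdentifier=aurora_instance_id
)["DBInstances"][0]["AvailabilityZone"]

print("Client AZ: {}  Aurora AZ: {}  same AZ: {}".format(client_az, db_az, client_az == db_az))
print("Enhanced networking (ENA) on client: {}".format(ena_enabled))
```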

Making good use of benchmarking data is tricky. You’re trying to build or migrate your application and you want to understand how your new database is likely to behave, now and in the future. There’s no magic benchmark that will exactly match your production workload, but you need to make a decision anyway. What should you do?

Here’s some general advice we try to follow for our own projects as well.

  1. Plan for success by testing at scale: The last thing you want to do is make design choices that hurt you when your application starts to really take off. That’s when you want to celebrate with your team and build the next great feature, not attempt an emergency database migration. Think about how your workload might change as your application grows (higher thread counts, more data, higher TPS, more tables…) and build that into your tests.
  2. Choose benchmarking tests that align to your application: Do you expect your data set to fit in memory, or will your database be disk-bound? Will your traffic pattern be spiky? Write-heavy? You can find a large number of benchmarks for any major technology. Try to identify a set of tests that are as close as possible to your real-world application. Don’t just accept the first benchmarking study you find. If you can re-run your actual production workload as a test, even better.
  3. Know that your benchmarks are not the real world and plan for that: Even the best benchmarking study you run will be slightly wrong (not the same as the real world). Make a note of the assumptions you made in your tests and keep an eye on your database in production. Watch for signs that your production workload is moving into untested areas and adjust your tests accordingly.
  4. Get help: The best way to get expert advice is to ask an expert. Cloud technology is complex and not something you should seek guidance on from partners who are “generalists.” To test Aurora at your company, visit us at www.2ndwatch.com.

 

-Chris Nolan, Director of Product & Lars Cromley, Sr. Product Manager


AWS a Leader Again in 2016 Gartner Magic Quadrant

Gartner’s 2016 Magic Quadrant for Cloud Infrastructure as a Service, Worldwide was released today, evaluating 10 different vendors for ‘completeness of vision’ and ‘ability to execute.’ Although AWS continues to face more competition, we know it remains a market share leader in the industry. Gartner has placed AWS as having both the furthest completeness of vision and the highest ability to execute.

Because the market for cloud IaaS is in a state of upheaval, with many service providers shifting their strategies after failing to gain enough market traction, 2nd Watch recommends utilizing Gartner’s Magic Quadrant to help eliminate the confusion around the various providers in the Infrastructure as a Service sector.

Access the Gartner Report

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

-Nicole Maus, Marketing Manager


And the top AWS products of Q1 2016 are…

AWS is an innovation lab. The world’s top cloud provider releases hundreds of updates and dozens of major services every year. So, which products are companies loving right now?

We analyzed data from our customers, across a combined 100,000+ instances running monthly, for Q1 of 2016. The most popular AWS products, represented by the percentage of 2nd Watch customers deploying them in the first quarter, include Amazon S3 for storage and Data Transfer (100% each), EC2 (99%), SNS or Simple Notification Service (89%) and Key Management Service for encryption (87%). These services are standard in most AWS deployments, and have been consistent in the last year or so – no surprises here.

Perhaps less predictable was the use of other AWS products, such as Redshift, the data warehouse service introduced in 2012 as a low-cost alternative to systems from HP, Oracle and IBM. The fact that 17% of our customers are using Redshift demonstrates how quickly innovative cloud technology can carve a strong position in a legacy software market. Enterprises are starting to move away from legacy systems to Redshift because it can handle massive quantities of data with exceptional response times.

Other relatively new AWS products making rapid progress with AWS users include the high-performing NoSQL database service DynamoDB (27%), Lambda, a serverless compute service (21%), and WorkSpaces, a secure virtual desktop service (19%).

Just three years ago, enterprises were primarily using the core compute and storage services on AWS. As companies become more comfortable moving business-critical IT assets into the cloud, they’re more likely to leverage the broader AWS portfolio. We expect growth in areas such as database, desktop and management tools to continue in the coming months.

Download the Top 30 AWS Products infographic to find out which others made the list.

-Jeff Aden, EVP Strategic Business Development & Marketing


Database Migration Service for RDS

For database administrators and engineers, migrating a database can be a major headache. It’s such a headache that it actually prohibits some teams from migrating to AWS’ Relational Database Service (RDS), even though doing so would save them time and money in the long run.

Imagine you’re a DBA for Small Business Y and you want to manage your data in RDS, but you have three terabytes of data with a ton of tables, foreign keys and many dependencies. Back in the day, migrating your SQL Server database to RDS might have looked something like this:

  1. Coordinate with Product Leads to find a time when your business can handle up to a 24-hour outage of the source database.
  2. Dump all the existing data into a backup.
  3. Restore the data on an intermediary EC2 SQL Server instance.
  4. Connect to the new RDS instance.
  5. Generate metadata script from the source or intermediary instance.
  6. Execute metadata script on target RDS instance.
  7. Migrate the data to the RDS instance using SQL Server’s Import tool.
  8. Optional: Encounter complications such as import failures, loss of metadata integrity, loss of data, and extremely slow import speeds.
  9. Cry a lot and then sleep for days.

Enter AWS Database Migration Service. This new tool from AWS allows DBAs to complete migrations to RDS, or even to a database instance on EC2, with minimal steps and minimal downtime to the source database. What does that mean for you? No 2AM migrations and no tears.

Migrations with AWS DMS have three simple steps (a scripted sketch follows the list):

  1. Provision a replication instance
  2. Define source and target endpoints
  3. Create one or more tasks to migrate data between source and target
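
If you prefer to script the migration rather than click through the console, here is a minimal boto3 (Python) sketch of those three steps. Every identifier, hostname, and credential shown is a hypothetical placeholder, and the parameters are only a small subset of what the DMS APIs accept.

```python
# Minimal sketch of the three DMS steps using boto3 (Python). All identifiers,
# endpoints, and credentials below are hypothetical placeholders.
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# 1. Provision a replication instance
instance = dms.create_replication_instance(
    ReplicationInstanceIdentifier="my-dms-instance",
    ReplicationInstanceClass="dms.t2.medium",
    AllocatedStorage=100,
)

# 2. Define source and target endpoints (SQL Server to RDS SQL Server here)
source = dms.create_endpoint(
    EndpointIdentifier="source-sqlserver",
    EndpointType="source",
    EngineName="sqlserver",
    ServerName="onprem-db.example.com",
    Port=1433,
    Username="dms_user",
    Password="********",
    DatabaseName="mydb",
)
target = dms.create_endpoint(
    EndpointIdentifier="target-rds-sqlserver",
    EndpointType="target",
    EngineName="sqlserver",
    ServerName="my-rds-instance.abc123xyz.us-east-1.rds.amazonaws.com",
    Port=1433,
    Username="dms_user",
    Password="********",
    DatabaseName="mydb",
)

# 3. Create a task that does a full load and keeps replicating changes until cutover
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-everything",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}
task = dms.create_replication_task(
    ReplicationTaskIdentifier="migrate-mydb",
    SourceEndpointArn=source["Endpoint"]["EndpointArn"],
    TargetEndpointArn=target["Endpoint"]["EndpointArn"],
    ReplicationInstanceArn=instance["ReplicationInstance"]["ReplicationInstanceArn"],
    MigrationType="full-load-and-cdc",   # full load plus ongoing replication
    TableMappings=json.dumps(table_mappings),
)
```

Once the replication instance is available, the task can be started (for example with the console or the start_replication_task API) and its per-table progress watched in the dashboard described below.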

The service automates table-by-table migration into your target database without having to use any backups, dump files, or manually administering an intermediary database server. In addition, you can use the service to continue replicating changes from the source environment until you are ready to cutover. That means your application’s downtime will be just a few minutes instead of several hours or even days.

Another great feature of DMS is that you can watch the progress of each table’s migration in the AWS console. The dashboard allows users to see, in real time, how the migration is going and exactly where any breakages occur.

Database Migration Svc 1

 

Database Migration Svc 2

If you are planning to use the AWS Database Migration Service soon, here are a few tips to streamline your process:

  • In your target instance, set up your table structure prior to migrating the data; the service will not automatically set up your foreign keys and indexes. Create your metadata using the AWS Schema Conversion Tool (or a simple mysqldump), and make sure to use the truncate option during the migration so that the tables you create aren’t wiped (see the task-settings sketch after this list).
  • If you think you may have extra-large LOBs, use Full LOB Mode to avoid data truncation.
  • Additional best practices for using DMS can be found here.
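
To make the first two tips concrete, here is a hedged sketch of the relevant pieces of a DMS task-settings document. The key names reflect our reading of the DMS task-settings schema, so verify them against the current documentation before relying on them.

```python
# Hedged sketch of a partial DMS task-settings document, expressed as a Python
# dict. Key names reflect our understanding of the schema; confirm against the
# AWS documentation.
import json

task_settings = {
    "TargetMetadata": {
        "FullLobMode": True,        # migrate large LOBs in chunks instead of truncating them
        "LobChunkSize": 64,         # chunk size (KB) used when Full LOB Mode is enabled
    },
    "FullLoadSettings": {
        # Truncate pre-created tables rather than dropping and recreating them,
        # so the foreign keys and indexes you set up ahead of time survive.
        "TargetTablePrepMode": "TRUNCATE_BEFORE_LOAD",
    },
}

# These settings would be passed to create_replication_task as:
#   ReplicationTaskSettings=json.dumps(task_settings)
print(json.dumps(task_settings, indent=2))
```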

Enjoy!

-Trish Clark, Oracle DBA


Enter AWS Lambda

AWS has managed to transform the traditional datacenter model into a feature-rich platform and has been constantly adding new services to meet business and consumer needs. As virtualization has changed the way infrastructure is now built and managed, the ‘serverless’ execution model has become a viable method of reducing costs and simplifying management. A few years ago, the infrastructure required to host a typical application or service required the setup and management of physical hardware, operating systems and application code. AWS’ offerings have grown to include services such as RDS, SES, DynamoDB and ElastiCache which provide a subset of functionality without the requirement of having to manage the entire underlying infrastructure on which those services actually run.

Enter AWS Lambda.

Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can use AWS Lambda to extend other AWS services with custom logic, or create your own back-end services that operate at AWS scale, performance, and security.

In a nutshell, Lambda provides a service that executes custom code without having to manage the underlying infrastructure on which that code is executed. The administration of the underlying compute resources, including server and operating system maintenance, capacity provisioning, automatic scaling, code monitoring, logging, and code and security patch deployment, is eliminated. With AWS Lambda, you pay only for what you use, and are charged based on the number of requests for your functions and the time your code executes. This allows you to eliminate the overhead of paying for instances (by the hour or reserved) and their administration. Why build an entire house if all you need is a kitchen so you can cook dinner? In addition, the service also automatically scales to meet capacity requirements. Again, less complexity and overhead than managing EC2 Auto Scaling groups.

Lambda’s compute service currently supports Node.js (JavaScript), Python, and Java (Java 8 compatible). Your code can include existing libraries, even native ones. Code is executed as a unit referred to as a Function.

Here’s AWS’ Jeff Barr’s simple description of the service and how it works:

You upload your code and then specify context information to AWS Lambda to create a function. The context information specifies the execution environment (language, memory requirements, a timeout period, and IAM role) and also points to the function you’d like to invoke within your code. The code and the metadata are durably stored in AWS and can later be referred to by name or by ARN (Amazon Resource Name). You can also include any necessary third-party libraries in the upload (which takes the form of a single ZIP file per function).

After uploading, you associate your function with specific AWS resources (a particular S3 bucket, DynamoDB table, or Kinesis stream). Lambda will then arrange to route events (generally signifying that the resource has changed) to your function.

When a resource changes, Lambda will execute any functions that are associated with it. It will launch and manage compute resources as needed in order to keep up with incoming requests. You don’t need to worry about this; Lambda will manage the resources for you and will shut them down if they are no longer needed.

Lambda Functions can be invoked by triggers from changes in state or data in services such as S3, DynamoDB, Kinesis, SNS and CloudTrail, after which the output can be sent back to those same services (though it does not have to be). Lambda handles listening, polling, queuing and auto-scaling, and spins up as many workers as needed to match the rate of change of the source data.
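
As a concrete illustration, here is a minimal Python function handler for an S3 object-created event. The processing step is a hypothetical placeholder; the event fields shown follow the standard S3 notification structure.

```python
# Minimal sketch of a Python Lambda Function triggered by S3 object-created
# events. What you do with the object (resize, transcode, index, etc.) is up
# to you; here we simply fetch it and report its size.
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print("New object: s3://{}/{}".format(bucket, key))

        # Hypothetical processing step: fetch the object and report its size.
        obj = s3.get_object(Bucket=bucket, Key=key)
        print("Object size: {} bytes".format(obj["ContentLength"]))

    return {"processed": len(event["Records"])}
```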

A few common use cases include:

  • S3 + Lambda (Dynamic data ingestion) – Image re-sizing, Video Transcoding, Indexing, Log Processing
  • Direct Call + Lambda (Serverless backend) – Microservices, Mobile backends, IoT backends
  • Kinesis + Lambda (Live Stream Processing) – Transaction Processing, Stream analysis, Telemetry and Metering
  • SNS + Lambda (Custom Messages) – Automating alarm responses, IT Auditing, Text to Email Push

Additionally, data can be sent in parallel to separate Functions to decrease the amount of time required for data that must be processed or manipulated multiple times. This could theoretically be used to perform real-time analytics and data aggregation from a source such as Kinesis.

Function overview

  • Memory is specified ranging from 128MB to 1GB, in 64MB increments. Disk, network and compute resources are provisioned based on the memory footprint. Lambda tells you how much memory is used, so this setting can be tuned.
  • They can be invoked on-demand via the CLI and AWS Console, or subscribed to one or multiple event sources (e.g. S3, SNS). And you can reuse the same Function for those event sources.
  • Granular permissions can be applied via IAM such as IAM Roles. At a minimum, logging to CloudWatch is recommended.
  • Limits to resource allocation, such as 512MB of /tmp space, 1024 file descriptors and a 50MB deployment package size, can be found at http://docs.aws.amazon.com/lambda/latest/dg/limits.html.
  • Multiple deployment options exist, including direct authoring via the AWS Console, packaging code as a zip (see the sketch after this list), and 3rd party plugins (Grunt, Jenkins).
  • Functions are stateless, so persisting data means depending on another service such as S3 or DynamoDB.
  • Monitoring and debugging can be accomplished using the Console Dashboard to view CloudWatch metrics such as requests, errors, latency and throttling.

Invoking Lambda Functions can be achieved using Push or Pull methods. In the event of a Push from S3 or SNS, retries occur automatically three times, and invocations are unordered. One event equals one Function invocation. Pull, on the other hand (Kinesis & DynamoDB), is ordered and will retry indefinitely until the data expires. Resource policies (used in the Push model) can be defined per Function and allow for cross-account access. IAM roles (used for Pull) derive permission from the execution role to read data from a particular stream.
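
For the Push model, the resource policy mentioned above can be attached with a single API call. The sketch below is a hedged example using boto3 (Python); the bucket ARN and account ID are hypothetical placeholders.

```python
# Hedged sketch: grant S3 (a "push" event source) permission to invoke the
# Function. The bucket name and account ID are hypothetical placeholders.
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

lambda_client.add_permission(
    FunctionName="s3-object-logger",
    StatementId="allow-s3-invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::my-example-bucket",
    SourceAccount="123456789012",
)
```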

Pricing

Lambda uses a fine-grained pricing model based on the number of requests made AND the execution time of those requests. Each month, the first 1 million requests are free with a $0.20 charge per 1 million requests thereafter. Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100ms and takes into account the amount of memory allocated to a function. The execution cost is $0.00001667 for every GB-second used.
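
To make the pricing model concrete, here is a small worked example in Python using the rates quoted above. The workload numbers are hypothetical, and any free compute allotment is ignored for simplicity; check the AWS pricing page for current figures.

```python
# Worked example of the Lambda pricing model described above, using the rates
# quoted in this post. The workload numbers are hypothetical.
import math

requests_per_month = 5000000   # hypothetical monthly invocations
avg_duration_ms = 250          # hypothetical average execution time
memory_mb = 512                # memory allocated to the Function

# Request charge: first 1 million requests are free, then $0.20 per million.
billable_requests = max(requests_per_month - 1000000, 0)
request_cost = billable_requests / 1000000.0 * 0.20

# Duration charge: billed per 100ms (rounded up), weighted by memory (GB-seconds).
billed_seconds_per_request = math.ceil(avg_duration_ms / 100.0) * 100 / 1000.0
gb_seconds = requests_per_month * billed_seconds_per_request * (memory_mb / 1024.0)
duration_cost = gb_seconds * 0.00001667

print("Request cost:  ${:.2f}".format(request_cost))    # $0.80
print("Duration cost: ${:.2f}".format(duration_cost))   # ~$12.50
print("Total:         ${:.2f}".format(request_cost + duration_cost))
```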

Additional details regarding the service can be found at https://aws.amazon.com/lambda/. If you need help getting started with the service, contact us at 2nd Watch.

-Ryan Manikowski, Cloud Consultant

 

 
