
AWS has transformed the traditional datacenter model into a feature-rich platform and is constantly adding new services to meet business and consumer needs. Just as virtualization changed the way infrastructure is built and managed, the ‘serverless’ execution model has become a viable method of reducing costs and simplifying management. A few years ago, hosting a typical application or service meant setting up and managing physical hardware, operating systems and application code. AWS’ offerings have since grown to include services such as RDS, SES, DynamoDB and ElastiCache, which provide specific functionality without requiring you to manage the entire underlying infrastructure on which those services actually run.

Enter AWS Lambda.

Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can use AWS Lambda to extend other AWS services with custom logic, or create your own back-end services that operate at AWS scale, performance, and security.

In a nutshell, Lambda executes custom code without requiring you to manage the underlying infrastructure on which that code runs. The administration of the underlying compute resources, including server and operating system maintenance, capacity provisioning, automatic scaling, code monitoring, logging, and code and security patch deployment, is eliminated. With AWS Lambda, you pay only for what you use and are charged based on the number of requests for your functions and the time your code executes. This eliminates the overhead of paying for instances (by the hour or reserved) and their administration. Why build an entire house if all you need is a kitchen so you can cook dinner? The service also scales automatically to meet capacity requirements, with less complexity and overhead than managing EC2 Auto Scaling groups.

Lambda’s compute service currently supports Node.js (JavaScript), Python, and Java (Java 8 compatible). Your code can include existing libraries, even native ones. Code is executed as what Lambda refers to as a Function.
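To illustrate, here is a minimal sketch of a Function written in Python. The handler name and the shape of the event payload are purely illustrative, but the (event, context) signature is what Lambda passes to every Python handler.

    # Minimal Lambda Function sketch in Python. Lambda calls the handler with
    # the triggering event and a context object describing the invocation.
    def lambda_handler(event, context):
        # 'event' carries data from the invoking service or caller;
        # 'context' exposes runtime details such as the request id and
        # remaining execution time.
        name = event.get("name", "world")   # illustrative payload field
        return {"message": "Hello, {}!".format(name)}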

Here’s a simple description of the service and how it works from AWS’ Jeff Barr:

You upload your code and then specify context information to AWS Lambda to create a function. The context information specifies the execution environment (language, memory requirements, a timeout period, and IAM role) and also points to the function you’d like to invoke within your code. The code and the metadata are durably stored in AWS and can later be referred to by name or by ARN (Amazon Resource Name). You can also include any necessary third-party libraries in the upload (which takes the form of a single ZIP file per function).

After uploading, you associate your function with specific AWS resources (a particular S3 bucket, DynamoDB table, or Kinesis stream). Lambda will then arrange to route events (generally signifying that the resource has changed) to your function.

When a resource changes, Lambda will execute any functions that are associated with it. It will launch and manage compute resources as needed in order to keep up with incoming requests. You don’t need to worry about this; Lambda will manage the resources for you and will shut them down if they are no longer needed.
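As a rough sketch of what that upload and context information look like programmatically, the following uses boto3 (the AWS SDK for Python) to register a zipped deployment package as a Function. The function name, ZIP file, handler and IAM role ARN below are placeholders, not a prescribed setup.

    # Hedged sketch: registering a deployment package as a Lambda Function
    # with boto3. All names and ARNs below are placeholders.
    import boto3

    lambda_client = boto3.client("lambda")

    with open("my_function.zip", "rb") as f:      # hypothetical deployment package
        zipped_code = f.read()

    response = lambda_client.create_function(
        FunctionName="ThumbnailGenerator",                      # placeholder name
        Runtime="python2.7",                                     # execution environment
        Role="arn:aws:iam::123456789012:role/lambda-exec-role",  # placeholder IAM role
        Handler="my_function.lambda_handler",                    # file.function to invoke
        Code={"ZipFile": zipped_code},
        Timeout=30,         # seconds before an invocation is terminated
        MemorySize=128,     # MB; also drives CPU and network allocation
    )
    print(response["FunctionArn"])   # the ARN the Function can later be referred to by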

Lambda Functions can be invoked by triggers from changes in state or data in services such as S3, DynamoDB, Kinesis, SNS and CloudTrail, and the output can then be sent back to those same services (though it does not have to be). Lambda handles listening, polling, queuing and auto-scaling, and spins up as many workers as needed to match the rate of change of the source data.

A few common use cases include:

  • S3 + Lambda (Dynamic data ingestion) – Image re-sizing, Video Transcoding, Indexing, Log Processing
  • Direct Call + Lambda (Serverless backend) – Microservices, Mobile backends, IoT backends
  • Kinesis + Lambda (Live Stream Processing) – Transaction Processing, Stream analysis, Telemetry and Metering
  • SNS + Lambda (Custom Messages) – Automating alarm responses, IT Auditing, Text to Email Push

Additionally, data can be sent in parallel to separate Functions to decrease the amount of time required for data that must be processed or manipulated multiple times. This could theoretically be used to perform real-time analytics and data aggregation from a source such as Kinesis.
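As a concrete illustration of the S3 + Lambda pattern above, the sketch below shows a Python handler that walks the records of a standard S3 event notification; the processing step itself is left as a placeholder.

    # Hedged sketch of an S3-triggered Function: each record in the event
    # identifies a bucket and object key that changed.
    import boto3

    s3 = boto3.client("s3")

    def lambda_handler(event, context):
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]     # note: keys arrive URL-encoded
            obj = s3.get_object(Bucket=bucket, Key=key)
            print("Processing {} from {} ({} bytes)".format(key, bucket, obj["ContentLength"]))
            # ... resize the image, index the document, parse the log, etc.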

Function overview

  • Memory is specified ranging from 128MB to 1GB, in 64MB increments. Disk, network and compute resources are provisioned based on the memory footprint. Lambda tells you how much memory is used, so this setting can be tuned.
  • Functions can be invoked on-demand via the CLI and AWS Console, or subscribed to one or more event sources (e.g. S3, SNS), and the same Function can be reused across those event sources.
  • Granular permissions can be applied via IAM such as IAM Roles. At a minimum, logging to CloudWatch is recommended.
  • Limits to resource allocation such as 512MB /tmp space, 1024 file descriptors and 50MB deployment package size can be found at http://docs.aws.amazon.com/lambda/latest/dg/limits.html.
  • Multiple deployment options exist including direct authoring via the AWS Console, packaging code as a zip, and 3rd party plugins (Grunt, Jenkins).
  • Functions are stateless, so persisting data between invocations means relying on another service such as S3 or DynamoDB (see the sketch after this list).
  • Monitoring and debugging can be accomplished using the Console Dashboard to view CloudWatch metrics such as requests, errors, latency and throttling.
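Because Functions are stateless, anything worth keeping has to be written to an external store before the invocation ends. Below is a minimal sketch of that pattern, assuming a pre-existing DynamoDB table named lambda-results; the table name and item attributes are purely illustrative.

    # Hedged sketch of working around statelessness: results are written to
    # DynamoDB, since nothing held in memory survives between invocations.
    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("lambda-results")    # hypothetical table

    def lambda_handler(event, context):
        item = {
            "request_id": context.aws_request_id,   # unique per invocation
            "payload": str(event),
        }
        table.put_item(Item=item)                   # durable output lives outside Lambda
        return item["request_id"]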

Invoking Lambda functions can be achieved using Push or Pull methods. In the Push model (e.g. a Push from S3 or SNS), events are unordered, retries occur automatically up to 3 times, and one event equals one Function invocation. The Pull model (Kinesis and DynamoDB), on the other hand, is ordered and will retry indefinitely until the data expires. Resource policies (used in the Push model) can be defined per Function and allow for cross-account access. IAM roles (used for Pull) give the execution role permission to read data from a particular stream.
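For the Push model, a resource policy is what authorizes the event source to invoke the Function. A hedged sketch using boto3's add_permission follows; the function name, bucket ARN and account id are placeholders.

    # Hedged sketch: allowing S3 (a push-based event source) to invoke a Function.
    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.add_permission(
        FunctionName="ThumbnailGenerator",           # placeholder function
        StatementId="AllowS3Invoke",                 # unique statement id
        Action="lambda:InvokeFunction",
        Principal="s3.amazonaws.com",                # the service doing the pushing
        SourceArn="arn:aws:s3:::my-upload-bucket",   # placeholder bucket ARN
        SourceAccount="123456789012",                # placeholder account id
    )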

Pricing

Lambda uses a fine-grained pricing model based on the number of requests made AND the execution time of those requests. Each month, the first 1 million requests are free, with a $0.20 charge per 1 million requests thereafter. Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100ms, and the charge takes into account the amount of memory allocated to the function. The execution cost is $0.00001667 for every GB-second used.
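As a back-of-the-envelope example using the figures above (a hypothetical workload of 3 million requests per month at 512MB and 200ms each, ignoring any free duration allowance to keep the math simple):

    # Rough monthly cost sketch for a hypothetical workload.
    requests = 3000000            # invocations per month
    avg_duration_s = 0.2          # average run time, already rounded up to 100ms
    memory_gb = 0.5               # 512MB allocated

    request_cost = max(requests - 1000000, 0) / 1000000.0 * 0.20
    gb_seconds = requests * avg_duration_s * memory_gb
    duration_cost = gb_seconds * 0.00001667

    print(request_cost)    # $0.40 for the 2 million requests beyond the free tier
    print(duration_cost)   # ~$5.00 for 300,000 GB-seconds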

Additional details regarding the service can be found at https://aws.amazon.com/lambda/. If you need help getting started with the service, contact us at 2nd Watch.

-Ryan Manikowski, Cloud Consultant

 

 
