
AWS Device Farm Simplifies Mobile App Testing

Customized mobile device digital marketing gets a lot easier

When marketers think digital, they think mobile, but the best way to reach people on their smartphones is an app, not a website. Still, mobile apps are a double-edged sword for companies. They deliver more users with higher engagement but are also harder and more costly to develop and test. Given that mobile devices are inherently connected, the first cloud services emerged to simplify app development. Mobile backends and SDKs like Facebook Parse, Kumulos or AWS Mobile Services tackled backend services such as data management, synchronization, notification and analytics. Real-world testing is the latest service, courtesy of AWS Device Farm, which provides virtual access to myriad mobile devices and operating environments. Device Farm, released in July, allows developers to easily test apps on hundreds of combinations of hardware and OS (with a constantly growing list) using either custom test scripts or a standard AWS compatibility suite. Although the service launched targeting the most acute problem, testing on fragmented Android, it now supports iOS as well. But the cloud service isn’t just able to provide instant access to a multitude of devices for hardware-specific tests – it also allows testing on multiple devices in parallel, which greatly cuts test time.

Bootstrapping mobile development with cloud services can yield huge dividends for organizations wanting to better connect with customers, employees and partners. Not only are there more mobile than desktop users, but their usage is heavier. The average adult in the US spends almost three hours per day consuming digital content on a mobile device, 11% more than just last year. This means that businesses without a mobile strategy don’t have a digital strategy at all.

[Figure: Mobile engagement statistics, source: Meeker]

The problem is that providing a richer, customized, differentiated experience requires building a custom mobile app – a task that’s made more daunting by the cornucopia of devices in use. It means supporting multiple versions of two operating systems and countless hardware variations. Although Apple users generally upgrade to the latest iOS release within months, the latest Android development stats show four versions with at least 13% usage. Worse yet, a 2015 OpenSignal survey of hundreds of thousands of Android devices found more than 24,000 distinct device types. Such diversity makes developing and thoroughly testing mobile apps vastly more complex than a website or PC application. One mobile app developer does QA testing on 400 different Android devices for every app – a testing nightmare that’s even worse when you consider that the mobile app release cycle is measured in weeks, not months. If ever a problem was in need of a virtualized cloud service, this is it; and AWS has delivered.

[Figure: Android OS version usage, August 2015]

Device Farm takes an app archive (.apk file for Android or .ipa for iOS) and tests it against either custom test scripts or an AWS compatibility suite that uses a fuzz test of random events. Test projects consist of the test suite itself (Device Farm supports five scripting languages), a device pool (specific hardware and OS versions) and any predefined device state such as other installed apps, required local data and device location. Aggregate results are presented on a summary screen, with details, including any screenshots, performance data and log file output, available for each device.
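For teams that prefer to script this rather than click through the console, here is a minimal sketch using boto3. The ARNs are placeholders, and the project, uploaded app and device pool are assumed to already exist:

```python
import boto3

# Device Farm is served out of the us-west-2 region.
df = boto3.client("devicefarm", region_name="us-west-2")

# Placeholder ARNs for an existing project, an uploaded .apk/.ipa and a device pool.
run = df.schedule_run(
    projectArn="arn:aws:devicefarm:us-west-2:111122223333:project:EXAMPLE",
    appArn="arn:aws:devicefarm:us-west-2:111122223333:upload:EXAMPLE",
    devicePoolArn="arn:aws:devicefarm:us-west-2:111122223333:devicepool:EXAMPLE",
    name="nightly-compatibility-run",
    # The built-in fuzz suite; custom suites use types such as APPIUM_PYTHON instead.
    test={"type": "BUILTIN_FUZZ"},
)["run"]

print(run["arn"], run["status"])  # poll get_run() later for per-device results
```

Once the run completes, the same API exposes the aggregate and per-device results described above.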

[Screenshots: AWS Device Farm run configuration]

Device Farm doesn’t replace the need for in-field beta testing and mobile app instrumentation to measure real-world usage, performance and failures. However, with thorough, well-crafted test suites and a diverse mix of device types, it promises to dramatically improve the end-user experience by eliminating problems that only manifest when running on actual hardware instead of an IDE simulator.

Developers can automate and schedule tests using the Device Farm API or via Jenkins using the AWS plugin. Like every AWS service, pricing is usage based; the metric is the total test time for each device, at $0.17 per device minute. By judiciously selecting the device pool, this is much cheaper than buying and configuring the actual hardware.
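To put the per-device-minute rate in perspective, here is a rough back-of-the-envelope estimate. The pool size and test duration below are made-up figures for illustration:

```python
# Illustrative figures only; substitute your own pool size and test duration.
devices = 50              # devices in the pool
minutes_per_device = 12   # average test time per device
rate = 0.17               # USD per device minute (rate quoted above)

print(f"Estimated run cost: ${devices * minutes_per_device * rate:,.2f}")  # -> $102.00
```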

Along with Mobile Services for backend infrastructure, Device Farm makes a compelling mobile app development platform, particularly for organizations already using AWS for website and app development.

To learn more about AWS Device Farm or to get started on your Digital Marketing initiatives, contact us.

-2nd Watch blog by Kurt Marko


Introducing Amazon Aurora

When you think of AWS, the first things that come to mind are scalability, high availability, and performance. AWS is now bringing the same qualities to the relational database by offering Aurora through Amazon RDS. As announced at AWS re:Invent 2014, Amazon Aurora is a fully managed relational database engine that is feature compatible with MySQL 5.6. Though it is currently in preview, it promises the performance and availability of high-end commercial databases, with built-in scalability and without the administrative overhead.

To create Aurora, AWS took a service-oriented approach, making individual layers such as storage, logging and caching scale out while keeping the SQL and transaction layers on the core MySQL engine.

So, let’s talk about storage. Aurora can automatically grow its storage volume in 10 GB chunks, up to 64 TB. This eliminates the need to plan for data growth upfront or to intervene manually when storage needs to be added. AWS describes this as happening seamlessly, without any downtime or performance hit.

Aurora’s storage volume is automatically replicated across three AWS Availability Zones (AZs), with two copies of the data in each AZ. Aurora uses quorum writes and reads across these copies. It can handle the loss of two copies without affecting write availability and up to three copies without affecting read availability. These SSD-backed, multi-tenant 10 GB storage chunks allow concurrent access and reduce hot spots, making the volume fault tolerant. It is also self-healing, as data blocks are continuously scanned for errors and repaired automatically.

Unlike a traditional database, which has to replay the redo log since the last checkpoint (a single-threaded process in MySQL), Aurora’s underlying storage replays redo records on demand, in a parallel, distributed and asynchronous fashion. This allows it to recover from a crash almost instantaneously.

An Aurora database can have up to 15 Aurora Replicas. Since the master and replicas share the same underlying storage, you can fail over with no data loss. Also, there’s very little load on the master since there’s no log replay, resulting in minimal replica lag of approximately 10 to 20 milliseconds. The cache lives outside the database process, which allows it to survive a database restart; as a result, operations resume much faster because there is no need to warm the cache. Backups are automatic, incremental and continuous, enabling point-in-time recovery up to the last five minutes.
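Because replicas attach to that shared storage volume, adding one is a single API call. A minimal sketch with boto3, assuming a hypothetical cluster named my-aurora-cluster already exists:

```python
import boto3

rds = boto3.client("rds")

# Add an Aurora Replica to an existing (hypothetical) cluster; the replica shares
# the cluster's storage volume, so no bulk data copy is involved.
rds.create_db_instance(
    DBInstanceIdentifier="aurora-replica-1",
    DBInstanceClass="db.r3.large",
    Engine="aurora",
    DBClusterIdentifier="my-aurora-cluster",
)
```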

A newly added feature of Aurora is its ability to simulate the failure of a node, disk or networking component using SQL commands. This allows you to test the high availability and scaling features of your application.
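As a sketch of what that looks like in practice, the fault-injection statements are issued like any other SQL. The endpoint and credentials below are hypothetical, and the statement follows the syntax in the Aurora fault-injection documentation:

```python
import pymysql

# Hypothetical Aurora endpoint and credentials.
conn = pymysql.connect(
    host="my-aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    user="admin",
    password="REDACTED",
)
try:
    with conn.cursor() as cur:
        # Simulate a crash of the instance this session is connected to;
        # expect the connection to drop once the crash fires.
        cur.execute("ALTER SYSTEM CRASH INSTANCE;")
finally:
    conn.close()
```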

According to the Aurora FAQ, it is five times faster than a stock version of MySQL: using SysBench, “Aurora delivers over 500,000 selects/sec and 100,000 updates/sec running the same benchmark on the same hardware.”

While Aurora may cost slightly more per instance than RDS for MySQL (based on US-EAST-1 pricing), the features may justify it, and you only pay for the storage you consume, at $0.10 per GB-month, and for I/Os, at $0.20 per million requests.
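A quick, purely illustrative estimate of the consumption-based portion of the bill:

```python
# Illustrative figures only; plug in your own storage footprint and I/O volume.
storage_gb = 500             # consumed storage
storage_rate = 0.10          # USD per GB-month
io_requests = 200_000_000    # I/O requests per month
io_rate = 0.20               # USD per million requests

monthly = storage_gb * storage_rate + (io_requests / 1_000_000) * io_rate
print(f"Estimated storage + I/O cost: ${monthly:,.2f} per month")  # -> $90.00
```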

It’s exciting to see the challenges of scaling the relational database being addressed by AWS. To learn more, you can sign up for a preview at http://aws.amazon.com/rds/aurora/preview/.

2nd Watch specializes in helping our customers with legacy investments in Oracle achieve successful migrations to Aurora.  Our Oracle-certified experts understand the specialized features, high availability and performance capabilities that proprietary components such as RAC, ASM and Data Guard provide and are skilled in delivering low-risk migration roadmaps.  We also offer schema and PL/SQL migration guidance, client tool validation, bulk data upload and ETL integration services. To learn more, contact us.

Ali Kugshia, Sr. Cloud Engineer


Planning for Cost Management with Amazon Web Services

As firms progress through the transition from traditional IT to the AWS Cloud, there is often a moment of fear and anxiety related to managing cost. The integration and planning group has done an excellent job of selecting the right consulting partner. Contracts have been negotiated by legal and procurement. Capital funding has been allocated to cover the cost of application migrations. Designs are accepted and the project manager has laid out the critical path to success. Then at the last hour, just before production launch, the finance team chimes in – “How are we going to account for each application’s monthly usage?”

So much planning and preparation goes into integration because we’ve gone through this process with each new application. But moving to the public cloud presents a new challenge, one that’s easily tackled with a well-developed model for managing cost in a usage-based environment.

AWS allows us to deploy IaaS (Infrastructure as a Service), and that infrastructure is shared across all of our applications in the cloud. With the proper implementation of AWS Resource Tags, cloud resources can be associated with unique applications, departments, environments, locations and any other category for which cost-reporting is essential.
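As a minimal sketch of what that looks like with boto3 (the instance ID and tag values are hypothetical; the keys should match your own naming convention):

```python
import boto3

ec2 = boto3.client("ec2")

# Apply cost-allocation tags to a (hypothetical) instance.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[
        {"Key": "Application", "Value": "crm-portal"},
        {"Key": "Department", "Value": "marketing"},
        {"Key": "Environment", "Value": "production"},
    ],
)
```

Once these keys are activated as cost allocation tags in the billing console, they appear as columns in the detailed billing report, which is what makes per-application reporting possible.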

Firms must have the right dialog in the design process with their cloud technology partner. Here’s an outline of the five phases of the 2nd Watch AWS Tagging Methodology, which has been used to help companies plan for cloud-based financial management:

Phase 1: Ask Critical Questions – Begin by asking Critical Questions that you want answered about utilization, spending and resource management. Consider ongoing projects, production applications, and development environments. Critical Questions can include: Which AWS resources are affecting my overall monthly bill? What is the running cost of my high availability features like warm standby, pilot light or geo-redundancy? How do architectural changes or application upgrades change my monthly usage?

Phase 2: Develop a Tagging Strategy – The Cloud Architect will interpret these questions and develop a tagging strategy to meet your needs. The strategy then becomes a component of the Detailed Design and is later implemented during the build project. During this stage it’s important to consider the enforcement of standards within the organization. Configuration drift occurs when other groups don’t use the standardized AWS Resource Tags or the naming convention that defines them; later, when it’s time for reporting, this creates problems for accounting and finance.

Phase 3: Determine Which AWS Resources Are In Scope – Solicit feedback from your internal accounting department and application owners. Create a list of AWS Resources and applications that need to be accounted for. Refer frequently to AWS online documentation because the list of taggable resource types is updated often.

Phase 4: Define How Chargebacks and Showbacks Will Be Used – Determine who will receive usage-based reports for each application, environment or location. Some firms have adopted a Chargeback model in which the accounting team bills the internal groups who have contributed to the month’s AWS usage. Others have used these reports for Showback only, where the usage & cost data is used for planning, forecasting and event correlation. 2W Insight offers a robust reporting engine to allow 2nd Watch customers the ability to create, schedule and send reports to stakeholders.

Phase 5: Make Regular Adjustments For Optimization – Talk to your Cloud Architect about automation to eliminate configuration drift. Incorporate AWS tagging standards into your cloud formation templates. Regularly review tag keys and values to identify non-standard use cases. And solicit the feedback of your accounting team to ensure the reports are useful and accurate.
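A lightweight way to catch drift between reviews is to script the audit. Here is a sketch with boto3, assuming a hypothetical set of required tag keys:

```python
import boto3

REQUIRED_TAGS = {"Application", "Environment", "CostCenter"}  # hypothetical naming convention

ec2 = boto3.client("ec2")
paginator = ec2.get_paginator("describe_instances")

# Report every instance that is missing one of the required tag keys.
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            missing = REQUIRED_TAGS - tags
            if missing:
                print(f"{instance['InstanceId']} missing tags: {', '.join(sorted(missing))}")
```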

Working with an AWS Premier Consulting Partner is critical to designing for best practices like cost management. Challenge your partner and ask for real-world examples of AWS Resource Tagging strategies and cost reports. Planning to manage costs in the cloud is not a step that should be skipped. It’s critical to incorporate your financial reporting objectives into the technical design early, so that they can become established, standardized processes for each new application in the cloud.

For more information, please reach out to Zachary Bonugli zbonugli@2ndwatch.com.

– Zachary Bonugli, Global Account Manager


Reevaluate your Virtual Private Cloud (VPC)

With the New Year come the resolutions. When the clock struck midnight on January 1st, 2015, many people turned the page on 2014 and promised themselves an act of self-improvement. Often it’s eating healthier or going to the gym more regularly. With the New Year, I thought I could put a spin on a typical New Year’s resolution and make it about AWS.

How could you improve on your AWS environment? Without getting too overzealous, let’s focus on the fundamental AWS network infrastructure, specifically an AWS Virtual Private Cloud (VPC). An AWS VPC is a logically isolated, user controlled, piece of the AWS Cloud where you can launch and use other AWS resources. You can think of it as your own slice of AWS network infrastructure that you can fully customize and tailor to your needs. So let’s talk about VPCs and how you can improve on yours.
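To make that concrete, here is a minimal boto3 sketch that carves out a VPC and a subnet. The CIDR ranges and Availability Zone are arbitrary examples:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a VPC with a /16 of private address space, then a /24 subnet in one AZ.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(
    VpcId=vpc_id,
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="us-east-1a",
)["Subnet"]["SubnetId"]

print(vpc_id, subnet_id)
```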

  • Make sure you’re using VPCs! The simple act of implementing a VPC can put you way ahead of the game. VPCs provide a ton of customization options from defining your own VPC size via IP addressing; to controlling subnets, route tables, and gateways for controlling network flow between your resources; to even defining fine-grained security using security groups and network ACLs. With VPCs you can control things that simply can’t be done when using EC2-Classic.
  • Are you using multiple Availability Zones (AZs)? An AZ is a distinct, isolated location engineered to be insulated from failures in other AZs. Make sure you take advantage of multiple AZs in your VPC. Often, instances are just launched into a VPC with no rhyme or reason. It is great practice to use the low-latency nature and engineered isolation of AZs to facilitate high availability or disaster recovery scenarios.
  • Are you using VPC security groups? “Of course I am.” Are you using network ACLs? “I know they are available, but I don’t use them.” Are you using AWS Identity and Access Management (IAM) to secure access to your VPCs? “Huh, what’s an IAM?!” Don’t fret; most environments don’t take advantage of all the tools available for securing a VPC. However, now is the time to reevaluate your VPC and see whether you can, or even should, use these security options. Security groups are ingress and egress firewall rules you place on individual AWS resources in your VPC and are one of the fundamental building blocks of an environment. Now may be a good time to audit your security groups to make sure you’re following the principle of least privilege, i.e. not allowing any access or rules that are not absolutely needed (a minimal audit sketch follows this list). Network ACLs work at the subnet level and may be useful in some cases. In larger environments, IAM may be a good idea if you want more control over how resources interact with your VPC. In any case, there is never a bad time to reevaluate the security of your environment, particularly your VPC.
  • Clean up your VPC! One of the most common issues in AWS environments is resources that are not being used. Now may be a good time to audit your VPC, take note of what instances you have out there, and make sure you don’t have resources racking up unnecessary charges. It’s a good idea to account for all instances and leftover EBS volumes, and even to clean up old AMIs that may be sitting in your account. There are also things like extra EIPs, security groups, and subnets that can be cleaned up. One great tool to use is AWS Trusted Advisor. Per the AWS service page, “Trusted Advisor inspects your AWS environment and finds opportunities to save money, improve system performance and reliability, or help close security gaps.”
  • Bring your VPC home. AWS, being a public cloud provider, allows you to create VPCs that are isolated from everything, including your on-premises LAN or datacenter. Because of this isolation, all network activity between the user and their VPC happens over the internet. One of the great things about VPCs is the many connectivity options they provide. Now is the time to reevaluate how you use VPCs in conjunction with your local LAN environment. Maybe it is time to set up a VPN and turn your environment into a hybrid cloud and physical environment, allowing all communication to pass over a private network. You can even take it one step further by incorporating AWS Direct Connect, a service that allows you to establish private connectivity between AWS and your datacenter, office, or colocation environment. This can help reduce your network costs, increase bandwidth throughput, and provide a more consistent overall network experience.
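As referenced above, here is a minimal sketch of a least-privilege audit pass over security groups, flagging any rule open to the world:

```python
import boto3

ec2 = boto3.client("ec2")

# Flag every security group rule that allows ingress from 0.0.0.0/0.
for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group.get("IpPermissions", []):
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                port = rule.get("FromPort", "all")
                print(f"{group['GroupId']} ({group['GroupName']}): port {port} open to the world")
```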


These are just a few things you can do when reevaluating your AWS VPC for the New Year. By following these guidelines, you can gain efficiencies you didn’t have before and rest assured that your environment is in the best shape possible for all your new AWS goals of 2015.

-Derek Baltazar, Senior Cloud Engineer


Decommissioning Yamaha's Data Center

If you missed this breakout session at AWS re:Invent 2014, don’t miss the recording. Learn how Yamaha and 2nd Watch migrated Yamaha’s data center to Amazon Web Services in this AWS re:Invent 2014 breakout session.

When Yamaha Corporation needed to reduce infrastructure cost, AWS was the solution. The following video talks about how Yamaha and 2nd Watch migrated mission-critical applications, configured Availability Zones for data replication, configured disaster recovery for Oracle E-Business Suite, and designed file system backups for Yamaha’s environment on AWS.

Information on AWS re:Invent

AWS re:Invent is a learning conference that offers 3 days of technical content so attendees can dive deeper into the AWS cloud computing platform. The event is ideal for developers, architects, and technical decision makers – as well as AWS partners, press, and analysts interested in cloud computing. The majority of the conference content is focused on technical deep dives for existing AWS customers, but there is also content covering new service announcements, overviews of existing services, and content for executive decision makers.

The next AWS re:Invent will be held October 5-9, 2015 at the Venetian Hotel in Las Vegas, NV. For more information click here: AWS re:Invent 2015


Amazon Updates Reserved Instances Model

In an effort to simplify the Reserved Instances (RI) model, AWS announced yesterday a change in the model based on customer feedback and purchasing patterns.

AWS will move from three types of RIs – Heavy, Medium and Light Utilization – to a single type with three payment options. All continue to provide capacity assurance and discounts compared to On-Demand prices.

The three new payment options give you flexibility to pay for the entire RI upfront, a portion of the RI upfront and a portion over the term, or nothing upfront and the entire RI over the course of the term.

What does this mean for you? These changes will really benefit predictable workloads that are running >30% of the time. In cases where usage is less consistent, it may be better for companies to stick with On-Demand rates. We’ve developed some related research on usage trends. Meanwhile, our role as a top AWS partner continues to be simplifying procurement of all AWS products and services.
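The exact breakeven point depends on the discount for the instance type and term you choose. A purely illustrative sketch of the calculation, with both hourly rates below being hypothetical:

```python
# Hypothetical rates; substitute real On-Demand and amortized RI prices for your instance type.
on_demand_hourly = 0.10      # USD/hour
ri_effective_hourly = 0.031  # USD/hour, total RI cost spread across every hour of the term

# An RI is paid for whether the instance runs or not, so it wins once
# utilization exceeds the ratio of the two rates.
breakeven = ri_effective_hourly / on_demand_hourly
print(f"RI pays off once the instance runs more than {breakeven:.0%} of the time")  # -> 31%
```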

Download the AWS Usage infographic

Read more about the new RI model.
