AWS CFT vs. Terraform: Advantages and Disadvantages

UPDATE:  AWS CloudFormation now supports YAML.  To be sure, this is a huge improvement over JSON in terms of formatting and use of comments.  This will also simplify Windows and Linux userdata scripts.  So for teams that are just starting with AWS and don’t need any of the additional benefits of Terraform, YAML would be the best place to start.  Existing teams will likely still have a cache of JSON templates that they would need to recreate and should consider whether the other benefits of Terraform warrant a move away from CFT.

If you’re familiar with AWS CloudFormation Templates (CFTs) and how they work but have been considering Terraform, this guide is for you.  This basic guide will introduce you to some of the advantages and disadvantages of Terraform in comparison to CFT so you can determine whether you should investigate further and try it yourself.  If you don’t have a rudimentary familiarity with Terraform, head over to https://www.terraform.io/intro/index.html for a quick overview.

Advantages

Formatting – This is far and away the strongest advantage of Terraform.  JSON is not a coding language, and it shows.  It’s common for CFTs to be 3000 lines long, and most of that is just JSON braces and brackets.  Terraform has a simple (but custom) language, HCL, for creating templates and makes it easy to document and comment your code.  Whole sections can be moved to a folder structure for design and clarity.  This makes your infrastructure feel a bit more like actual code.  Lastly, you won’t need to convert userdata bash and PowerShell scripts to JSON only to deploy and discover you forgot one last escaping backslash.  Userdata scripts can be written in separate files exactly as you would write them on the server locally.  For an example, here’s a comparison of JSON to Terraform for creating an instance:

Instance in CFT


"StagingInstance": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "UserData": {
      "Fn::Base64": {
        "Fn::Join": ["", [
          "#!/bin/bash -v\n",
          "yum update -y aws*\n",
          "yum update --sec-severity=critical -y\n",
          "yum install -y aws-cfn-bootstrap\n",
          "# download data and install file\n",
          "/opt/aws/bin/cfn-init -s ", {
            "Ref": "AWS::StackName"
          }, " -r StagingInstance ",
          "    --region ", {
            "Ref": "AWS::Region"
          },
          " || error_exit 'Failed to run cfn-init'\n"
        ]]
      }
    },
    "SecurityGroupIds": [{
      "Ref": "StagingSecurityGroup"
    }],
    "ImageId": {
      "Ref": "StagingAMI"
    },
    "KeyName": {
      "Ref": "InstancePrivateKeyName"
    },
    "InstanceType": {
      "Ref": "StagingInstanceType"
    },
    "IamInstanceProfile": {
      "Ref": "StagingInstanceProfile"
    },
    "Tags": [{
      "Key": "Name",
      "Value": {
        "Fn::Join": ["-", [
          "staging", {
            "Ref": "AWS::StackName"
          }, "app-instance"
        ]]
      }
    }],
    "SubnetId": {
      "Ref": "PrivateSubnet1"
    }
  }
}

Instance in Terraform


# Create the staging instance
resource "aws_instance" "staging" {
  ami                    = "${var.staging_instance_ami}"
  instance_type          = "${var.staging_instance_type}"
  subnet_id              = "${var.private_subnet_id_1}"
  vpc_security_group_ids = ["${aws_security_group.staging.id}"]
  iam_instance_profile   = "${aws_iam_instance_profile.staging.name}"
  key_name               = "${var.instance_private_key_name}"

  tags {
    Name = "staging-${var.stack_name}-instance"
  }

  user_data = "${file("instances/staginguserdatascript.sh")}"
}

Managing State – This is the second advantage for Terraform.  Terraform knows the state of the environment from the last run, so you can run “terraform plan” and see exactly what has changed for the items that Terraform has created.  With an update to a CFT, you only know that an item will be “Modified,” but not how.  At that point you’ll need to audit the modified item and manually compare it to the existing CFT to determine what needs to be updated.
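
For example, if you changed only the instance type of the staging instance defined above, a “terraform plan” run would call out that one attribute.  The output below is a trimmed, hypothetical sketch of the older plan format (the instance type values are made up for illustration):

$ terraform plan
~ aws_instance.staging
    instance_type: "t2.small" => "t2.medium"

Plan: 0 to add, 1 to change, 0 to destroy.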

Multi-Provider Support – Depending on how you utilize AWS and other providers, this can be a very big deal.  Terraform gives you a centralized location to manage multiple providers.  Maybe your DNS is in Azure but your servers are in AWS.  You could build an ELB and update the Azure DNS all in the same run.  Or maybe you want to update your AWS infrastructure and also update your DataDog monitoring too.  If you needed a provider they didn’t have, you could presumably add it since the code is open source.
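
As a rough sketch of what that looks like in practice, the following configuration manages an AWS ELB and a DataDog monitor in the same run.  The variable names, monitor query, and region here are hypothetical and would need to match your own setup:

# Hypothetical multi-provider example: AWS and DataDog in one configuration
provider "aws" {
  region = "us-west-2"
}

provider "datadog" {
  api_key = "${var.datadog_api_key}"
  app_key = "${var.datadog_app_key}"
}

# An ELB in front of the staging instance
resource "aws_elb" "staging" {
  name            = "staging-${var.stack_name}-elb"
  subnets         = ["${var.private_subnet_id_1}"]
  security_groups = ["${aws_security_group.staging.id}"]
  instances       = ["${aws_instance.staging.id}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}

# A DataDog monitor updated in the same "terraform apply"
resource "datadog_monitor" "staging_cpu" {
  name    = "Staging CPU high"
  type    = "metric alert"
  message = "CPU is high on staging instances. @ops-team"
  query   = "avg(last_5m):avg:system.cpu.user{env:staging} > 90"
}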

Short learning curve – While HashiCorp did introduce custom formatting for Terraform templates, the CFT and API nomenclature is *mostly* preserved.  For example, when creating an instance in CFT you need an InstanceType and KeyName. In Terraform these are instance_type and key_name.  Words are separated by underscores and all lowercase.  This makes it somewhat easy to migrate existing CFTs.  All told, it took about a day of experimentation with Terraform to feel comfortable.

Open Source – The core Terraform tool is open source, which brings all the good and bad to the table that you normally associate with open source.  As mentioned previously, if you have GoLang resources, the world is your oyster.  Terraform can be made to do whatever you want it to do, and contributing back to the repository will enhance it for everyone else.  You can check out the git repo to see that it has pretty active development.

Challenges

Cost – The open source version of Terraform is free, but the enterprise version is expensive.  Of course the enterprise version adds a lot of bells and whistles, but I would recommend doing a serious evaluation to determine if they are worth the cost.

No Rollback – Rolling back a CFT deployment or upgrade is sometimes a blessing and sometimes a curse, but with CFT at least you have the option.  With Terraform, there is never an automatic rollback.  You have to figure out what went wrong and plow forward, or first roll back your code and then re-deploy.  Either way it can be messy.  However, rollback for AWS CFT can be messy too, especially when changes are introduced that make CFT deployment and reconfiguration incompatible.  This invariably leads to the creation of an AWS support ticket to make adjustments to the CFT that are not possible otherwise.

Tight Coupling – CFT is “tightly coupled” with AWS, while Terraform is not.  This is the yang to the open source yin.  Amazon has a dedicated team that continues to improve and update CFT.  They won’t just focus on the most popular items, and they have access to internal resources to vet and prove out their approach.

Conclusion

While this article only scratches the surface of the differences between utilizing AWS CFT and Terraform, it provides a good starting point when evaluating both.  If you’re looking for a better “infrastructure as code,” state management, or multi-provider support, Terraform is definitely worth a look.  We are here to help our customers, so if you need help developing a cloud-first strategy, contact us here.

-Coin Graham, Sr Cloud Consultant, 2nd Watch

A Step-by-Step Guide on Using AWS Lambda-Backed Custom Resources with Amazon CFTs

Amazon CloudFormation Template (CFT) custom resources allow for additional flexibility in building Amazon environments. These can be utilized in a number of ways to enhance automation and make deployment easier. Generally speaking, you’ll want to engage custom resources when you need information that is normally not available to the CloudFormation Template in order to complete the processing of the template. This could be a Security Group in another account, the most up-to-date AMI, or a Spot Price analysis. Additionally, it’s useful for creating functionality in CFT that doesn’t exist yet, like verifying that the database size you’ve chosen is valid, or checking if you have a valid password (our example). You won’t want to use it for anything “one-off,” as it takes time to develop and process. You will also want to avoid using it for long-running processes, since AWS CloudFormation will time out if the internal processes take too long.

To give you an easy example of how this is set up, we’re going to build an AWS Lambda-backed custom resource that will verify that a password is correct by having you type it in twice. If the passwords you type don’t match, the CFT will quit processing and roll back. This is a bit of functionality that’s missing from AWS CFT and can be very frustrating once your environment is deployed and you realize you fat-fingered the password parameter. The basic areas we’ll be focusing on are AWS CloudFormation and AWS Lambda. This guide assumes you’re familiar with both of these already, but if you’re not, learn more about AWS Lambda here or AWS CFTs here.

You want to start with the CFT that you’re looking to add the custom resource to and make sure it is functional. It’s always best to start from a place of known good. Adding a Lambda-backed custom resource to a CFT consists of four basic parts:

1. IAM Role for Lambda Execution: This is the role that will be assigned to your Lambda function. You will utilize this role to give the Lambda function permissions to other parts of AWS as necessary. If you don’t need to add any permissions, just create a role that allows Lambda to write out its logs.

"LambdaExecutionRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {
          "Service": ["lambda.amazonaws.com"]
        },
        "Action": ["sts:AssumeRole"]
      }]
    },
    "Policies": [{
      "PolicyName": "lambdalogtocloudwatch",
      "PolicyDocument": {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": ["logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"],
          "Resource": "arn:aws:logs:*:*:*"
        }]
      }
    }]
  }
}

2. The Lambda Function: There are two ways of introducing your Lambda function into your CFT. If it is small, you can embed your function directly into the CFT by using the “ZipFile” option under the “Code” property of the “AWS::Lambda::Function” resource. Or you can use the “S3Bucket” option and reference an S3 bucket that has your code already present in zip format. Note that if you use the S3 bucket option, the user that deploys the CFT will need permissions to read from the bucket, not the Lambda function. Next you’ll set your “Handler,” “Runtime,” “Timeout,” and “Role” (which should reference the ARN of the role you created previously). If you are using the ZipFile option, your handler is the default for the runtime.

"CheckPasswordsFunction": {
  "Type": "AWS::Lambda::Function",
  "Properties": {
    "Code": {
      "ZipFile": {
        "Fn::Join": ["\n", [
          "var response = require('cfn-response');",
          "exports.handler = function(event, context) {",
          " if (event.RequestType == 'Delete') {",
          " response.send(event, context, response.SUCCESS);",
          " return;", " }",
          " var password = event.ResourceProperties.Password;",
          " var confpassword = event.ResourceProperties.ConfirmPassword;",
          " var responseData = {};",
          " if (password == confpassword) {",
          " responseData = {'passwordcheck': 'Password Valid!'};",
          " response.send(event, context, response.SUCCESS, responseData);",
          " } else {",
          " responseData = {Error: 'Passwords do not match'};",
          " console.log(responseData.Error);",
          " responseData = {'passwordcheck': 'Password Invalid!'};",
          " response.send(event, context, response.FAILED, responseData);",
          " }", "};"
        ]]
      }
    },
    "Handler": "index.handler",
    "Runtime": "nodejs",
    "Timeout": "30",
    "Role": {
      "Fn::GetAtt": [
        "LambdaExecutionRole", "Arn"
      ]
    }
  }
}
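
If your function were too large to embed, the same resource could instead point at a zip file in S3 using the “S3Bucket” option mentioned above. Here’s a rough sketch of what that would look like; the bucket name, key, and handler file name are placeholders, not values from this example:

"CheckPasswordsFunction": {
  "Type": "AWS::Lambda::Function",
  "Properties": {
    "Code": {
      "S3Bucket": "my-lambda-code-bucket",
      "S3Key": "checkpasswords.zip"
    },
    "Handler": "checkpasswords.handler",
    "Runtime": "nodejs",
    "Timeout": "30",
    "Role": {
      "Fn::GetAtt": [
        "LambdaExecutionRole", "Arn"
      ]
    }
  }
}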

3. The Lambda Callout: The Lambda callout is where you pass the variables from the CFT to your Lambda function. It’s important to name these appropriately and consider what effect case and naming conventions will have on the runtime you’re using. The “ServiceToken” property is the ARN of the Lambda function you just created, and the rest of the properties are the variables you’re passing through.

"TestPasswords": {
  "Type": "Custom::LambdaCallout",
  "Properties": {
    "ServiceToken": {
      "Fn::GetAtt": [
        "CheckPasswordsFunction",
        "Arn"
      ]
    },
    "Password": {
      "Ref": "Password"
    },
    "ConfirmPassword": {
      "Ref": "ConfirmPassword"
    }
  }
}

4. The Response: There are two key parts of the response from the custom resource, and this applies to non-Lambda custom resources too. The first is the “Status” of the response. If you return a status of “FAILED,” the CFT will short circuit and roll back. If you return a status of “SUCCESS,” the CFT will continue to process. This is important because sometimes you’ll want to send SUCCESS even if your Lambda didn’t produce the desired result. In the case of our PassCheck, we wanted to stop the CFT from moving forward to save time; knowing at the end that the passwords were mismatched would not be very valuable. The second important piece of the response is the “Data.” This is how you pass information back to CloudFormation to process the result. You’ll set the “Data” variable in your code as JSON and reference the JSON key/value pair back inside the CFT. You’ll use the “Fn::GetAtt” option to reference the Lambda callout you created previously and the key of the JSON data you’re interested in.

"Outputs": {
  "Results": {
    "Description": "Test Passwords Result",
    "Value": {
      "Fn::GetAtt": ["TestPasswords",
        "passwordcheck"
      ]
    }
  }
}

As far as your Lambda function is concerned, you may or may not need to reference variables sent from the CloudFormation Template. These variables will be in the “event”->”ResourceProperties” dictionary/hash. For example:

NodeJs

var password = event.ResourceProperties.Password

Python

password = event['ResourceProperties']['Password']

And similarly, once your function has completed processing, you might need to send a response back to the CFT. Thankfully, AWS has created some wrappers to make the response easier. For nodejs it is called “cfn-response” but is only available when using the “ZipFile” option. There is a similar package for Python, but you’ll need to bundle it with your Lambda and deploy from S3. Sending information back from your Lambda is as easy as setting the “Data” variable to properly formatted JSON and sending it back.

...
if (password == confpassword) {
  responseData = {'passwordcheck': 'Password Valid!'};
  response.send(event, context, response.SUCCESS, responseData);
}
...

That’s it. Creating a Lambda-backed custom resource can add all kinds of additional functionality and options to your CloudFormation Templates. Feel free to download the whole CFT here and test it out or use it to learn more, or Contact Us at 2nd Watch for help getting started.

-Coin Graham, Sr Cloud Engineer, 2nd Watch

"Taste the Feeling" of Innovation

Coca-Cola North America Information Technology Leads the Pack

On May 4th 2016, Coca-Cola North America Information Technology and Splunk Inc. (NASDAQ: SPLK), provider of the leading software platform for real-time Operational Intelligence, announced that Coca-Cola North America Information Technology was named to this year’s InformationWeek Elite 100, a list of the top business technology innovators in the United States. Coca-Cola North America Information Technology, a Splunk customer, is being honored for the company’s marketing transformation initiative.

Coca-Cola North America Information Technology division is a leader in migrating to the cloud and leveraging cloud native technologies.  The division re-architected its digital marketing platform to leverage cloud technology to create business insights and flexibility and to take advantage of scale and innovations of the public cloud offered by Amazon Web Services.

“The success you see from our digital marketing transformation is due to our intentional focus on innovation and agility as well as results, our team’s ingenuity and our partnership with top technology companies like Splunk,” said Michelle Routh, chief information officer, Coca-Cola North America. “We recognized a chance for IT to collaborate much more closely with the marketing arm of Coca-Cola North America to bring an unparalleled digital marketing experience to our business and our customers. We have moved technologies to the cloud to scale our campaigns, used big data analytics, beacon and Internet of Things technologies to provide our customers with unique, tailored experiences.”

Coca-Cola North America Information Technology is one of the most innovative customers we have seen today. They are able to analyze data that was previously not available to them through the use of Splunk® Enterprise software. Business insights include trending flavor mixes, usage data and geographical behavior on its popular Freestyle machines to help improve fulfillment and marketing offers.

We congratulate both Coca-Cola North America Information Technology division and Splunk for being named in InformationWeek Elite 100! Read more

-Jeff Aden, EVP Strategic Business Development & Marketing, Co-Founder

Why agile is fragile and how MSPs can help

Midmarket and enterprise companies looking to transform their IT operations to new models based on the public cloud and Agile/DevOps have a long, arduous journey. Moving from internally managed IT departments with predictable needs to ones that must be flexible and run on demand is one of the great paradigm shifts for CIOs today.

It requires new skills and new ways of working including a fundamental reorganisation of IT organisations. Meanwhile, IT must continue with business as usual, supporting core systems and processes for productivity and operations.

Many companies can’t get there fast enough, which is why the market for service providers specialising in public cloud infrastructure and DevOps is growing. A new crop of MSPs that focus specifically on public cloud infrastructure has appeared in the last few years to address the specific needs of public cloud as it relates to migration, legacy systems, integration, provisioning and configuration, security and financial management.

IT organisations are moving on from testing and prototyping to launching production applications in the cloud; there is often not enough time to ramp up quickly in the new capabilities needed for success. Here’s a look at how MSPs can ease the pain of enterprise public cloud and DevOps initiatives:

Network management:
In the public cloud, network management is handled differently due to the differences in the actual network environment and the fact that you just don’t have the same level of visibility and control as you do in your own data center. MSPs can help by lending a hand of expertise in building and managing secure networks in the public cloud.

Without the help of an MSP, your business will need to do its own homework on how the network works and what tools are effective on the security side of things: heads up, it’s a much different list than in the traditional data center. What do you get out of the box? When do you need third-party software to help? MSPs have years of experience running production workloads in the public cloud and can help you make the right decisions the first time without going through an exhaustive discovery phase.

Design and architecture:
Deploying systems into the cloud requires a mental shift, due to the elastic nature of virtual resources. This reinvents infrastructure design, since instances come and go according to demand and performance needs. IT needs to understand how to automate infrastructure changes according to shifting requirements and risks, such as hardware failures and security configurations. Experienced service providers that have helped companies migrate to the cloud over and again can deliver best practices and reduce risks.

Workflow:
Cloud and DevOps go hand-in-hand due to the joint requirements of frequent iteration, rapid change and continuous integration/development. The processes and tools for CI and CD are still emerging. Doing this well requires not only new, collaborative workflows but working with unfamiliar technologies such as containers.

While AWS has released a new service for managing containers, that’s just one piece of the puzzle. Many companies moving toward DevOps benefit from outside help in training, planning, measuring results and navigating internal barriers to change. Lastly, the automation infrastructure itself (Puppet, Chef, others) requires maintenance and is critical in the security landscape. An MSP can help build and manage this infrastructure so that you can focus on your code.

Security:
Security in the cloud is a shared responsibility. Many customers incorrectly assume that because public cloud providers have excellent security records and deep compliance frameworks for PCI and other regulations, that their infrastructure is secure by default. The reality is that providers do an excellent job of securing the underlying infrastructure but that is where things stop for them and begin for you as a customer.

Most security issues found in the public cloud today relate to misconfigurations. Test and track configuration changes, and validate architectural designs against them. In DevOps, rapid development processes may inadvertently trump security, and using containers and micro-services to speed deployment also introduces security risks. Missteps in the area of security can be long and costly to fix later; an MSP can help mitigate that risk through upfront design and ongoing monitoring and management.

Provisioning and cost management:
Virtual sprawl is no myth. IT teams that for years have used over-provisioning as a stopgap measure to ensure uptime may struggle to adapt to a different approach using on-demand infrastructure. Experts can help make that transition through proper provisioning at the outset as well as applying spend management tools built for the cloud to monitor and predict usage.

One of the best features of public cloud providers is high elasticity, the ability to spin up large amounts of virtual instances at a moment’s notice and then shut them off when you are done using them. The trick here is to remember to shut them off: many development teams claim to work 24×7 but the reality is usually much different. An MSP can set up cost alerting and monitoring and can even leverage tools to help you allocate costs to your heavy users or business units.

Legacy systems:
Large companies often want to move legacy systems to the public cloud to reduce the costly overhead of storage and maintenance. Yet no CIO wants to be accountable for migrating a mission-critical legacy system which later doesn’t perform well or is out of compliance.

Service providers can help evaluate whether a system can be migrated as is, “lift and shift,” or needs to be reconfigured to run in the cloud. CIOs may lean toward handling this task with their internal teams, yet doing so will likely take longer and require significant retraining of staff. There’s also the need to pay close attention to compliance. Experienced MSPs can help navigate financial regulations (Sarbanes-Oxley, PCI), privacy laws (HIPAA) and data management regulations in some sectors that go against the grain of DevOps.

Operations management:
Most IT infrastructure managers who have been around for a while are well versed in VMware-specific tools such as vSphere. Yet unfortunately, most of those operational tools made to support virtualisation software don’t work well, or at all, in the public cloud. There are some cloud-native management tools available now, including those from AWS, yet none of them are clear winners yet.

IT departments are stuck with patching together their own toolsets or developing them from scratch, such as Netflix has done. That’s not always the best use of time and money, depending on your sector. MSPs can take over the operations management function altogether. Customers benefit through the continual learning on industry best practices that the service provider must undertake to effectively manage dozens or hundreds of customers.

Culture clash:
As with any disruptive technology, people are the biggest barrier to change. While human beings are highly adaptable, many of us simply are not comfortable with change. Take a hard look at not just skills but your culture. Do you have the type of organisation where people are willing and able to adapt without threatening to quit? If not, using the services of an MSP might be the path of least friction. Some organisations simply want the benefits of new technologies without needing to understand nor manage every nook and cranny.

Beyond all the above advantages, the MSP partner helps IT organisations move faster by serving as a knowledgeable extension of the IT department. CIOs and their teams can focus on serving the business and its evolving requirements, while the MSP helps ensure that those requirements transition well to the public cloud. Executives who have decided that the public cloud is their future and that DevOps is the way to get there are progressive thinkers who are unafraid to take risks.

Yet that doesn’t mean they should go it alone. Find a partner who you can trust, and move toward your future with an experienced team propping you up all the way. The old adage of “you won’t get fired for hiring <name legacy service provider>” has now changed to “My new MSP got me promoted.”

-Kris Bliesner, CTO

This article was first published on ITProPortal on 11/25/15.

What’s the Plan? A How-To Guide for Preparing Your Cyber Incident Response Program

Last week, we kicked off a four-part blog series with our strategic partner, Alert Logic, that has a focus on the importance of cloud security for Digital Businesses.  This week, Alert Logic has contributed the following blog post as a guide to help digital businesses prepare for—and respond to—cyber incidents.

Evaluating your organization’s cyber security incident response readiness is an important part of your overall security program. But responding to a cyber security incident effectively and efficiently can be a tremendous challenge for most. In most cases, the struggle to keep up during an incident is due to either of the following:

  • The cyber incident response plan has been “shelf-ware” for too long
  • The plan hasn’t been practiced by the incident response team.

Unfortunately, most organizations view cyber incident response as a technical issue—they assume that if a cyber incident response plan is in place and has been reviewed by the “techies,” then the plan is complete. In reality, all these organizations have is a theoretical cyber incident response plan, one with no testing or validation. Cyber incident response plans are much more than a technical issue. In the end, they are about people, process, communication, and even brand protection.

How to ensure your cyber incident response plan works

The key to ensuring your cyber incident response plan works is to practice your plan. You must dedicate time and resources to properly test the plan. Cyber incident response is a “use it or lose it” skill that requires practice. It’s similar to an athlete mastering a specific skill; the athlete must complete numerous repetitions to develop muscle memory to enhance performance. In the same way, the practice (repetitions) of testing your cyber incident response plan will enhance your team’s performance during a real incident.

Steps for testing your plan effectively

 Step 1: Self-Assessment and Basic Walk-Through

An effective methodology for testing your cyber incident response plan begins with a self-assessment and simple walk-through of the plan with limited team members.  Steps should include:

  1. The incident response manager reads through the plan, using the details of a recent data breach to follow the plan. The manager also identifies how the incident was discovered as well as notification processes.
  2. The team follows the triage, containment, eradication, and forensics stages of the plan, identifying any gaps.
  3. The incident response manager walks through the communications process along the way, including recovery and steady-state operations.
  4. The team documents possible modifications, follow-up questions, and clarifications that should be added to the plan.

Step 2: All Hands Walk-Through

The next step after a self-assessment is the walk-through with the entire incident response team. This requires an organized meeting in a conference room and can take between two and four hours, in which a scenario (recent breach) is used to walk through the incident response document. These working sessions are ideal for filling in the gaps and clarifying expectations for things like detection, analysis, required tools, and resources.  Organizations with successful incident response plans will also include their executive teams during this type of test.  The executive team’s participation highlights priorities from a business and resource perspective and is less focused on the technical aspects of the incident.

Step 3: Live Exercise

The most important step in evaluating your incident response plan is to conduct a live exercise.  A live exercise is a customized training event for the purpose of sharpening your incident response team’s skills in a safe, non-production environment. It isn’t a penetration test; it’s an incident response exercise designed to test your team’s ability to adapt and execute the plan during a live cyber attack.  It’s essentially the equivalent of a pre-season game—the team participates, but it doesn’t count in the win/loss column.  The value of a live exercise is the plan evaluation and team experience. The lessons learned usually prove to be the most valuable to the maturation of your cyber incident response plan.

Ultimately, preparedness is not just about having an incident response plan; it’s about knowing the plan, practicing the plan, and understanding it’s a work in progress. The development of an excellent incident response plan includes involvement and validation from the incident response team as well as a commitment to a repetitive cycle of practice and refinement.

Learn more about 2W Managed Cloud Security and how our partnership with Alert Logic can ensure your environment’s security.

Article contributed by Alert Logic

dreamBIG continues!

On Friday our day was packed with unpacking the supplies we brought from 2W, setting up the laptops we donated, greeting women, and holding babies!  We brought 4 suitcases packed with supplies for the school and children’s home and 4 new computers.  We spent time setting up the computers with parental controls and showing them how to use the computers.  We also took new books to the library in the school and spent time speaking English to the kids in the school.

The women’s conference kicked off and we welcomed 90 women.  They worshiped and learned about the Lord and had amazing fellowship.  Six of us were responsible for all 50 kids while the Mamas went to the women’s conference.  Man was that an adventure!  These kids have so much energy!  We took them to the open-air covered sports arena on the grounds and played soccer (futbol) and tag.  We ended the night with a traditional Guatemalan meal together with the women and let off lanterns over the mountain and lit sparklers!! It was the perfect ending to a perfect day!

Saturday was the final day of the women’s conference.  We continued with more messages, workshops, testimonies, & worship.  Again, we helped with the children and allowed the Mamas to have a much needed/well deserved break.  We played futbol & basketball with the big kids and painted the little girls’ nails.

Today is our final day at Eagle’s Nest.  We will attend their church and then start our journey down to Pana, 15 minutes and 2,000 feet below Eagle’s Nest, for some zip lining and a boat tour on Lake Atitlan.

We are very sad to leave our new friends and the Eagle’s Nest family.
