A Few Gotchas About Going Multi-Cloud with AWS, Microsoft Azure and HashiCorp tools.

One of the more interesting types of work we do at Contino is helping our clients make sense of the differences between AWS and Microsoft Azure. While the HashiCorp toolchain (Packer, Terraform, Vault, Vagrant, Consul and Nomad) has made provisioning infrastructure a breeze compared to writing hundreds of lines of Python, it almost makes achieving a multi-cloud infrastructure deployment seem too easy.

This post will outline some of the differences I’ve observed using these tools against both cloud platforms. And since I used the word “multi-cloud” in the first paragraph, I’ll close with a few general points to consider before embarking on a multi-cloud journey.

Azure and ARM Are Inseparable

Providers and builders are among the core features that make Terraform and Packer tick, respectively. They allow third parties to write their own “glue” code that tells Terraform how to create VMs or Packer how to create machine images. This way, Terraform and Packer simply become “thin clients” for your desired platform. HashiCorp’s recent move of provider code out of the Terraform binary in version 0.10 emphasizes this.

When you create VMs with Terraform or machine images with Packer on AWS, you’re really asking the AWS Go SDK to do those things. The same is mostly true of Azure, with one big exception: the Azure Resource Manager, or ARM.

ARM is more-or-less like AWS CloudFormation. You create a JSON template of the resources that you’d like to deploy into a single resource group along with the relationships that should exist between those resources and submit that into ARM as a deployment. It’s pretty nifty stuff.
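For those unfamiliar with ARM, a bare-bones deployment template looks something like the following. This is an illustrative sketch, not from the original post; it declares a single dynamically-allocated public IP in whatever resource group the deployment targets:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "resources": [
    {
      "type": "Microsoft.Network/publicIPAddresses",
      "apiVersion": "2017-06-01",
      "name": "example-ip",
      "location": "[resourceGroup().location]",
      "properties": {
        "publicIPAllocationMethod": "Dynamic"
      }
    }
  ]
}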

However, instead of using the Azure Go SDK directly to create these resources, Terraform and Packer both rely on ARM (reached through the Azure Go SDK) to do that job for them. I’m guessing that HashiCorp chose this approach to avoid rework (i.e. “why create a resource object in our provider or builder when ARM already does most of that work?”). While this doesn’t change much about how you actually use these tools against Azure, there are some notable differences in what happens at runtime.

Azure Deployments Are Slower

My experience has shown me that the Azure ARM Terraform provider and Packer builder take noticeably more time to “get going” than their AWS counterparts do, especially when using Standard_A class VMs. This can make testing code changes quite tedious.

Consider the template below. This uses a t2.micro instance to provision a Red Hat image with no customizations.

{
  "description": "Basic RHEL image.",
  "variables": {
    "access_key": null,
    "secret_key": null
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{ user `access_key` }}",
      "secret_key": "{{ user `secret_key` }}",
      "region": "us-east-1",
      "instance_type": "t2.micro",
      "source_ami": "ami-c998b6b2",
      "ami_name": "test_ami",
      "ssh_username": "ec2-user",
      "vpc_id": "vpc-8a2dbbf2",
      "subnet_id": "subnet-306b673c"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "#This is required to allow us to use `sudo` from our Packer provisioner.",
        "#This is enabled by default on all RHEL images for \"security.\"",
        "sudo sed -i.bak -e '/Defaults.*requiretty/s/^/#/' /etc/sudoers"
      ]
    },
    {
      "type": "shell",
      "inline": ["echo Hey there"]
    }
  ]
}

Assuming a fast internet connection (I did this test with a ~6 Mbit connection), it doesn’t take too much time for Packer to generate an AMI for us.

$ time packer build -var 'access_key=REDACTED' -var 'secret_key=REDACTED' aws.json
==> amazon-ebs: Creating temporary security group for this instance: packer_5a136414-1ba5-7c7d-890c-697a8563d4be
==> amazon-ebs: Authorizing access to port 22 from 0.0.0.0/0 in the temporary security group...
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Adding tags to source instance
amazon-ebs: Adding tag: "Name": "Packer Builder"
...
amazon-ebs: Hey there
==> amazon-ebs: Stopping the source instance...
amazon-ebs: Stopping instance, attempt 1
==> amazon-ebs: Waiting for the instance to stop...
==> amazon-ebs: Creating the AMI: test_ami
amazon-ebs: AMI: ami-20ff765a
...
Build 'amazon-ebs' finished.

==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
us-east-1: ami-20ff765a

real 1m50.900s
user 0m0.020s
sys 0m0.008s

Let’s repeat this exercise with Azure. Here’s that template again, but Azure-ified:

{
  "description": "Basic RHEL image.",
  "variables": {
    "client_id": null,
    "client_secret": null,
    "subscription_id": null,
    "tenant_id": null,
    "base_rhel_version": "7.3",
    "azure_location": null,
    "azure_resource_group_name": null
  },
  "builders": [
    {
      "type": "azure-arm",
      "communicator": "ssh",
      "ssh_pty": true,
      "managed_image_name": "rhel-{{ user `base_rhel_version` }}-rabbitmq-x86_64",
      "managed_image_resource_group_name": "{{ user `azure_resource_group_name` }}",
      "os_type": "Linux",
      "vm_size": "Basic_A0",
      "client_id": "{{ user `client_id` }}",
      "client_secret": "{{ user `client_secret` }}",
      "subscription_id": "{{ user `subscription_id` }}",
      "tenant_id": "{{ user `tenant_id` }}",
      "location": "{{ user `azure_location` }}",
      "image_publisher": "RedHat",
      "image_offer": "RHEL",
      "image_sku": "7.3",
      "image_version": "latest"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "#This is required to allow us to use `sudo` from our Packer provisioner.",
        "#This is enabled by default on all RHEL images for \"security.\"",
        "sudo sed -i.bak -e '/Defaults.*requiretty/s/^/#/' /etc/sudoers"
      ]
    },
    {
      "type": "shell",
      "inline": ["echo Hey there"]
    }
  ]
}

And here’s the same build against Azure. I decided to use a Basic_A0 instance size, as that’s the closest thing Azure has to a t2.micro that was available for my subscription. (The Standard_B series is what I originally intended to use, as, like the t2 line, those are burstable.)

Notice that it takes almost TEN times as long with the same Linux distribution and similar instance sizes!

$ time packer build -var 'client_id=REDACTED' -var 'client_secret=REDACTED' -var 'subscription_id=REDACTED' -var 'tenant_id=REDACTED' -var 'azure_resource_group_name=REDACTED' -var 'azure_location=East US' azure.json
azure-arm output will be in this color.

==> azure-arm: Running builder ...
azure-arm: Creating Azure Resource Manager (ARM) client ...
==> azure-arm: Creating resource group ...
==> azure-arm: -> ResourceGroupName : 'packer-Resource-Group-s6sj74tdvk'
==> azure-arm: -> Location : 'East US'
...
azure-arm: Hey there
==> azure-arm: Querying the machine's properties ...
==> azure-arm: -> ResourceGroupName : 'packer-Resource-Group-s6sj74tdvk'
==> azure-arm: -> ComputeName : 'pkrvms6sj74tdvk'
==> azure-arm: -> Managed OS Disk : '/subscriptions/8bbbc92b-6d16-4eb2-8f95-7a0769748c8d/resourceGroups/packer-Resource-Group-s6sj74tdvk/providers/Microsoft.Compute/disks/osdisk'
==> azure-arm: Powering off machine ...
==> azure-arm: -> ResourceGroupName : 'packer-Resource-Group-s6sj74tdvk'
==> azure-arm: -> ComputeName : 'pkrvms6sj74tdvk'
==> azure-arm: Capturing image ...
==> azure-arm: -> Compute ResourceGroupName : 'packer-Resource-Group-s6sj74tdvk'
==> azure-arm: -> Compute Name : 'pkrvms6sj74tdvk'
==> azure-arm: -> Compute Location : 'East US'
==> azure-arm: -> Image ResourceGroupName : 'REDACTED'
==> azure-arm: -> Image Name : 'IMAGE_NAME'
==> azure-arm: -> Image Location : 'eastus'
==> azure-arm: Deleting resource group ...
==> azure-arm: -> ResourceGroupName : 'packer-Resource-Group-s6sj74tdvk'
==> azure-arm: Deleting the temporary OS disk ...
==> azure-arm: -> OS Disk : skipping, managed disk was used...
Build 'azure-arm' finished.

==> Builds finished. The artifacts of successful builds are:
--> azure-arm: Azure.ResourceManagement.VMImage:

ManagedImageResourceGroupName: REDACTED
ManagedImageName: IMAGE_NAME
ManagedImageLocation: eastus

real 10m27.036s
user 0m0.056s
sys 0m0.020s

The worst part about this is that it takes this long even when it fails!

Notice the “Deleting resource group…” line near the end of that output. You’ll likely spend a lot of time looking at that line. For some reason, cleanup after an ARM deployment can take a while. I’m guessing that this is due to three things:

  1. Azure creating intermediate resources, such as virtual networks (VNets), subnets and compute, all of which can take time,
  2. ARM waiting for downstream SDKs to finish deleting resources and/or any associated metadata, and
  3. Packer issuing asynchronous operations to the Azure ARM service, which requires polling the operationResult endpoint every so often to see how things played out.

Pro-Tip: Use the az Python CLI before running things!

As recovering from Packer failures can be quite time-consuming, consider using the Azure command-line client to verify that the inputs to your Packer templates are correct before running a build. Here’s a quick example: if you want to confirm that your service principal’s client_id and client_secret are correct, you might add something like this to your pipeline:

#!/usr/bin/env bash
client_id=$1
client_secret=$2
tenant_id=$3

if ! az login --service-principal -u "$client_id" -p "$client_secret" --tenant "$tenant_id"
then
  echo "ERROR: Invalid credentials." >&2
  exit 1
fi
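If the login check passes, a similar guard can confirm that the resource group you’re about to hand to Packer actually exists. This is a hypothetical follow-up, not from the original post; it assumes the service principal has already logged in as above:

#!/usr/bin/env bash
resource_group=$1

# az group show exits non-zero if the group doesn't exist or isn't visible to this login.
if ! az group show --name "$resource_group" > /dev/null 2>&1
then
  echo "ERROR: Resource group '$resource_group' not found." >&2
  exit 1
fi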

This will save you at least three minutes per failed run…as well as spare you from something else that’s a little more frustrating.

The AWS provider and builder are more actively consumed

Both the AWS and Azure Terraform providers and Packer builders are mostly maintained internally by HashiCorp. However, what you’ll find after using the Azure ARM provider and builder for a short while is that their usage within the community pales in comparison to that of their AWS counterparts.

I ran into an issue with the azure-arm builder whereby it failed to find a resource group that I had created for an image I was trying to build. Locating that resource group with az group list and the same client_id and secret worked fine, and I was able to find the resource group in the Azure portal. As well, I had given the service principal “Owner” permission, so there were no access limitations preventing it from finding this resource group.

After spending some time digging into the builder source code and firing up Charles Web Proxy, it turned out that my error had nothing to do with resource groups at all: the credentials I was passing into Packer from my Makefile were incorrect.

What was more frustrating is that I couldn’t find anything on the web about this problem. One would think that someone else using this builder would have run into it before I did, especially given that the builder has been available for at least six months as of this writing.

It also seems that there are, by far, more commits and contributors to the Amazon builders than to the Azure ones, which seem to be largely maintained by Microsoft folks. Despite this disparity, the Azure contributors are quite fast and very responsive (or at least they were to me!).

Getting Started Is Slightly More Involved on Azure

In the early days of cloud computing, Amazon’s EC2 service focused entirely on VMs. Their MVP at the time was: we’ll make creating, maintaining and destroying VMs fast, easy and painless. Aside from subnets and some routing details, much of the networking overhead was abstracted away. Most of the self-service offerings that Amazon currently has weren’t around yet. Deploying an app onto AWS still required knowing how to set up EC2 instances and deploy onto them, which allowed companies like DigitalOcean and Heroku to rise to prominence. Over time, this premise seems to have held up, as most of AWS’s other offerings heavily revolve around EC2 in various forms.

Microsoft took the opposite approach with Azure. Azure’s mission was to deploy apps onto the cloud as quickly as possible without users having to worry about the details. This is still largely the case, especially if one is deploying an application from Visual Studio. Infrastructure-as-a-Service was more or less an afterthought, which led to some industry confusion over where Azure “fit” in the cloud computing spectrum. Consequently, while Microsoft has added and expanded its infrastructure offerings over time, the abstractions that have long been taken for granted in AWS haven’t been “ported over” as quickly.

This is most evident when one is getting started with AWS and the HashiCorp suite for the first time versus starting up on Azure. These are the steps one needs to take to get a working Packer image into AWS (a consolidated sketch of the CLI steps follows the list):

  1. Sign up for AWS.
  2. Log into AWS.
  3. Go to IAM and create a new user.
  4. Download the access and secret keys that Amazon gives you.
  5. Assign that user Admin privileges over all AWS services.
  6. Download the AWS CLI (or install Docker and use the anigeo/awscli image)
  7. Configure your client: aws configure
  8. Create a VPC: aws ec2 create-vpc --cidr-block 10.0.0.0/16
  9. Create an Internet Gateway: aws ec2 create-internet-gateway
  10. Attach the gateway to your VPC so that your machines can reach the Internet: aws ec2 attach-internet-gateway --internet-gateway-id $id_from_step_9 --vpc-id $vpc_id_from_step_8
  11. Create a subnet: aws ec2 create-subnet --vpc-id $vpc_id_from_step_8 --cidr-block 10.0.1.0/24
  12. Update that subnet so that it can issue publicly accessible IP addresses to VMs created within it: aws ec2 modify-subnet-attribute --subnet-id $subnet_id_from_step_11 --map-public-ip-on-launch
  13. Download Packer (or use the hashicorp/packer Docker image)
  14. Create a Packer template for Amazon EBS.
  15. Deploy! packer build -var 'access_key=$access_key' -var 'secret_key=$secret_key' your_template.json
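If it helps, the CLI portion of those steps condenses into a short script. This is a sketch using the same CIDR blocks as the list above; the --query flags simply capture the IDs that later steps need:

#!/usr/bin/env bash
set -e

# Step 8: create the VPC and capture its ID
vpc_id=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --query 'Vpc.VpcId' --output text)

# Steps 9-10: create an Internet gateway and attach it to the VPC
igw_id=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$igw_id" --vpc-id "$vpc_id"

# Steps 11-12: create a subnet and let it hand out public IPs
subnet_id=$(aws ec2 create-subnet --vpc-id "$vpc_id" --cidr-block 10.0.1.0/24 \
  --query 'Subnet.SubnetId' --output text)
aws ec2 modify-subnet-attribute --subnet-id "$subnet_id" --map-public-ip-on-launch

# Point the vpc_id and subnet_id fields of your Packer template at $vpc_id and $subnet_id.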

If you want to understand why an AWS VPC requires an internet gateway or how IAM works, finding whitepapers on these topics is a fairly straightforward Google search.

Getting started on Azure, on the other hand, is slightly more laborious, as documented here. Finding in-depth answers about Azure primitives has also been more difficult in my experience. Most of what’s available are Microsoft Docs entries on how to do specific things and non-technical whitepapers; finding a developer guide like those available for AWS was difficult.
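For comparison, the core Azure bootstrap can be condensed into a handful of az commands. This is a rough sketch; the resource group name, service principal name and role assignment are illustrative, not prescriptive:

# Log in interactively and pick the subscription to work in
az login
az account set --subscription "YOUR_SUBSCRIPTION_ID"

# Create a resource group to hold Packer's managed images
az group create --name packer-images --location eastus

# Create a service principal for Packer/Terraform to authenticate with;
# the output contains the client_id (appId), client_secret (password) and tenant_id (tenant)
az ad sp create-for-rbac --name packer-sp --role Contributor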

In Conclusion

Using multiple cloud providers is a smart way of leveraging different pricing schemes between providers. It is also an interesting way of adding more disaster-recovery capability than a single cloud provider can offer alone (which is kind of a farce, as AWS spans dozens of datacenters across the world, many of them in the US, though region-wide outages have happened before, albeit rarely).

HashiCorp tools like Terraform and Packer make managing this sort of infrastructure much easier. However, the two providers aren’t created equal, and the AWS support that exists is, at this time of writing, significantly more extensive. While this certainly doesn’t make using Azure with Terraform or Packer impossible, you might find yourself doing more homework than initially expected!

About Me

I’m a Technical Principal for Contino. We specialize in helping large and heavily-regulated enterprises make cloud adoption and DevOps culture a reality. I’m passionate about bringing DevOps to the enterprise. I’m also passionate about bikes, brews and travel!

Some Terraform gotchas.

So you’ve got a bacon delivery service repository with Terraform configuration files at the ready, and it looks something like this:


$> tree
.
├── main.tf
├── providers.tf
└── variables.tf

0 directories, 3 files

terraform is applying your configurations and saving them in tfstate like you’d expect. Awesome.

Eventually, your infrastructure scales just large enough to necessitate a directory structure. You want to express your Terraform configurations in a way that (a) makes it easy to see what’s in which environment, (b) makes it easy to modify those environments without affecting other environments and (c) prevents your HCL from becoming the kind of total mess you’d end up with in a sprawling Puppet or Chef codebase.

Fortunately, Terraform makes this pretty easy to do…but not without some gotchas.

One suggestion: Use modules!

Modules give you the ability to reuse Terraform resources throughout your codebase. This way, instead of having a bunch of aws_instance resources lying around in your main configuration, you can express them neatly in ways that make more sense:


module "sandbox-web-servers" {
  source = "../modules/aws/sandbox"
  provider = "aws.us-west-1"
  environment = "sandbox"
  tier = "web"
  count = 10
}

When you do this, you need to populate Terraform’s module cache by running terraform get against the directory containing the configuration that calls the module.
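In practice that looks something like this (paths are illustrative):

$> cd infrastructure/sandbox   # the configuration that calls the module
$> terraform get
$> terraform plan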

Gotcha #1: Self variable interpolation isn’t a thing yet.

If you noticed, the example above references “sandbox” quite a lot. This is because, unfortunately, Terraform modules (and resources, I believe) do not yet support self-referencing variables. What I mean is this:


module "sandbox-web-server" {
  environment = "sandbox"
  source = "../modules/${var.self.environment}"
  ...
}

Given that everything in Terraform is a directed graph, the complexity in doing this makes sense. How do you resolve a reference to a variable that hasn’t been defined yet?

This was tracked here, but it looks like a blue-sky feature right now.
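In the meantime, the usual workaround (a sketch, not from the post) is to hoist the repeated literal into an input variable and reference that variable wherever it’s needed. Note that this doesn’t help with module source strings, since those must be literal and can’t interpolate variables at all:

variable "environment" {
  default = "sandbox"
}

resource "aws_instance" "web" {
  ami           = "ami-c998b6b2"
  instance_type = "t2.micro"

  tags {
    Name        = "${var.environment}-web"
    Environment = "${var.environment}"
  }
}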

Gotcha #2: Module source paths are relative to the module.

Let’s say you had a module definition that looked like this:


module "sandbox-web-servers" {
  source = "modules/aws/sandbox"
}

and a directory structure that looked like this:


$> tree
.
├── infrastructure
│   └── sandbox
│       └── web_servers.tf
└── modules
    └── aws
        └── sandbox
            └── main.tf

5 directories, 2 files

Upon running terraform apply, you’d get an awesome error saying that modules/aws/sandbox couldn’t be located, even if you ran it from the root. You’d wonder why, given that Terraform is supposed to resolve everything relative to the location from which it was executed.

It turns out that modules don’t work that way. When modules are loaded with terraform get, their dependencies are sourced from the location of the module. I haven’t looked too deeply into this, but this is likely due to the way in which Terraform populates its graphs.

To fix this, you’ll need to either (a) create symlinks in all of your modules pointing to your module source, or (b) fix your sources to use paths relative to the location of the module, like this:


module "sandbox-web-servers" {
  source = "../../modules/aws/sandbox"
  ...
}

Gotcha #3: Providers must co-exist with your infrastructure!

This one took me a few hours to reason about. Let’s go back to the directory structure referenced above (which I’ve included again below for your convenience):


$> tree
.
├── infrastructure
│   └── sandbox
│       └── web_servers.tf
└── modules
    └── aws
        └── sandbox
            └── main.tf

5 directories, 2 files

Since you deploy to multiple different targets (nitpick: nearly every example I’ve seen of Terraform assumes you’re using AWS!), you want to create a providers folder to express this. Additionally, since your infrastructure might be defined differently per environment, and you want whatever actually calls terraform to assume as little about your infrastructure as possible, you want to break it down by environment as well. When I tried this, it looked like this:


.
├── infrastructure
│   └── sandbox
│       └── web_servers.tf
├── modules
│   └── aws
│       └── sandbox
│           └── main.tf
└── providers
    ├── openstack
    ├── colos
    ├── gce
    └── aws
        ├── dev
        │   ├── main.tf
        │   └── variables.tf
        ├── pre-prod
        │   ├── main.tf
        │   └── variables.tf
        ├── prod
        │   ├── main.tf
        │   └── variables.tf
        └── sandbox
            ├── main.tf
            └── variables.tf

14 directories, 10 files

You now want to reference this in your modules:


# infrastructure/sandbox/aws_web_servers.tf
module "sandbox-web-servers" {
  source = "../../modules/aws/sandbox"
  provider = "aws.sandbox.us-west-1" # using a provider alias
  ...
}

and are in for a pleasant surprise when you discover that Terraform fails because it can’t locate the “aws.sandbox.us-west-1” provider.

I initially assumed that when Terraform looked for the nearest provider, it would search the entire directory tree for a suitable one; in other words, that it would follow a search path like this:


- ./infrastructure/sandbox
- ./infrastructure
- .
- ./modules
- ./modules/aws
- ./modules/aws/sandbox
- .
- ./providers
- ./providers/aws
- ./providers/aws/sandbox <-- here

But that’s not what happens. Instead, it looks for its providers in the same location as the module being referenced. This meant that I had to put providers.tf in the same place as aws_web_servers.tf.

I couldn’t even get away with putting it in the directory for its requisite environment above it (i.e. ./infrastructure/sandbox) because Terraform doesn’t currently support object inheritance.

Instead of re-defining my providers in every directory, I created a providers.tf in every infrastructure environment folder I had (which is just sandbox at the moment) and symlinked it into every folder underneath it. In other words:


carlosonunez@DESKTOP-DSKP2VT:/tmp/terraform$ ln -s ../providers.tf infrastructure/sandbox/aws/providers.tf^C
carlosonunez@DESKTOP-DSKP2VT:/tmp/terraform$ ls -lart infrastructure/sandbox/aws/
total 0
-rw-rw-rw- 1 carlosonunez carlosonunez  0 Dec  6 23:52 web_servers.tf
drwxrwxrwx 2 carlosonunez carlosonunez  0 Dec  7 00:14 ..
drwxrwxrwx 2 carlosonunez carlosonunez  0 Dec  7 00:14 .
lrwxrwxrwx 1 carlosonunez carlosonunez 15 Dec  7 00:14 providers.tf -> ../providers.tf
carlosonunez@DESKTOP-DSKP2VT:/tmp/terraform$ tree
.
├── infrastructure
│   └── sandbox
│       ├── aws
│       │   ├── providers.tf -> ../providers.tf
│       │   └── web_servers.tf
│       └── providers.tf
├── modules
│   └── aws
│       └── sandbox
│           └── main.tf
└── providers
    ├── aws
    ├── colos
    ├── gce
    └── openstack
        ├── dev
        │   ├── main.tf
        │   └── variables.tf
        ├── pre-prod
        │   ├── main.tf
        │   └── variables.tf
        ├── prod
        │   ├── main.tf
        │   └── variables.tf
        └── sandbox
            ├── main.tf
            └── variables.tf

15 directories, 12 files

It’s not great, but it’s a lot better than re-defining my providers everywhere.
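For reference, the shared providers.tf being symlinked around is just an aliased provider definition along these lines (a sketch; the alias and region are assumptions based on the earlier module example):

# infrastructure/sandbox/providers.tf (illustrative)
provider "aws" {
  alias  = "us-west-1"
  region = "us-west-1"
}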

Gotcha #4: Unset your provider env vars!

So the thing in Gotcha #3 never happened to you. Everything seemed to deploy just fine. That is, until you realized you were deploying to the production account instead of the dev one, which Finance abruptly informed you of when they wondered why you had spun up $15,000 worth of compute. Oops.

This is because of a thoughtful-yet-conveniently-unfortunate side effect of providers whereby (a) most of them support using environment variables to define their behavior, and (b) Terraform has no way of turning this off (an issue I recently raised).

For now, unset the environment variables used by boto, the OpenStack CLI, gcloud or whatever provider tooling you might be using before running terraform commands. That, or run Terraform in a clean shell using /bin/sh.
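A minimal example of that hygiene, assuming your credentials come in through the usual AWS_* environment variables:

# Clear any ambient AWS credentials so Terraform can't pick them up implicitly
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_PROFILE AWS_DEFAULT_REGION

# Or run Terraform in a scrubbed environment entirely
env -i PATH="$PATH" HOME="$HOME" terraform plan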

That’s it!

I’m really enjoying Terraform. I hope you are too! Do you have any other gotchas? Want to leave some feedback? Throw in a comment below!

About Me

I’m a DevOps consultant for ThoughtWorks, a software company striving for engineering excellence and a better world for our next generation of thinkers and leaders. I love everything DevOps, Windows, and Powershell, along with a bit of burgers, beer and plenty of travel. I’m on twitter @easiestnameever and LinkedIn at @carlosindfw.

Configuration management and provisioning are different.

Configuration management tools are used to repeatably and consistently maintain system and application uniformity across clusters of systems at scale. Many of these tools achieve this in three ways: an intuitive command-line interface, a lightweight and easily-readable domain-specific language, and a comprehensive REST-based API that lowers the barrier to entry for integrations with other tools. While open-source configuration management tools such as Chef, Ansible, Puppet and Salt have been increasing in popularity over the years, there are also enterprise-grade and regulator-friendly offerings available from vendors such as Dell, Microsoft, HP and BMC.

Configuration management tools are great at keeping a running inventory of existing systems and applications up-to-date. These tools are so good at this, in fact, that many systems administrators and engineers grow tempted into using them to deploy swaths of new systems and configure them shortly thereafter.

I’ve seen this play out at many of the companies I’ve worked at. It usually manifests as an Ansible deployment playbook or a Chef cookbook that eventually becomes “the” cookbook. The result has always been the same, and if I had to sum this pattern up in a picture, it would look something like this:

[Image: “I swear this worked on my machine”]

Let me explain.

Complexity in simplicity.

One of the darling features of modern configuration management tools is their ability to express complex configuration states in an easily-readable way. This works well for creating an Ansible playbook to configure, say, an nginx instance, but it begins to fall apart when you try to provision the instances that those nginx instances will run on, and it gets really ugly when you attempt to express the relationships needed to deploy application servers alongside those web servers while staging an environment.

Creating common templates for security groups or firewall rules, instances, storage and the like beyond what these tools provide out of the box usually involves writing a lot of boilerplate beforehand. (For example, Ansible has a module for provisioning EBS volumes, but what if you want an EBS resource with specific defaults for certain web or application servers in certain regions? Prepare for a lot of if blocks.) Problems also crop up when passing metadata like instance IDs between resources. Because most of the actions done by configuration management tools are idempotent executions, the simple languages they use to describe configurations don’t natively support variables. Storing metadata and using loops are usually done by breaking out of the DSL and into its underlying language. This breaks readability and makes troubleshooting more complicated.

Provisioning is hard.

Provisioning infrastructure is usually more complicated than configuring the software that runs on top of it, in two ways.

The first complication arises from the intricate relationships between pieces of infrastructure. Expressing the environment-specific nuances of a postgres installation is usually done by way of Chef cookbook attributes and flags; an example of this can be found here. Expressing the three different regions that the databases backing your web app need to be deployed to in a particular environment, along with the operating system images those instances need to run, will likely require separate layers of attributes: one for image mappings, another for instance sizing and yet another for region mapping. Expressing all of this in cookbooks gets challenging.

The second complication comes from gathering state. Unless your infrastructure is completely immutable (which is ideal), you need some picture of your infrastructure’s current state before deploying anything. Otherwise, you’ll be in for a surprise or two after you deploy the servers for that environment you thought didn’t exist. Tools like Terraform and AWS CloudFormation keep track of this state to prevent these situations from happening; Chef and Puppet, for example, do not. You can use their built-in resources to capture this data and make decisions based on the results, but that puts you back into bending their DSLs to do things they weren’t intended to do.
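As a quick illustration of that state-tracking on the Terraform side (commands only; a sketch):

$> terraform plan        # diffs the desired configuration against the state recorded from previous runs
$> terraform state list  # lists every resource Terraform is currently tracking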

Rollbacks are harder.

chef-provisioning and Ansible provisioning plugins do not support rolling back changes if something fails. This is problematic for three reasons:

  1. Inconsistent environments lead to increased overhead and (usually manual) sysadmin toil. Toil leads to technical debt, and debt leads to slower releases and grumpy teams. Nobody wants to have a grumpy team.
  2. In the cloud, nearly everything costs money. Resources that you deployed during tests that weren’t destroyed afterwards can add up to hefty surprises at the end of your billing cycle.
  3. Cookbook recipes and playbooks will need to account for these stale resources when executing their actions. This can lead to a more complicated codebase and a debt to pay back later on.

Provisioning tools such as Terraform, CloudFormation and Vagrant support rollback out of the box.

Use the right tools.

If you’re staring at a behemoth of a playbook for provisioning your stack, or looking to make the move away from chef-provisioning, take a look at XebiaLabs’ awesome list of tools that make provisioning less complicated. CloudFormation is awesome at provisioning AWS infrastructure (unless you dislike JSON, in which case it is far from it), Vagrant is great at doing the same for physical infrastructure, and Packer does a great job of building images as code.

Good luck!

About Me

I’m a DevOps consultant for ThoughtWorks, a software company striving for engineering excellence and a better world for our next generation of thinkers and leaders. I love everything DevOps, Windows, and Powershell, along with a bit of burgers, beer and plenty of travel. I’m on twitter @easiestnameever and LinkedIn at @carlosindfw.

Config management and cloud provisioning: There be dragons

So I’ve tried using configuration management to deploy infrastructure to two different clouds and learned this: whenever you think “it would be great if we could deploy to EC2 with Chef,” use CloudFormation or Terraform instead.

Why? Here are a few reasons that come to mind:

  • CloudFormation/Terraform is easier. Terraform’s HCL is nicer than CloudFormation JSON, but both are *way* easier than trying to shoehorn Jinja2 (Ansible) or chef-provisioning Ruby into doing what you want. Like, hundreds of lines easier.

    I once tried to use Ansible to automate provisioning of Active Directory forests onto EC2. I had to create my own roles for handling AMI selection, security group CRUD operations, EBS provisioning, etc. The 2000+ lines of YAML I wrote to hold all of that together ultimately became about 200 lines of ugly, yet functional, CloudFormation JSON.

    Yeah.
  • Built-in rollback is awesome. CloudFormation and Terraform both support some kind of rollback. Chef provisioning does as well with the :rollback action (I don’t think Ansible does; at least it didn’t when I used the EC2 plugin), but it’s not guaranteed.
  • I really liked the CloudFormation API. I haven’t tried Terraform’s CLI yet, but I would imagine that it’s just as awesome. aws cloudformation provides a lot of useful information that’s easy to act upon in a Chef recipe or Ansible play (see the sketch after this list), especially given that both platforms have support for CloudFormation “built-in.” What’s better, the AWS SDKs have full support for CloudFormation as well, which means…
  • You’re not locked into anything. This was the biggest takeaway from my experiences using chef-provisioning or ansible-ec2. If you ever decide to move away from Chef or Ansible, you’ll need to port over your deployment code with it. Depending on the platform, this could take anywhere from hours to weeks.

    Not a problem with CloudFormation or Terraform. Perhaps you’ll need to change how your Chef shell resource behaves, but that’s a lot easier to deal with, in my opinion.
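For instance, pulling stack outputs back into a Chef recipe or an Ansible play is a one-liner against the CloudFormation API. This is a sketch; the stack name and the idea of reading Outputs are illustrative, not from the original post:

aws cloudformation describe-stacks \
  --stack-name my-web-stack \
  --query 'Stacks[0].Outputs' \
  --output json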

Using your config management solution to do it all is really attractive. It’s usually not a bad idea either. However, when it comes to cloud, tread carefully!

About Me

Carlos Nunez is a DevOps consultant for ThoughtWorks, a software company striving for engineering excellence and a better world for our next generation of thinkers and leaders. He loves everything DevOps, Windows, and Powershell, along with a bit of burgers, beer and plenty of travel.

Follow him on Twitter! @easiestnameever.