Martin Ahrer

Thinking outside the box

TDD for infrastructure with footloose

2020-09-01 6 min read martin

In the previous post TDD for infrastructure with Vagrant I explored Vagrant, VirtualBox and Chef InSpec for implementing a simple workflow for test-driven development of infrastructure code. I showed that using virtual machines as test targets is simply not fast enough and would probably be hard to run in a build environment.

Today I will explore the option of targeting containers for provisioning and testing.

One approach could be to start off from one of the base images provided on e.g. Docker Hub. There are base images for many popular Linux distributions, but these have been stripped down to include only the essentials required for running a container. What we really need is a container image that resembles a full-blown Linux installation, so we get a typical host environment including e.g. a service manager. Sure, we could create such images ourselves and add back everything we eventually require.
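To make this concrete, here is a quick, purely illustrative check (not part of the setup itself) showing that the stock Debian image from Docker Hub ships without systemd:

# the stock Debian base image has no service manager installed
docker run --rm debian:10 sh -c 'command -v systemctl || echo "systemctl: not found"'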

It turns out we don’t have to do this ourselves: the folks at Weaveworks have already done it with their footloose project.

footloose creates containers that look like virtual machines. Those containers run systemd as PID 1 and a ssh daemon that can be used to login into the container. Such "machines" behave very much like a VM, it’s even possible to run dockerd in them :)

— https://github.com/weaveworks/footloose

footloose is a bit like LXD (from Canonical), but it also works with Docker container images and on macOS. It offers two backends, docker and ignite; the latter is a frontend for Firecracker KVM-based machines.

With this said, I will take another iteration of provisioning hosts and testing them. By the end of this iteration everything will be running inside containers, and I will also be able to show how simple it is to set up a build pipeline with drone.io for fully automated testing of the provisioning code.

Create footloose hosts

footloose can be installed on Linux or macOS; just follow https://github.com/weaveworks/footloose#install. Very much like docker-compose, footloose has a CLI and a YAML-based configuration file for managing container state.
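The install boils down to grabbing a single static binary, or using the Weaveworks Homebrew tap on macOS. A sketch of what this looked like at the time of writing; the version and release asset name are assumptions, so check the linked instructions for the current ones:

# macOS
brew install weaveworks/tap/footloose

# Linux (version 0.6.3 assumed here, matching the image tag used below)
curl -Lo footloose https://github.com/weaveworks/footloose/releases/download/0.6.3/footloose-0.6.3-linux-x86_64
chmod +x footloose && sudo mv footloose /usr/local/bin/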

Let’s use footloose to generate an initial configuration with footloose config create.

footloose config create \
    --replicas 3 \ (1)
    --image quay.io/footloose/debian10 \ (2)
    --privileged (3)
(1) Create a set of 3 hosts (replicas).
(2) This is one of the base container images provided for footloose.
(3) We require privileged mode as we will run Docker inside the containers.

The above command created the following footloose configuration (footloose/footloose-docker.yaml):

cluster:
  name: docker
  privateKey: docker-footloose-key
machines:
- count: 3
  spec:
    image: quay.io/footloose/debian10:0.6.3
    name: node-%d
    portMappings:
    - containerPort: 22
    # need privileged so we can run docker in docker
    privileged: true

Then, using this configuration, we create and start the container machines with footloose create.

time footloose create -c footloose/footloose-docker.yaml
INFO[0000] Image: quay.io/footloose/debian10:0.6.3 present locally
INFO[0000] Creating machine: docker-node-0 ...
INFO[0001] Creating machine: docker-node-1 ...
INFO[0003] Creating machine: docker-node-2 ...
footloose create -c footloose/footloose-docker.yaml  1.77s user 1.12s system 58% cpu 4.962 total

As you can see, creating all hosts took roughly 5 seconds (the container image was already present on the Docker host and no pull was required). Remember how long creating and starting the Vagrant-based virtual machines took (see TDD for infrastructure with Vagrant)? Fast, isn’t it?
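Before provisioning, we can convince ourselves that these containers really do look like small virtual machines. Plain docker commands are enough for that (just a sanity check, not part of the workflow):

# the footloose machines are ordinary containers as far as Docker is concerned
docker ps --filter name=docker-node --format '{{.Names}}\t{{.Status}}'

# inside each machine systemd is running as PID 1
docker exec docker-node-0 cat /proc/1/comm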

Provision the hosts

Now that the hosts are alive, we can proceed with provisioning. While with Vagrant we relied on SSH for connecting to the hosts, with containers things get much easier: Ansible ships a docker connection plugin which replaces all SSH configuration. So we adjust the inventory a bit to fit the footloose containers. The configuration that is common to all hosts is shown below.

inventory/footloose/group_vars/all.yml
ansible_connection: docker
ansible_user: root
ansible_become: true

With virtual machines managed by Vagrant we had to deal with SSH socket addresses; with footloose containers we can just use the container names as Ansible host names. The configuration for docker-node-0 is shown below.

inventory/footloose/host_vars/docker-node-0.yml
ansible_host: docker-node-0
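The inventory file referenced by the commands below, inventory/footloose/hosts.yml, is not shown in this post. A minimal sketch of what it could look like, assuming all three nodes simply live in the default all group (the actual file may differ):

inventory/footloose/hosts.yml (sketch)
all:
  hosts:
    docker-node-0:
    docker-node-1:
    docker-node-2:

Whether the group_vars and host_vars files are picked up as expected can be checked without touching the containers at all, e.g. with ansible-inventory --list -i inventory/footloose/hosts.yml.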

With the updated Ansible inventory we can perform the provisioning step by preparing the hosts and running the playbooks. Again we install the prerequisites as we did with the Vagrant setup.

If we wanted to avoid repeating this every time we recreate the hosts, we would just have to build a container image based on e.g. quay.io/footloose/debian10 and use it in the footloose configuration we created earlier. This would further reduce the provisioning time; one way to do this is sketched after the command below.

cd ansible
ansible --become -m raw -f 1 \
    -a "ln -sf /bin/bash /bin/sh && apt-get update && apt-get install -y python3 gnupg2" \
    --inventory=inventory/footloose/hosts.yml all
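As mentioned above, these prerequisites could be baked into a custom base image. One low-effort way to do that, sketched here with a purely hypothetical image name, is to snapshot an already prepared node and point the footloose configuration at the resulting image:

# snapshot a node that already has the prerequisites installed
docker commit docker-node-0 example/footloose-debian10-ansible:latest

# then, in footloose/footloose-docker.yaml, replace the image:
#   image: example/footloose-debian10-ansible:latest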

Finally we run the playbooks for the footloose inventory.

ansible-playbook -i inventory/footloose/hosts.yml ntp.yml docker.yml
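While iterating on a playbook it can be convenient to target just a single machine first; Ansible’s --limit option does exactly that (not part of the original workflow, just a small convenience):

ansible-playbook -i inventory/footloose/hosts.yml --limit docker-node-0 ntp.yml docker.yml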

Test the hosts

Chef InSpec also has a transport for connecting to Docker containers. This allows targeting a host under test by its container name. The following runs the tests for all hosts.

cd inspec
for n in $(seq 0 2); do
    inspec exec ntp.rb docker.rb --target docker://docker-node-${n}
done
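One caveat when wrapping inspec in a shell loop like this: the loop only propagates the exit status of its last iteration, so unless the shell runs with set -e, a failing test on node-0 or node-1 would not be reflected in the overall exit code. A fail-fast variant of the same loop:

for n in $(seq 0 2); do
    inspec exec ntp.rb docker.rb --target docker://docker-node-${n} || exit 1
done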

Build pipeline

With containers in place as test targets, it gets really simple to run a fully automated build in a build pipeline. I have chosen drone.io as it allows running a pipeline without having to set up a build server. We only have to install the drone CLI as documented at https://docs.drone.io/cli/install/.

.drone.yml pipeline descriptor
kind: pipeline
type: docker
name: tdd-infrastructure

.step_template: &step_definition
    volumes:
        -   name: docker
            path: /var/run/docker.sock

steps:
    -   name: create-nodes
        image: martinahrer/footloose:alpine
        commands:
            - footloose create -c footloose/footloose-docker.yaml
        <<: *step_definition

    -   name: provision
        image: martinahrer/ansible:alpine
        commands:
            - cd ansible
            - ansible-galaxy install --role-file requirements.yml
            - ansible --module-name raw -a "ln -sf /bin/bash /bin/sh && apt-get update && apt-get install -y python3 gnupg2" --inventory=inventory/footloose/hosts.yml --become all
            - ansible-playbook -i inventory/footloose/hosts.yml ntp.yml docker.yml
        <<: *step_definition

    -   name: test
        image: martinahrer/inspec:alpine
        commands:
            - cd inspec
            - for n in $(seq 0 2); do inspec exec docker.rb ntp.rb --target docker://docker-node-${n} --chef-license=accept-silent; done
        <<: *step_definition

    -   name: delete-nodes
        image: martinahrer/footloose:alpine
        commands:
            - footloose delete -c footloose/footloose-docker.yaml
        when:
            status:
                - failure
                - success
        <<: *step_definition

volumes:
    -   name: docker
        host:
            path: /var/run/docker.sock

For running the pipeline locally we just use the drone CLI. All we need is to run the complete tool chain (footloose, Ansible, InSpec) as containers. For this I have prepared container images for all of the required tools.

drone exec --trusted drone.io/.drone.yml

So basically you can now run this on any build-as-a-service infrastructure that allows creating containers.

Wrapup

I have shown how simple it is to create containers that look like virtual machines. With only some minor changes to the Ansible inventory, the Ansible playbooks and Chef InSpec tests ran without requiring changes. Compared to the earlier attempt based on Vagrant, the execution time for creating the target hosts has been cut tremendously: while firing up virtual machines took minutes, with footloose this became a matter of seconds.

I’m going to close this series with a final post trying to achieve the same once more, this time going back to virtual machines: KVM-based Firecracker virtual machines.