
TDD for infrastructure with ignite + firecracker

2020-10-01 · 4 min read · Martin Ahrer

This is the last post in a series about TDD for infrastructure. If you want to follow along, I recommend first reading TDD for infrastructure with Vagrant and then TDD for infrastructure with footloose.

In the previous post TDD for infrastructure with footloose I explored Weaveworks' footloose to start containers that look like virtual machines. However, containers cannot quite match the isolation that a real virtual machine provides. So, in cases where we must use a real virtual machine, we can go back to the approach with VirtualBox (or any other virtualization product), or investigate other options.

Firecracker was developed at Amazon Web Services to improve the customer experience of services like AWS Lambda and AWS Fargate.

Firecracker is a virtual machine monitor (VMM) that uses the Linux Kernel-based Virtual Machine (KVM) to create and manage microVMs. Firecracker has a minimalist design. It excludes unnecessary devices and guest functionality to reduce the memory footprint and attack surface area of each microVM. This improves security, decreases the startup time, and increases hardware utilization. Firecracker currently supports Intel CPUs, with AMD and Arm support in developer preview.

— https://firecracker-microvm.github.io/

To ease operating Firecracker VMs, Weaveworks has added an ignite backend to footloose.

Ignite makes Firecracker easy to use by adopting its developer experience from containers. With Ignite, you pick an OCI-compliant image (Docker image) that you want to run as a VM, and then just execute ignite run instead of docker run. There’s no need to use VM-specific tools to build .vdi, .vmdk, or .qcow2 images, just do a docker build from any base image you want (e.g. ubuntu:18.04 from Docker Hub), and add your preferred contents.

— https://github.com/weaveworks/ignite
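
Before wiring ignite into footloose, it is worth trying this workflow directly. The following sketch is based on the commands from the ignite README (the VM name demo-vm is just an example); it boots a single microVM and removes it again:

Boot a single microVM with ignite
sudo ignite run weaveworks/ignite-ubuntu \
    --cpus 1 --memory 256MB --ssh --name demo-vm # --ssh generates and injects an SSH key
sudo ignite ps            # list running VMs, just like docker ps
sudo ignite rm -f demo-vm # stop and remove the VM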

Firecracker requires native Linux, so this time we need a system that meets the documented requirements for running the next examples.
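
Since Firecracker builds on KVM (see the quote above), a quick sanity check is whether the KVM modules are loaded and /dev/kvm is accessible. This is a minimal sketch, assuming an Intel or AMD Linux host:

Check KVM availability
lsmod | grep kvm   # kvm_intel or kvm_amd should be listed
ls -l /dev/kvm     # the device node the VMM talks to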

Create footloose hosts

The footloose configuration selects the ignite backend, assigns an OCI image for the kernel, and also sets some virtualized hardware parameters for the Firecracker VMs.

footloose configuration
cluster:
  name: ignite
  privateKey: ignite-footloose-key
machines:
- count: 3
  spec:
    image: weaveworks/ignite-ubuntu:latest
    name: node-%d
    portMappings:
    - containerPort: 22
      hostPort: 2222 # bound host ports increment per machine (2222, 2223, 2224)
    backend: ignite
    ignite: # Optional configuration parameters
      cpus: 1
      memory: 256M
      diskSize: 2GB
      kernel: "weaveworks/ignite-kernel:4.19.47"

Then, using this configuration, we create and start the Firecracker machines.

Create footloose ignite machines
time sudo footloose create
INFO[0000] Creating machine: ignite-node-0 ...
INFO[0002] Creating machine: ignite-node-1 ...
INFO[0005] Creating machine: ignite-node-2 ...
real 7.57
user 0.61
sys 0.51
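
Before provisioning, we can verify that the microVMs are actually up. ignite mirrors the Docker CLI, so ignite ps lists the running VMs; footloose show should report the machines as well (a quick sketch, both commands require root):

List the running microVMs
sudo ignite ps
sudo footloose show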

As you can see, creating all hosts is almost as fast as starting footloose Docker containers with the container image already present on the Docker host (see TDD for infrastructure with footloose).

Provision the hosts

With ignite and Firecracker we are back to SSH for connecting Ansible to each host. So again we adjust the inventory a bit to fit the footloose ignite/Firecracker machines. The configuration common to all hosts is shown below. The ignite backend creates an SSH private/public key pair which we set up for Ansible.

inventory/ignite/group_vars/all.yml
ansible_connection: ssh
ansible_user: root
ansible_become: true
ansible_ssh_private_key_file: '../ignite-footloose-key' # key pair created by the ignite backend

The ignite backend also binds the SSH port of each Firecracker machine to a local port. So for each machine we configure Ansible to use its SSH socket address.

inventory/ignite/host_vars/ignite-node-0.yml
ansible_host: 127.0.0.1
ansible_port: '2222'
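
The remaining hosts follow the same pattern, with the bound host port incremented per machine (a sketch for the second node, assuming footloose's sequential port allocation):

inventory/ignite/host_vars/ignite-node-1.yml
ansible_host: 127.0.0.1
ansible_port: '2223'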

Finally, with the updated Ansible inventory, we can perform the provisioning step by preparing the hosts and running the playbooks.

Ansible targets require python
cd ansible
# the raw module works without Python on the target, so we use it to bootstrap python3
ansible --become -m raw -f 1 \
    -a "ln -sf /bin/bash /bin/sh && apt-get update && apt-get install -y python3 gnupg2" \
    --inventory=inventory/ignite/hosts.yml all
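
Before running the playbooks, Ansible's ping module provides a quick smoke test: it confirms that each host is reachable over SSH and has a usable Python interpreter.

Verify Ansible connectivity
ansible all -i inventory/ignite/hosts.yml -m ping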
Execute playbook
ansible-playbook -i inventory/ignite/hosts.yml ntp.yml docker.yml

Test the hosts

The SSH port of each Firecracker VM has been bound to a local port on the host. We use this SSH socket address along with the SSH key to connect to each VM and run the Chef InSpec tests.

Run all tests for all hosts
cd inspec
ports=( "2222""2223" "2224" )
for port in "${ports[@]}"; do
    inspec exec ntp.rb docker.rb  \
        --target ssh://root@localhost:$port \
        --key-files ../ignite-footloose-key
done
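
Once all tests pass, the cluster can be disposed of as quickly as it was created (a sketch, assuming we change back to the directory holding the footloose configuration):

Delete the footloose ignite machines
cd ..
sudo footloose delete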

Wrapup

With this series of posts I have demonstrated various tools for implementing continuous delivery for infrastructure. By utilizing battle-proven technology such as Docker (or containers in general) or lightning-fast virtualization such as Firecracker, we can tremendously cut down the time required for provisioning.