Martin Ahrer

Thinking outside the box

docker-compose scripting

2017-02-07

This time we look at managing large numbers of compose projects.

When building complex infrastructure using docker-compose we soon end up with a mess of scripts for starting, updating, and stopping containers. I will try to describe an approach that has helped me get this done in a much more structured way.

First, I would put each compose project into its own directory. For example, these could be:

.
├─ docker-elk
├─ docker-jenkins
├─ docker-nexus
└─ docker-sonarqube
   └─ .env

Each could be updated individually (by pulling the repository) and have its configuration in a local .env file.
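Updating and restarting a single project then only takes a pull and a compose run. A minimal sketch (the project name is just an example and assumes the directory is a git clone):

cd docker-jenkins
git pull
docker-compose pull
docker-compose up -d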

In order to run docker-compose for any of the projects, I would cd into the project directory and then run docker-compose up. For each project I would have to add the project name as an argument (docker-compose --project-name jenkins up) plus any other options that project requires. This starts to become hard to maintain, so my take on that is to put as much configuration as possible into the local .env as well.

COMPOSE_PROJECT_NAME=sonarqube
COMPOSE_HTTP_TIMEOUT=300
COMPOSE_FILE=docker-compose.yml:docker-compose-backup.yml:docker-compose-postgresdb.yml

Doing so, we can use a very simple docker-compose up command again, and there is no need to remember the configuration a particular project requires.
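To illustrate, with the .env above in place the two invocations below are roughly equivalent, and the second one is all we have to remember (a sketch using the sonarqube project from above):

# without the .env, every option has to be passed on the command line
docker-compose --project-name sonarqube \
  -f docker-compose.yml -f docker-compose-backup.yml -f docker-compose-postgresdb.yml up -d

# with the .env in the project directory this collapses to
cd docker-sonarqube
docker-compose up -d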

OK, so if we just wanted to bring up all projects, we would need a script that iterates over each directory. Let's do that.

Script compose.sh
#!/usr/bin/env bash
set -x

# Run the given docker-compose command (plus its arguments) in every docker-* project directory.
for f in ./docker-* ;
do
  pushd "$f"
  docker-compose "$1" "${@:2}"
  popd
done

With that script in place we can just run compose.sh up -d and all projects' containers will be brought up.
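The same script works for any other compose command as well; a few illustrative invocations:

./compose.sh pull       # pull the latest images for all projects
./compose.sh up -d      # (re)create and start all containers in the background
./compose.sh ps         # show the container status per project
./compose.sh down       # stop and remove all containers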

How about supporting customized commands for more complex projects? Let's look at the container startup for ELK (Elasticsearch, Logstash, Kibana). This involves much more than just running a single docker-compose command.

Script ./docker-elk/up.sh
#!/usr/bin/env bash
set -x

# Run a docker command either locally or, if DOCKER_MACHINE_NAME is set,
# on the remote docker-machine host via ssh.
function runDocker() {
    if [ -n "${DOCKER_MACHINE_NAME}" ] ; then
        docker-machine ssh ${DOCKER_MACHINE_NAME} "$1"
    else
        $1
    fi
}

# Default the compose project name to 'elk' unless it is already set.
: ${COMPOSE_PROJECT_NAME:=elk}
export COMPOSE_PROJECT_NAME
docker-compose pull
docker-compose build

# Create the 'elk' network once so that logspout can join it as well.
network=$(docker network ls -q --filter name=elk)
if [ -z "${network}" ]; then
    docker network create elk
fi

docker-compose up -d elasticsearch logstash

# Replace any existing logspout container with a fresh one that forwards
# container logs to logstash via syslog.
logspout_cid=$(runDocker "docker ps -a -q --filter name=logspout")
if [ -n "${logspout_cid}" ]; then
    runDocker "docker stop ${logspout_cid}"
    runDocker "docker rm ${logspout_cid}"
fi
runDocker "docker run -d --name logspout --restart always --network elk -v /var/run/docker.sock:/var/run/docker.sock -e SYSLOG_FORMAT=rfc3164 gliderlabs/logspout:v3.1 syslog://logstash:51415"

docker-compose up -d kibana
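The : ${COMPOSE_PROJECT_NAME:=elk} line only sets a default, so the project name can still be overridden from the environment; likewise, setting DOCKER_MACHINE_NAME makes the plain docker commands run on a docker-machine host. A sketch (the values are just examples):

# use the default project name 'elk' against the local Docker daemon
./up.sh

# override the project name and run the docker commands on a docker-machine host
COMPOSE_PROJECT_NAME=elk-staging DOCKER_MACHINE_NAME=staging ./up.sh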

Let’s extend compose.sh to support this kind of custom startup.

We just add a check: if, for a given command, an equally named script is available in the project directory, we execute it and pass along the remaining arguments; otherwise we fall back to docker-compose.

Script compose.sh
#!/usr/bin/env bash
set -x

# For each docker-* project: if a script named after the command exists
# (e.g. up.sh for 'up'), run it with the remaining arguments; otherwise
# fall back to running docker-compose with the full command line.
for f in ./docker-* ;
do
  pushd "$f"
  if [ -x "$1.sh" ]; then
    ./"$1.sh" "${@:2}"
  else
    docker-compose "$1" "${@:2}"
  fi
  popd
done

So this is a very simple approach that requires custom scripting only where it is really needed.
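As a quick check of the dispatch logic: with the directory layout from the beginning and the ELK script above, a single invocation now behaves like this (sketch):

./compose.sh up -d
# docker-elk        -> runs ./up.sh (custom startup including logspout)
# docker-jenkins    -> runs docker-compose up -d
# docker-nexus      -> runs docker-compose up -d
# docker-sonarqube  -> runs docker-compose up -d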