Continuous delivery infrastructure as code
This is part 1 of a series of posts covering Docker in a Continuous Delivery environment.
Today I’m showing how simple it is to set up a continuous delivery build pipeline infrastructure using Docker. In an upcoming post we will look at using Jenkins pipeline as code to create Docker images and run integration tests against Docker containers. The series will close with an article explaining how we can move all containers built throughout this series of posts into a Docker Swarm environment.
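To give a taste of how little is needed, here is a minimal sketch that brings up a Jenkins master with plain Docker commands; the image, ports and volume name are the official Jenkins defaults, not something prescribed by this series:
# minimal sketch: a Jenkins master on Docker, using the official image and its default ports
docker volume create jenkins_home
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts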
docker-compose scripting
This time we look at managing large numbers of compose projects.
When building complex infrastructure with docker-compose, we soon end up with a mess of scripts for starting, updating, and stopping containers. I will describe an approach that has helped me get this done in a structured way.
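To give an idea of the direction (the directory layout and the wrapper below are illustrative assumptions, not the exact approach described in this post), a single thin script can apply the same docker-compose command to many projects:
#!/usr/bin/env bash
# sketch: apply one docker-compose command to every project directory
# assumes each subdirectory of ./projects contains a docker-compose.yml
set -euo pipefail
cmd="${1:-up -d}"
for project in projects/*/; do
  name="$(basename "$project")"
  echo "==> ${name}: docker-compose ${cmd}"
  (cd "$project" && docker-compose -p "$name" ${cmd})
done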
docker-compose modularization
In this blog post we look at how to create modular compose projects.
With docker-compose we can describe the set of containers and container-related resources, such as networks and volumes, that make up an application. All of this usually goes into a single docker-compose.yml file.
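One building block for modularization is that docker-compose can merge several files into one effective project; the file names below are just examples:
# merge a base file with an add-on file (later files override/extend earlier ones)
docker-compose -f docker-compose.yml -f docker-compose.monitoring.yml up -d
# the same can be expressed once via the COMPOSE_FILE environment variable
export COMPOSE_FILE=docker-compose.yml:docker-compose.monitoring.yml
docker-compose up -d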
Bean Mapping of Transfer Objects
Over the past years I have worked on multiple projects where the so-called Data Transfer Object (DTO) pattern has been used heavily. It has long been a core pattern in the JEE world. The pattern certainly has its justification in the right cases, but in many cases I have seen it applied inappropriately. This blog posting by Adam Bien, a JEE advocate, outlines the cases where it should be considered useful. However, when applied, the pattern comes at the cost of additional mapping code to maintain and some extra CPU cycles spent doing the mapping.
# enable Docker BuildKit for the following image builds
export DOCKER_BUILDKIT=1
# build the ansible, footloose and inspec tool images from their respective Dockerfiles
docker image build -t martinahrer/ansible:alpine -f Dockerfile_ansible .
docker image build -t martinahrer/footloose:alpine -f Dockerfile_footloose .
docker image build -t martinahrer/inspec:alpine -f Dockerfile_inspec .
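Assuming no custom entrypoint is set in these Dockerfiles and the tools end up on the images' PATH (both assumptions, not stated above), the builds could be smoke-tested like this:
docker run --rm martinahrer/ansible:alpine ansible --version
docker run --rm martinahrer/footloose:alpine footloose version
docker run --rm martinahrer/inspec:alpine inspec version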
cd src/main/
drone exec --trusted drone.io/.drone.yml   # test step
# start the nomad agents (each runs in the foreground, so use a separate terminal per agent)
nomad agent -config server.hcl
nomad agent -config client1.hcl
nomad agent -config client2.hcl
nomad agent -config client3.hcl
# open the Nomad UI
open http://localhost:4646
The agents have no explicit Consul configuration, so if Consul is running with its default settings, Nomad will connect to it using those defaults (https://www.nomadproject.io/docs/configuration/consul).
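For a local test this can be as simple as running a Consul agent in dev mode next to the Nomad agents (shown here as a convenience, not a step taken from the setup above):
# a dev-mode Consul agent listens on the default ports (HTTP API on 8500),
# which is what the Nomad agents will look for when no consul block is configured
consul agent -dev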