
HashiCorp Nomad native workload

2023-02-25 · 3 min read · Martin Ahrer

Earlier this week I presented my talk "Need something simpler than Kubernetes?" to the CNCF Linz community.

This post is a follow-up explaining how to deploy a native workload to Nomad. Nomad is unique because of its hybrid support for many types of workload. While Kubernetes has been designed for managing container-only workloads, Nomad’s pluggable task drivers allow scheduling container, Java, native (raw), QEMU/KVM, and other workloads.

Today containers are fairly dominant and have become the standard, just as Kubernetes has become the first choice for managing container workloads. Still, many of us have to maintain legacy applications running on a Java Virtual Machine (JVM) and can’t or don’t want to package them in a container image. Or maybe you have already moved on to the next big thing: Java applications compiled into native images using GraalVM AOT compilation.

Whatever your situation is: wouldn’t it be awesome to have a single platform that caters for all these scenarios?

HashiCorp Nomad addresses all of these deployment formats, so we can run Java applications, containers, and native applications side by side within a single workload orchestrator.

In one of my previous posts I explained how to deploy a container workload to Nomad. Today I will show how to deploy the very same Spring Boot-based Java application as a native application.

Spring Boot’s GraalVM native images

With the recent release of Spring Boot 3.x we have first-class support for GraalVM AOT compilation, which converts a Spring Boot application into a native image.

Why would we want to do that? Booting a JVM-based application takes quite some time. This may be an issue in a system where we want to quickly scale out and start additional instances of our workload. For example, the application we are using for this deployment takes ~7 seconds to start as a JVM application. When started as a native image, startup time drops to ~0.4 seconds.

Deploying

Let’s now look at the deployment descriptor and the changes required to run a native workload that was previously scheduled as a container.

Task driver docker

With a container engine like Docker available on a Nomad client, the Docker task driver can be activated and configured.

driver = "docker"
config {
    image = "${var.api_image_repository}:${var.api_image_tag}"
    auth {
        username = "${var.registry_auth_username}"
        password = "${var.registry_auth_password}"
    }
    ports = [
        "http"
    ]
    dns_servers = [
        "1.1.1.1",
        "1.0.0.1"
    ]
}
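The "http" port label referenced in the ports list above is declared in the job group’s network stanza. Below is a minimal sketch of such a stanza; the group name and the container port 8080 are assumptions and not taken from the original job file.

group "api" {
    network {
        # Declares the port label "http" used by the task's ports list.
        # Without a "static" attribute Nomad assigns a dynamic host port
        # and maps it to container port 8080 via "to".
        port "http" {
            to = 8080
        }
    }

    # task definition follows here
}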
Task driver raw_exec

For deploying a native workload we simply remove the Docker driver configuration and replace it with an exec or raw_exec configuration. Here the full path to the native binary is specified through a variable. In production we would more likely use an artifact stanza for downloading the binary from a remote location (see the sketch further below).

driver = "raw_exec" (1)

config {
    command = "${var.raw_exec_command_dir}continuousdelivery"
}
(1) This deployment is configured for macOS, so we have to use raw_exec rather than exec.
In a production environment it is recommended to use the exec driver, which provides an isolated execution environment using chroot and cgroups and therefore requires Linux.
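For such a Linux production setup, the task could combine the exec driver with an artifact stanza that downloads the native binary before the task starts. The following is only a sketch; the download URL is a hypothetical placeholder, not the real artifact location.

artifact {
    # Hypothetical URL, replace it with the real location of the native binary.
    source      = "https://example.com/releases/continuousdelivery"
    # Artifacts are downloaded relative to the task working directory.
    destination = "local/"
}

driver = "exec"

config {
    # The binary was downloaded into local/, so it can be referenced
    # relative to the task working directory.
    command = "local/continuousdelivery"
}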

Reasons for using Nomad or Kubernetes

In my talk "Need something simpler than Kubernetes?" to the CNCF Linz community I briefly compared Nomad and Kubernetes and also discussed why Nomad may sometimes be the better choice.