Martin Ahrer

Thinking outside the box

Spring Boot OpenTelemetry

2025-03-14 8 min read Martin

As applications and architectures become increasingly complex, monitoring application health is more critical than ever. Developers, support teams, and site reliability engineers need real-time insights into performance metrics, logs, and execution traces to ensure smooth operations.

Traditionally, achieving this required integrating multiple frameworks and tools for data collection, analysis, and visualization—often leading to fragmented solutions.

To address this challenge, APM (Application Performance Monitoring) providers and open-source contributors have collaborated to create the OpenTelemetry specification. This open standard helps unify and standardize telemetry data collection, making it easier to integrate and interoperate across different monitoring tools and platforms.

Spring Boot & OpenTelemetry

Spring Boot, one of the most widely used frameworks for building Java applications, has strong support for OpenTelemetry. With the introduction of Micrometer Tracing in Spring Boot 3, applications can now seamlessly instrument distributed traces using OpenTelemetry as the backend. Micrometer Tracing replaces Spring Cloud Sleuth, offering a more standardized and vendor-neutral approach to observability.

Developers can enable OpenTelemetry in Spring Boot applications by adding the necessary dependencies and configuring an OpenTelemetry exporter. This allows applications to send traces and metrics to observability backends such as Jaeger, Zipkin, or Prometheus with minimal effort. Additionally, Spring Boot’s native integration with Micrometer ensures that key application metrics are automatically collected, making it easier to monitor performance and troubleshoot issues.

To further enhance observability, OpenTelemetry provides compatibility with a wide range of telemetry collection and storage solutions. The OpenTelemetry Collector acts as a central component for receiving, processing, and exporting telemetry data to different backends. For metrics storage, VictoriaMetrics offers a high-performance time-series database, while VictoriaLogs provides efficient log storage and querying capabilities. For distributed tracing, Grafana Tempo enables scalable trace storage and analysis. Together, these components create a powerful and flexible observability stack that integrates seamlessly with OpenTelemetry, making it easier to monitor and analyze system performance in cloud-native environments.

By leveraging OpenTelemetry with Spring Boot and compatible observability tools, engineering teams can gain deeper insights into application behavior, improve troubleshooting, and ensure better performance and reliability across distributed systems.

Adding OpenTelemetry to a Spring Boot Application

Assuming we already have a Spring Boot application, we only need to add a few dependencies (using Gradle). I’m using an application that I frequently use for DevOps demos; it is publicly available at https://github.com/MartinAhrer/continuousdelivery, and all of the code shown here is available in that repository.

build.gradle
dependencies {
    implementation(platform(SpringBootPlugin.BOM_COORDINATES))

    // ... showing only the OpenTelemetry-related dependencies

    implementation(libs.spring.boot.starter.actuator)
    implementation(platform(libs.opentelemetry.instrumentation.bom))
    implementation(libs.opentelemetry.exporter.otlp)
    implementation(libs.opentelemetry.spring.boot.starter)
    implementation(libs.micrometer.tracing.bridge.otel)
    implementation(libs.micrometer.registry.otlp)
}

I usually prefer Gradle’s version catalog for managing dependencies; here is the corresponding TOML file.

libs.versions.toml
[libraries]
micrometer-tracing-bridge-otel = { module = "io.micrometer:micrometer-tracing-bridge-otel" }
micrometer-registry-otlp = { module = "io.micrometer:micrometer-registry-otlp" }

opentelemetry-exporter-otlp = { module = "io.opentelemetry:opentelemetry-exporter-otlp" }
opentelemetry-instrumentation-bom = { module = "io.opentelemetry.instrumentation:opentelemetry-instrumentation-bom", version = "2.12.0" }
opentelemetry-spring-boot-starter = { module = "io.opentelemetry.instrumentation:opentelemetry-spring-boot-starter" }

At this point, we can launch our Spring Boot application, and it will immediately begin emitting logs, metrics, and traces. By default, telemetry data is sent to http://localhost:4318 using OTLP (the OpenTelemetry Protocol) over HTTP, ensuring seamless integration with observability backends.
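If the collector is not running on localhost, the exporter endpoint can be overridden in the application configuration. Below is a minimal sketch using the OpenTelemetry SDK autoconfigure properties that the OpenTelemetry Spring Boot starter picks up; the endpoint and protocol values shown are just the defaults spelled out explicitly, and the service name is an example.

application.yml (sketch)
otel:
  service:
    name: continuousdelivery            # reported as the service.name resource attribute
  exporter:
    otlp:
      endpoint: http://localhost:4318   # OTLP/HTTP endpoint of the collector
      protocol: http/protobuf           # use the HTTP variant of OTLP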

To enhance visibility into OpenTelemetry’s impact on logging, let’s update the logging configuration. The Spring Boot OpenTelemetry integration not only captures tracing information but also makes it available to the active logging framework—Logback in this case.

This is achieved through Mapped Diagnostic Context (MDC), which enriches log messages with trace context information. When properly configured, each log entry will include relevant trace identifiers, aiding in distributed tracing and root cause analysis. More details on MDC can be found in the Logback documentation.

By leveraging MDC, we ensure that logs remain correlated with traces, providing deeper insights into application behavior and facilitating effective debugging in distributed systems.

Logging pattern configuration (Spring Boot, applied to Logback)
logging:
  pattern:
    level: "%5p [${spring.application.name:}:%X{traceId:-},%X{spanId:-}]"

As we create some activity against the running server (using the IntelliJ HTTP client script below), we will see logging output like the following on the console.

IntelliJ HTTP client script
GET {{resource-server}}/api/companies
Accept: application/json


> {%
    client.test("Request executed successfully", function() {
        client.assert(response.status === 200, "Expected response status 200 but was: " + response.status);
    });
%}
###
2025-03-15T09:49:46.577+01:00 DEBUG [continuousdelivery:3a2c8860430871099c88b51caec09dc8,dc3e6cd988c470c1] 3891 --- [continuousdelivery] [nio-8080-exec-4] [3a2c8860430871099c88b51caec09dc8-dc3e6cd988c470c1] t.p.B3PropagatorExtractorMultipleHeaders : Invalid TraceId in B3 header: null'. Returning INVALID span context.
2025-03-15T09:49:46.577+01:00 DEBUG [continuousdelivery:3a2c8860430871099c88b51caec09dc8,dc3e6cd988c470c1] 3891 --- [continuousdelivery] [nio-8080-exec-4] [3a2c8860430871099c88b51caec09dc8-dc3e6cd988c470c1] t.p.B3PropagatorExtractorMultipleHeaders : Invalid TraceId in B3 header: null'. Returning INVALID span context.
2025-03-15T09:49:46.577+01:00 DEBUG [continuousdelivery:3a2c8860430871099c88b51caec09dc8,feb63e381b1ed2e5] 3891 --- [continuousdelivery] [nio-8080-exec-4] [3a2c8860430871099c88b51caec09dc8-feb63e381b1ed2e5] o.s.web.servlet.DispatcherServlet        : GET "/api/companies", parameters={}

We can observe that the OpenTelemetry SDK is generating both a traceId and a spanId, which are essential for distributed tracing. However, we also notice log messages such as:

"Invalid TraceId in B3 header: null. Returning INVALID span context."

This indicates that the application is expecting a propagated trace context but is not receiving one in the incoming request.

OpenTelemetry allows injecting tracing information into client requests, enabling trace propagation across distributed services. Since our previous HTTP request did not include this tracing context, it resulted in a new, unlinked trace instead of continuing an existing one.

Next, let’s explore how we can explicitly propagate tracing information in outbound requests, ensuring proper trace continuity across service boundaries.

IntelliJ HTTP client script
GET {{resource-server}}/api/companies
Accept: application/json
traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-00

> {%
    client.test("Request executed successfully", function() {
        client.assert(response.status === 200, "Expected response status 200 but was: " + response.status);
    });
%}
###

The traceparent HTTP header is defined by the W3C Trace Context specification, which OpenTelemetry uses for context propagation. Its value has the form version-trace-id-parent-span-id-trace-flags. When present in an incoming request, this header allows the application to continue the existing trace context instead of generating new IDs. This is crucial for propagating traces across microservices, ensuring that all logs, metrics, and traces remain correlated.
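When one of our own Spring Boot services calls another service, we do not have to set this header by hand. The following is a rough sketch (not part of the demo repository; it assumes Spring Boot 3.2+ and the setup above): an outbound call made through the auto-configured RestClient.Builder is observed by Micrometer, so the active trace context is injected as a traceparent header automatically. The class name and target URL are made up for illustration.

Propagating the trace context in outbound calls (sketch)
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestClient;

@Service
public class CompanyClient {

	private final RestClient restClient;

	public CompanyClient(RestClient.Builder builder) {
		// the auto-configured builder is wired with the ObservationRegistry,
		// so requests built from it are traced and carry the current context
		this.restClient = builder.baseUrl("http://localhost:8080").build();
	}

	public String fetchCompanies() {
		// the span active in this thread becomes the parent of the outbound request
		return restClient.get()
				.uri("/api/companies")
				.retrieve()
				.body(String.class);
	}
}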

Spring Boot’s OpenTelemetry integration is highly customizable, allowing us to tailor tracing behavior to our needs. To demonstrate this flexibility, let’s modify our application to return trace information in HTTP responses. This can be useful for diagnostics, enabling clients to inspect trace details for troubleshooting.

By exposing X-Trace-Id in responses, we provide an easy way for consumers to correlate their requests with logs and traces captured on the server. Let’s implement this next.

Adding the X-Trace-Id header to the response
import io.micrometer.observation.Observation;
import io.micrometer.observation.ObservationRegistry;
import io.micrometer.tracing.Span;
import io.micrometer.tracing.Tracer;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.ServerHttpObservationFilter;

@Component
public class TraceIdObservationFilter extends ServerHttpObservationFilter {

	private final Tracer tracer;

	public TraceIdObservationFilter(Tracer tracer, ObservationRegistry observationRegistry) {
		super(observationRegistry);
		this.tracer = tracer;
	}

	@Override
	protected void onScopeOpened(Observation.Scope scope, HttpServletRequest request,
								 HttpServletResponse response) {
		// expose the trace id of the current span so clients can correlate
		// their requests with server-side logs and traces
		Span currentSpan = this.tracer.currentSpan();
		if (currentSpan != null) {
			response.setHeader("X-Trace-Id", currentSpan.context().traceId());
		}
	}
}

To show the response’s X-Trace-Id header, we run the request again with httpie (or curl):

http localhost:8080/api/companies
HTTP/1.1 200
Connection: keep-alive
Content-Type: application/hal+json
Date: Sat, 15 Mar 2025 09:06:22 GMT
Keep-Alive: timeout=60
Transfer-Encoding: chunked
Vary: Origin
Vary: Access-Control-Request-Method
Vary: Access-Control-Request-Headers
X-Trace-Id: 372f3fa9302740e4c4c03c234dc41071

For now, the Spring Boot application configuration is done, and we can look at the infrastructure we need to set up for receiving telemetry data.

OpenTelemetry Collector

Direct Telemetry Storage vs. OpenTelemetry Collector

In a basic setup, we can send logs, metrics, and traces directly to their respective storage backends, making it a straightforward model for development environments or small-scale deployments:

  • VictoriaLogs → Stores application logs

  • VictoriaMetrics → Stores application metrics

  • Grafana Tempo → Stores trace data

This setup works well for simple use cases, but OpenTelemetry enables a more powerful approach by introducing the OpenTelemetry Collector as an intermediary.

Why Use the OpenTelemetry Collector?

Instead of sending telemetry data directly, the collector acts as a buffer and processor, allowing applications to offload data quickly. It provides:

  • Efficient Data Handling – Handles batching and retries, preventing data loss in case of temporary storage outages.

  • Performance Optimization – Reduces the load on application instances by offloading telemetry processing.

  • Data Enrichment – Enhances telemetry with additional metadata before exporting.

  • Security & Compliance – Supports encryption, anonymization, and filtering of sensitive data before forwarding.

  • Flexible Routing – Routes telemetry to multiple observability backends, ensuring vendor-agnostic compatibility.

This approach scales much better in production environments, improving observability while keeping applications lightweight and performant.

I’m going to use Docker Compose to set up the otel-collector.

services:
  opentelemetry-collector:
    image: otel/opentelemetry-collector-contrib:0.118.0
    command: [--config=/etc/opentelemetry-collector.yml]
    volumes:
      - ./provisioning/opentelemetry-collector/opentelemetry-collector.yml:/etc/opentelemetry-collector.yml
    ports:
      - "4317:4317" # OTLP gRPC receiver
      - "4318:4318" # OTLP http receiver
      - "55679:55679" # zpages extension

opentelemetry-collector configuration (opentelemetry-collector.yml)
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

#https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor
processors:
  #https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/attributesprocessor
  attributes:
    actions:
      - key: env
        action: insert
        value: production

exporters:
  debug:
    verbosity: detailed

  #otlphttp exporter
  #https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/otlphttpexporter/README.md

  otlphttp/victoria-logs:
    compression: gzip
    endpoint: http://victoria-logs:9428/insert/opentelemetry
    tls:
      insecure: true

  otlphttp/victoria-metrics:
    compression: gzip
    endpoint: http://victoria-metrics:8428/opentelemetry/
    tls:
      insecure: true

  # otlphttp exporter (Tempo also accepts OTLP over gRPC on tempo:4317 via the otlp exporter)
  # https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/otlphttpexporter/README.md
  otlphttp/tempo:
    endpoint: http://tempo:4318
    tls:
      insecure: true


service:
  telemetry:
    logs:
      level: DEBUG

  extensions:
    - zpages
  pipelines:
    logs:
      receivers: [otlp]
      processors: [attributes]
      exporters: [debug, otlphttp/victoria-logs]
    metrics:
      receivers: [otlp]
      processors: [attributes]
      exporters: [debug, otlphttp/victoria-metrics]
    traces:
      receivers: [otlp]
      exporters: [debug, otlphttp/tempo]

extensions:
  zpages:
    endpoint: "0.0.0.0:55679"

When integrating OpenTelemetry, the OpenTelemetry Collector plays a crucial role in processing and routing telemetry data. Installing it introduces the following key components:

  • Receivers are responsible for ingesting telemetry data from various sources. These sources may include application instrumentation, agent-based monitoring, or third-party integrations.

  • Exporters handle forwarding telemetry data to OpenTelemetry-compliant observability backends. However, they can also bridge to systems that do not natively support OpenTelemetry, allowing seamless integration with existing monitoring solutions. Some common exporters configured include:

    • otlphttp/victoria-logs – Sends logs to VictoriaLogs, a high-performance log storage and querying backend.

    • otlphttp/victoria-metrics – Exports metrics to VictoriaMetrics, enabling efficient storage and analysis.

    • otlphttp/tempo – Forwards tracing data to Grafana Tempo, a scalable distributed tracing backend.

  • Service & Pipelines: the OpenTelemetry Collector’s service component orchestrates data flow using pipelines that connect receivers, processors, and exporters.

    • Receivers collect raw telemetry data.

    • Processors filter, enrich, or transform the incoming telemetry before forwarding it.

    • Exporters deliver the processed data to observability platforms.

This architecture ensures high flexibility, allowing engineers to fine-tune telemetry processing based on application needs.
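The configuration above does not yet enable batching, even though the “Efficient Data Handling” point earlier relies on it. As a hedged sketch, the collector’s standard batch processor could be added roughly like this (the size and timeout values are illustrative, not tuned recommendations):

Adding the batch processor (sketch)
processors:
  batch:
    send_batch_size: 512    # flush once this many items have been queued
    timeout: 5s             # ...or after this much time has passed

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [attributes, batch]   # batch after enrichment, before export
      exporters: [debug, otlphttp/tempo]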

Wrapping Up

In this article, we explored how simple and effective it is to set up OpenTelemetry, integrate it into a Spring Boot application, and configure the OpenTelemetry Collector to handle telemetry data efficiently.

With this foundation in place, we now have:

  • Tracing, metrics, and logs seamlessly flowing from our application.

  • The OpenTelemetry Collector processing and forwarding telemetry data.

  • A scalable observability architecture ready for further customization.

What remains is setting up the observability backends—VictoriaLogs, VictoriaMetrics, and Grafana Tempo—to store and visualize our logs, metrics, and traces. We’ll dive into that in our next post. Stay tuned!