Martin Ahrer

Thinking outside the box

OpenTelemetry Backend

2025-03-29 3 min read Martin

The previous post introduced OpenTelemetry and the OpenTelemetry Collector (aka otel-collector), which acts as a gateway to the backends persisting the emitted log, metric, and trace data.

In this post we take a more detailed look at the collector's configuration and how it connects to the various databases.

We also discuss the backend components used.

OpenTelemetry Collector

Direct Telemetry Storage vs. OpenTelemetry Collector

In a basic setup, we can send logs, metrics, and traces directly to their respective storage backends, making it a straightforward model for development environments or small-scale deployments:

  • VictoriaLogs → Stores application logs

  • VictoriaMetrics → Stores application metrics

  • Grafana Tempo → Stores trace data

This setup works well for simple use cases, but OpenTelemetry enables a more powerful approach by introducing the OpenTelemetry Collector as an intermediary.

Why Use the OpenTelemetry Collector?

Instead of sending telemetry data directly, the collector acts as a buffer and processor, allowing applications to offload data quickly. It provides:

  • Efficient Data Handling – Handles batching and retries, preventing data loss in case of temporary storage outages.

  • Performance Optimization – Reduces the load on application instances by offloading telemetry processing.

  • Data Enrichment – Enhances telemetry with additional metadata before exporting.

  • Security & Compliance – Supports encryption, anonymization, and filtering of sensitive data before forwarding.

  • Flexible Routing – Routes telemetry to multiple observability backends, ensuring vendor-agnostic compatibility.

This approach scales much better in production environments, improving observability while keeping applications lightweight and performant.
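The batching and retry behaviour mentioned above is configured on the collector itself. A minimal sketch of what that could look like, assuming a hypothetical backend called `example-backend` (the component names here are illustrative and not part of the setup shown later in this post):

```yaml
processors:
  # Group telemetry into batches before exporting to reduce backend load.
  batch:
    send_batch_size: 1024
    timeout: 5s

exporters:
  otlphttp/example-backend:
    endpoint: http://example-backend:4318
    # Retry with exponential backoff if the backend is temporarily unavailable.
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_elapsed_time: 300s
    # Buffer outgoing data in memory while the backend is down.
    sending_queue:
      enabled: true
      queue_size: 1000
```

With a configuration along these lines, a short storage outage costs no data: the collector queues and retries while the application keeps emitting telemetry at full speed.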

OpenTelemetry Collector container configuration
services:
  opentelemetry-collector:
    image: otel/opentelemetry-collector-contrib:0.118.0
    command: [--config=/etc/opentelemetry-collector.yml]
    volumes:
      - ./provisioning/opentelemetry-collector/opentelemetry-collector.yml:/etc/opentelemetry-collector.yml
    ports:
      - "4317:4317" # OTLP gRPC receiver
      - "4318:4318" # OTLP http receiver
      - "55679:55679" # zpages extension
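With these ports published, an instrumented application only needs to point its OTLP exporter at the collector. A hypothetical application service in the same Compose file might look like this (the service and image names are made up; the environment variables are the standard OpenTelemetry SDK settings):

```yaml
services:
  demo-app:
    image: example/demo-app:latest
    environment:
      # Standard OpenTelemetry SDK environment variables
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://opentelemetry-collector:4318
      - OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
      - OTEL_SERVICE_NAME=demo-app
```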

Pipeline

Configuring the collector essentially means building a pipeline that passes incoming telemetry data through a chain of components. A receiver accepts data using a well-known protocol and passes it on to one or more processors that can manipulate (filter, enrich) the data. Finally, the data goes through a set of exporters that distribute it to the respective backends.

Pipeline configuration
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [attributes]
      exporters: [debug, otlphttp/victoria-logs]
    metrics:
      receivers: [otlp]
      processors: [attributes]
      exporters: [debug, otlphttp/victoria-metrics]
    traces:
      receivers: [otlp, zipkin]
      exporters: [debug, otlphttp/tempo]
  telemetry:
    logs:
      level: DEBUG

The pipeline connects all configured receivers, processors, and exporters.

Receiver

Receiver configuration
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

The receivers block configures the APIs offered for data ingestion. The otlp entry configures the endpoints accepting telemetry data via the OTLP protocol; here the OTLP receiver accepts traffic over both HTTP and gRPC.

For more receiver types, look at the opentelemetry-collector-contrib project, which offers a fairly large set of additional receivers (e.g. jaeger, prometheus).

A receiver will not be active until it is included in a pipeline.
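Note that the traces pipeline shown earlier also lists a zipkin receiver, whose configuration is not included in the snippet above. With default settings it would look roughly like this (the Zipkin receiver listens on port 9411 by default):

```yaml
receivers:
  # Accepts spans in Zipkin format, e.g. from legacy instrumentation.
  zipkin:
    endpoint: 0.0.0.0:9411
```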

Processor

Processor configuration
#https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor
processors:
  #https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/attributesprocessor
  attributes:
    actions:
      - key: env
        action: insert
        value: production

A processor is responsible for processing incoming telemetry data. This includes:

  • enriching

  • filtering (dropping), and

  • transforming

The processor configuration above enriches telemetry data by adding the key-value pair env=production.

The opentelemetry-collector-contrib project offers additional processor implementations.
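As an example of filtering, a contrib filter processor could drop noisy telemetry before it reaches the backends. A sketch, assuming spans from a health-check endpoint should be discarded (the attribute and route shown are illustrative, and the processor would still need to be added to a pipeline):

```yaml
processors:
  filter/drop-health-checks:
    error_mode: ignore
    traces:
      span:
        # Drop spans produced by health-check requests (OTTL condition)
        - 'attributes["http.route"] == "/actuator/health"'
```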

Exporter

Exporter configuration
exporters:
  debug:
    verbosity: detailed

  otlphttp/victoria-logs:
    compression: gzip
    endpoint: http://victoria-logs:9428/insert/opentelemetry
    tls:
      insecure: true

  otlphttp/victoria-metrics:
    compression: gzip
    endpoint: http://victoria-metrics:8428/opentelemetry/
    tls:
      insecure: true

  otlphttp/tempo:
    endpoint: http://tempo:4318
    tls:
      insecure: true

The exporter configuration above contains the setup for the telemetry backends. I have chosen products that implement OTLP-compliant endpoints. However, the opentelemetry-collector-contrib project offers plenty of additional exporters to choose from (e.g. prometheus, zipkin).
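For example, if a pull-based Prometheus setup were preferred over OTLP push, the contrib prometheus exporter could expose a scrape endpoint instead (the port and namespace here are illustrative):

```yaml
exporters:
  # Exposes collected metrics for Prometheus to scrape.
  prometheus:
    endpoint: 0.0.0.0:8889
    namespace: demo
```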

OpenTelemetry Alternatives

Now that we have a standard for telemetry, we can enjoy a variety of alternative offerings in the APM space.

For the OpenTelemetry Collector we need to mention