
DEPLOYMENT

Patterns you can apply to deploy the OpenTelemetry Collector. The OpenTelemetry Collector consists of a single binary that you can use in different ways, for different use cases. This section describes the deployment patterns, their use cases along with pros and cons, and best practices for Collector configurations in cross-environment and multi-backend deployments.

No Collector
The simplest pattern is not to use a collector at all. This pattern consists of applications instrumented with an OpenTelemetry SDK that export telemetry signals (traces, metrics, logs) directly to a backend.

Tradeoffs
Pros:
Simple to use (especially in a dev/test environment)
No additional moving parts to operate (in production environments)
Cons:
Requires code changes if collection, processing, or ingestion changes
Strong coupling between the application code and the backend
Limited number of exporters available per language implementation

Agent
The agent collector deployment pattern consists of applications (instrumented with an OpenTelemetry SDK using the OpenTelemetry Protocol, OTLP) or other collectors (using the OTLP exporter) that send telemetry signals to a collector instance running with the application or on the same host as the application (such as a sidecar or a DaemonSet).

Each client-side SDK or downstream collector is configured with the agent collector's location.
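For illustration, here is a minimal sketch of an agent collector configuration, assuming the application's SDK points at the agent (for example via OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317) and that the backend accepts OTLP; the backend endpoint shown is a placeholder, not a real address.

```yaml
# Minimal agent sketch: receive OTLP from the local application
# and forward it to a backend. Endpoints are placeholders.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: localhost:4317   # the SDK on the same host sends here

processors:
  batch:                           # batch data before exporting

exporters:
  otlp:
    endpoint: backend.example.com:4317   # hypothetical OTLP backend

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```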

Tradeoffs
Pros:
Simple to get started
Clear 1:1 mapping between application and collector
Cons:
Scalability concerns (both operational and load-related)
Inflexible

Gateway
The gateway collector deployment pattern consists of applications (or other collectors) sending telemetry signals
to a single OTLP endpoint provided by one or more collector instances running as a standalone service (for
example, a deployment in Kubernetes), typically per cluster, per data center or per region.
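A gateway instance can look much like the agent configuration above; the main differences are that it listens on all interfaces (so that a load balancer or other collectors can reach it) and runs as a standalone service. The following is a rough sketch with placeholder endpoints, not a definitive setup; metrics and logs pipelines would follow the same shape.

```yaml
# Rough gateway sketch: accept OTLP from many sources and export
# to the backend. Endpoints are placeholders.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # reachable from outside the pod/host

processors:
  batch:

exporters:
  otlp:
    endpoint: backend.example.com:4317   # hypothetical OTLP backend

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```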

In the general case, you can use an out-of-the-box load balancer to distribute the load amongst the collectors.

For use cases where the processing of the telemetry data has to happen in a specific collector, you would use a two-tiered setup: the first tier is a collector with a pipeline configured with the Trace ID/Service-name aware load-balancing exporter, and the second tier is the group of collectors that handle the scale-out. For example, you will need to use the load-balancing exporter when using the tail sampling processor, so that all spans for a given trace reach the same collector instance, where the tail sampling policy is applied.

Let’s have a look at such a case where we are using the load-balancing exporter:

1. In the app, the SDK is configured to send OTLP data to a central location.
2. A collector configured with the load-balancing exporter distributes the signals to a group of collectors.
3. The collectors are configured to send telemetry data to one or more backends.

Note: Currently, the load-balancing exporter only supports pipelines of the traces type.
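As a sketch of the first tier in such a setup, the load-balancing exporter can be configured with a static list of downstream collectors so that all spans of a trace land on the same instance. The second-tier hostnames, the insecure in-cluster connection, and the routing key shown here are assumptions for illustration.

```yaml
# First-tier sketch: route spans by trace ID to a fixed set of
# second-tier collectors. Hostnames are placeholders.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  loadbalancing:
    routing_key: traceID        # keep all spans of a trace together
    protocol:
      otlp:
        tls:
          insecure: true        # assumes plain-text OTLP inside the cluster
    resolver:
      static:
        hostnames:
          - collector-1.example.com:4317
          - collector-2.example.com:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [loadbalancing]
```

The second-tier collectors would then apply the tail sampling policy and export the sampled data to the backend.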

Tradeoffs
Pros:
Separation of concerns such as centrally managed credentials
Centralized policy management (for example, filtering certain logs or sampling)
Cons:
It’s one more thing to maintain, and one more thing that can fail (added complexity)
Added latency in case of cascaded collectors
Higher overall resource usage (costs)
