The Monitor

Ingest OTLP metrics directly into Datadog with the new OTLP Metrics API

Connor Ward

Many organizations rely on OpenTelemetry (OTel) to standardize observability across distributed systems. These organizations are at varying stages of adoption and are implementing OTel in complex environments with diverse configurations. To support this range of use cases, Datadog offers many ways to use OpenTelemetry. With the launch of the Datadog Distribution of the OpenTelemetry Collector (DDOT) and the semantic convergence between OTel and Datadog metric naming conventions, Datadog has made it easy for customers to bring their OTel data to the platform.

As more customers adopt modern, cloud-native architectures, there's a growing need to bring in OTel data from serverless and third-party SaaS service environments, where collectors can't be deployed. Today, we're expanding our OTel support by enabling the direct ingestion of OTLP metrics from these environments with the new Datadog OTLP Metrics API. This API expands your observability options by making it easier to capture and analyze telemetry data from serverless environments, cloud-provider managed OTel distributions, and applications emitting metrics in OTLP.

In this post, we’ll look at how the OTLP Metrics API enables you to send metrics to Datadog both from cloud-provider managed OTel collectors and from OTLP-native applications.

Send metrics from cloud-provider managed OTel collectors to Datadog

Cloud-provider managed OTel distributions, such as the AWS Distro for OpenTelemetry (ADOT) and the Azure Monitor OpenTelemetry Distro, emit metrics only in OTLP format. This makes the OTLP Metrics API the simplest way to get metrics from these collectors into Datadog.

With the new Datadog OTLP Metrics API capability, you can configure the OTel HTTP Exporter in managed collectors to point directly to Datadog’s endpoint. This enables you to ingest these metrics seamlessly, expanding your visibility across managed OTel environments.
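As a minimal sketch, a managed collector's OTLP/HTTP exporter can be pointed at the Datadog intake. The endpoint URL and authentication header below are illustrative placeholders, not confirmed values; consult the OTLP Metrics Intake Endpoint documentation for the exact endpoint and header for your Datadog site.

```yaml
exporters:
  otlphttp/datadog:
    # Placeholder endpoint -- replace with the documented Datadog
    # OTLP metrics intake URL for your site.
    metrics_endpoint: https://<your-datadog-otlp-intake>/v1/metrics
    headers:
      # Header name shown for illustration; verify against the docs.
      dd-api-key: ${env:DD_API_KEY}

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlphttp/datadog]
```

Because this uses the standard `otlphttp` exporter that managed distributions already ship, no Datadog-specific component needs to be installed in the collector.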

Send OTLP metrics from OTLP-native applications to Datadog

Many modern serverless and AI platforms—including Azure Functions, Google Cloud Functions, Cloudflare Workers, Vercel, and Anthropic’s Claude Code—export metrics in OTLP format by default. These environments often lack support for collectors, making it challenging to get metrics into Datadog.

The OTLP Metrics API solves this by providing a direct ingestion path. You can now send OTLP metrics from these platforms directly into Datadog without deploying a collector or the Datadog Agent. This enables you to monitor your serverless workloads alongside all of your other telemetry data in Datadog. For example, an application emitting OTLP metrics can post directly to the Datadog endpoint by using the OTel SDK, making this data immediately observable within your existing Datadog dashboards and alerts.
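To make the shape of such a request concrete, here is a standard-library-only Python sketch that builds a single gauge data point as an OTLP/JSON `ExportMetricsServiceRequest` and prepares an HTTP POST. The endpoint URL, the `dd-api-key` header name, and the metric and service names are all illustrative assumptions; in practice you would typically let an OTel SDK exporter produce this payload for you.

```python
import json
import time
import urllib.request

# Illustrative placeholder -- see the OTLP Metrics Intake Endpoint
# documentation for the real URL for your Datadog site.
ENDPOINT = "https://<your-datadog-otlp-intake>/v1/metrics"

def build_otlp_payload(name, value, attrs):
    """Build a minimal OTLP/JSON ExportMetricsServiceRequest containing
    one gauge data point, following the OTLP protobuf JSON mapping
    (64-bit timestamps are encoded as strings)."""
    now = str(time.time_ns())
    return {
        "resourceMetrics": [{
            "resource": {"attributes": [
                {"key": "service.name",
                 "value": {"stringValue": "checkout-service"}},  # example name
            ]},
            "scopeMetrics": [{
                "scope": {"name": "example.meter"},
                "metrics": [{
                    "name": name,
                    "gauge": {"dataPoints": [{
                        "timeUnixNano": now,
                        "asDouble": value,
                        "attributes": [
                            {"key": k, "value": {"stringValue": v}}
                            for k, v in attrs.items()
                        ],
                    }]},
                }],
            }],
        }]
    }

payload = build_otlp_payload("queue.depth", 42.0, {"env": "prod"})
request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json",
             "dd-api-key": "<YOUR_API_KEY>"},  # header name is illustrative
)
# urllib.request.urlopen(request)  # uncomment to actually send
```

The same payload structure applies whether it is produced by hand, by an OTel SDK, or by a platform runtime that speaks OTLP natively.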

Get started with the OTLP Metrics API

The OTLP Metrics API joins Datadog’s existing ingestion methods—including DDOT, Datadog’s OTel Exporter, the upstream OpenTelemetry Collector, and the Datadog Agent—to enable you to monitor metrics from serverless apps, managed collectors, and modern runtimes that don’t support traditional exporters. This new API rounds out Datadog's OTel API support, which also includes the OTel Logs API and the OTel Traces API (currently in Preview). Together, these options provide a flexible framework for routing OTLP metrics into Datadog.

By consolidating this data with your existing metrics, traces, and logs, you can monitor the health of your systems, correlate issues across services, and ensure visibility into every environment where your workloads run. To learn more, see the OTLP Metrics Intake Endpoint documentation.
