Introducing logs in Datadog
Published: November 21, 2017

As any software engineer knows, logs are a critical resource for troubleshooting complex problems in production. They provide rich, open-ended context about exactly what happened, when, and where, to help you quickly determine the why. But working with logs can be a pain, to say the least, whether you rely on targeted grep searches or run your own systems for log storage and management.

To make logs easy to collect, inspect, explore, and monitor, we are excited to announce that powerful log processing and management is now available in Datadog as a beta feature.

With this addition, we are bringing together in one platform all three pillars of observability: metrics, distributed tracing, and logs. By uniting these data types and making it possible to seamlessly pivot between them, we aim to make it easy to derive a deep, clear understanding of your infrastructure, applications, and the businesses they represent.

Exploring your logs

The new Log Explorer page is an efficient entry point for troubleshooting: a single place to browse, search, and inspect your logs. You can explore your most recent logs or jump to a historical time window, just as you can with timeseries metrics in Datadog. Full-text search (including with Datadog tags and wildcards) lets you narrow the scope to a particular log entry or type of log entry.

You can use facets to quickly filter your logs and drill down from a global view to a pinpoint view of any part of your infrastructure or applications. A facet is a pre-indexed attribute that can be used to easily query and aggregate over your logs. Facets are derived either from fields processed out of log content (such as severity or HTTP status code), or from tags associated with your infrastructure (such as host name or environment). So in just a few clicks, for instance, you can pull up all the logs from a particular service, localized to a specific region, for errors that have occurred in the past 15 minutes.
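For instance, a search that combines tags, a facet, and full-text terms might look like the line below. The service, environment, and attribute names are hypothetical, and the exact query syntax shown here is illustrative:

    service:checkout-api env:prod status:error @http.status_code:5* "payment declined"

A query like this restricts the stream to error-level logs from one service in one environment, keeps only entries whose parsed HTTP status code starts with 5, and then matches the quoted phrase in the message text.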

Raw and processed log messages in Datadog.

Each log entry displays the raw log message, as well as the structured data extracted via customizable pipelines. You can then add any of those structured data fields to the Log Explorer view to filter or pivot the table of log entries.

Unifying the views

Tagging has always been a core part of Datadog—it’s what allows you to quickly filter, aggregate, and analyze your metrics by any dimension of your infrastructure. Those same tags are automatically attached to the logs that are collected and stored in Datadog. That means that in a single click, you can jump from a perplexing change on a timeseries graph to a pre-filtered collection of logs from that exact segment of your infrastructure at that same moment in time.

Clicking through from a log entry to distributed tracing and application performance metrics in Datadog APM.

You can also move in the other direction, jumping from a log entry to a dashboard of resource metrics from that particular host, or to distributed tracing and APM from that particular service.

Viewing host-specific logs in the Datadog host map.
View logs from a particular combination of host and technology (here, MongoDB) in the host map.

You can add log streams to your dashboards and notebooks, so you can view filtered logs alongside relevant metrics from any service or application. You can also use the Datadog host map or infrastructure list to zoom in on logs from a particular technology running on a particular host.

Log collection and processing

Collecting logs and sending them to Datadog works the same way as sending timeseries metrics or request spans for distributed request traces: via our lightweight, open source Agent. Out of the box, the Datadog Agent collects logs from popular technologies like Apache, NGINX, HAProxy, IIS, Java, and MongoDB. Datadog also supports log collection from a number of AWS cloud services.

We are continually adding new log formats to make it easier to get rapid insights, but you can also direct the Agent to read from custom log files on your hosts or receive logs that are streamed over TCP or UDP. You can use pattern-matching rules to redact sensitive information from your logs before sending them to Datadog.
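As a rough sketch of what this looks like in practice, the Agent configuration below tails a hypothetical custom log file, listens for logs streamed over TCP, and masks card-like number sequences before anything leaves the host. The file path, service name, port, and pattern are all placeholders, and the exact keys may vary with your Agent version:

    # conf.d/myapp.d/conf.yaml -- hypothetical custom log source
    # (requires logs_enabled: true in datadog.yaml)
    logs:
      - type: file
        path: /var/log/myapp/app.log       # custom log file for the Agent to tail
        service: myapp                     # service tag attached to each entry
        source: custom                     # source, used to route logs to a pipeline
        log_processing_rules:
          - type: mask_sequences           # redact matches before they leave the host
            name: mask_card_numbers
            pattern: \d{4}-\d{4}-\d{4}-\d{4}
            replace_placeholder: "[REDACTED]"
      - type: tcp
        port: 10518                        # receive logs streamed over TCP
        service: myapp
        source: custom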

Whether your logs come from a built-in integration or from a custom log source, they pass through customizable processing pipelines that parse and enrich your logs. Using our Pipelines page, you can apply filters to determine which logs should pass through a given pipeline, then define a series of stepwise parsers that extract meaningful information or attributes from semi-structured text so you can use them as facets in the Log Explorer or in targeted log queries.
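To illustrate, a parsing step in a pipeline might turn a semi-structured line into queryable attributes using Grok-style rules; the log line, rule name, and attribute names below are hypothetical:

    # Raw log line
    2017-11-21 13:05:42 ERROR 503 /api/checkout 312ms

    # Grok-style parsing rule: extracts timestamp, severity,
    # HTTP status code, URL path, and request duration
    access_line %{date("yyyy-MM-dd HH:mm:ss"):timestamp} %{word:severity} %{integer:http.status_code} %{notSpace:http.url} %{integer:duration}ms

Once extracted, attributes like http.status_code or duration can be promoted to facets and used for filtering and aggregation in the Log Explorer.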

Log processing pipelines in Datadog.
Apply custom rules to filter, process, and enrich logs from any source.

Get started

Logs in Datadog are now in public beta, and we are continually adding new log integrations and features to the platform. Among the major enhancements in development are powerful log analytics capabilities and log-based alerting, which we look forward to sharing with our users very soon.

To see how you can get unprecedented clarity and depth of coverage by unifying your logs, metrics, and distributed request traces in one platform, register for the beta here.

