As your infrastructure and applications scale, so does the volume of your observability data. Managing a growing suite of tooling while balancing the need to mitigate costs, avoid vendor lock-in, and maintain data quality across an organization is becoming increasingly complex. With a variety of installed agents, log forwarders, and storage tools, the mechanisms you use to collect, transform, and route data should be able to evolve alongside your growth and meet the unique needs of your team. The freedom to experiment with new vendor tools without disrupting your production workflows is necessary to stay adaptable in the ever-evolving observability landscape.
Datadog’s managed Log Pipelines and Observability Pipelines can add flexibility to your existing workflow and help you assess new vendor solutions so that you can continue optimizing your log management.
Datadog’s fully managed Log Pipelines include support for Log Forwarding and don’t require you to deploy or host any new infrastructure. With managed Log Pipelines, you can easily customize your data ingestion, processing, and routing workflows to centralize log data from your entire stack, and then forward the data to the necessary platforms without any additional resource overhead. These minimal barriers to entry make managed Log Pipelines uniquely positioned to help you get set up as quickly as possible to evaluate new vendors.
Alternatively, Observability Pipelines is an on-premises solution that gives you complete, granular control over your observability data and is helpful for workflows that process high volumes of data. You will have more visibility into your logs and the entire routing process—from source to destination—so you can identify and intervene if there are any bottlenecks in the process. Observability Pipelines also allows you to redact any sensitive data before it leaves your environment, so you can meet your data residency and compliance requirements.
In this post, we will discuss how you can use managed Log Pipelines or Observability Pipelines to send the same logs to multiple destinations, evaluate new vendors, and have more flexibility and control over long-term, high-volume log management.
Using Datadog’s managed Log Pipelines or Observability Pipelines allows you to send the same data to multiple endpoints or destinations, also known as dual shipping. Dual shipping provides you with the flexibility to experiment with new log management solutions, data formats, or routing workflows.
Datadog’s managed Log Pipelines and Observability Pipelines can collect logs from any source, ingest them into Datadog Log Management, parse and enrich them, process them, and forward them to the destinations of your choice. For example, let’s say you want to evaluate whether Datadog’s Flex Logs solution is the right fit for your log management needs. With Datadog’s pipelines, you can collect the same logs that you already send to other endpoints, such as Splunk HTTP Event Collector (HEC). Our pipelines will ingest the logs into Datadog, process them, and then send them to both Flex Logs and Splunk Index. Dual shipping the logs to Splunk and Datadog allows you to evaluate our Flex Logs solution and the Datadog platform without disrupting your engineers’ existing data flow with Splunk.
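Conceptually, dual shipping means fanning each log event out to every configured destination so that each one receives an identical copy. As a minimal illustration of the idea—not Datadog's implementation, which the managed pipeline handles for you—a hypothetical fan-out in Python might look like this (the destination names and sender callables are stand-ins for real endpoints such as the Datadog logs intake and a Splunk HEC endpoint):

```python
import json

def dual_ship(event, destinations):
    """Send the same log event to every configured destination.

    `destinations` maps a destination name (e.g. "datadog", "splunk_hec")
    to a callable that delivers the serialized payload.
    """
    payload = json.dumps(event)
    results = {}
    for name, send in destinations.items():
        # Each destination receives an identical copy of the event, so
        # evaluating a new vendor never interrupts the existing data flow.
        results[name] = send(payload)
    return results

# Hypothetical in-memory senders; in practice these would be HTTP calls
# to the Datadog logs intake and a Splunk HTTP Event Collector endpoint.
shipped = []
destinations = {
    "datadog": lambda p: shipped.append(("datadog", p)) or "ok",
    "splunk_hec": lambda p: shipped.append(("splunk_hec", p)) or "ok",
}
dual_ship({"level": "INFO", "message": "user logged in"}, destinations)
```

Because both senders receive the same serialized payload, the evaluation destination can be added or removed without touching the one your engineers already rely on.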
Managed Log Pipelines are best suited to help you with your initial assessment as you test new workflows. Once you’ve assessed and decided the best ways to collect, transform, and route your data, leverage Observability Pipelines to support your long-term, platform-level strategy. Observability Pipelines offers more granular customization and unparalleled visibility into your environment, so you are in complete control of your data from source to destination.
As in the example above, you can use Observability Pipelines to collect logs from a Splunk HEC source and forward them to both Datadog and a Splunk index. You can also collect from another source and send to any combination of destinations you prefer, without immediately adding or replacing any agents you already have installed in your environment. The Observability Pipelines Worker—software that runs on your infrastructure—will continue receiving data from your sources and sending it to your destinations, so you can avoid disrupting your production environment and migrate at your own speed.
Observability Pipelines can also help you determine whether your logs contain redundant information through the custom filters and metrics that you create. These custom filters and metrics help you control and re-route logs as you see fit, so you can reduce redundancy and costs. For example, you can configure your pipelines to re-route INFO-level logs to low-cost storage solutions or convert critical logs to metrics up front. With this visibility, you can also drop redundant logs entirely and avoid unnecessary ingestion costs. Observability Pipelines also integrates with Sensitive Data Scanner, so you can prevent protected health information (PHI), personally identifiable information (PII), or payment card industry (PCI) data from leaking outside of your designated environments.
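To make the routing idea concrete—purely as an illustrative sketch, not the Observability Pipelines configuration syntax—a level-based routing rule combined with a simple redaction step could be expressed like this. The SSN regex and destination names are hypothetical; in practice, Sensitive Data Scanner handles detection and redaction for you:

```python
import re

# Hypothetical pattern for US Social Security numbers, standing in for
# the rules Sensitive Data Scanner would apply.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def route_log(event):
    """Redact sensitive data, then pick a destination by log level."""
    event["message"] = SSN_PATTERN.sub("[REDACTED]", event["message"])
    if event.get("level") == "INFO":
        # High-volume, low-value logs go to cheap archival storage.
        return "low_cost_storage", event
    # Everything else stays in the primary log management platform.
    return "primary_platform", event

dest, cleaned = route_log({"level": "INFO", "message": "SSN 123-45-6789 seen"})
```

The key design point is that redaction runs before any routing decision, so sensitive values never reach any downstream destination regardless of where the event ends up.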
Managed Log Pipelines and Observability Pipelines give you the freedom to dual ship and experiment without disrupting your production environment. You can move at your own pace and analyze which dual shipping workflows and log management solutions can best scale and modernize with your organization. Leverage this flexibility to collect, transform, route, and enrich your data in the ways that fit your unique needs.
Managed Log Pipelines and Observability Pipelines are available to all Datadog customers. If you’re already a customer, you can get started using our managed Log Pipelines or Observability Pipelines documentation. Or, if you’re not yet a Datadog customer, you can sign up for a 14-day free trial.