
Integration roundup: Monitoring your AI stack

By Shri Subramanian, Brittany Coppola, Addie Beach, and Anjali Thatte

Last updated: February 28, 2024

Integrating AI, including large language models (LLMs), into your applications enables you to build powerful tools for data analysis, intelligent search, and text and image generation. A wide range of tools can help you adopt AI and scale it to your business needs, and many models depend on specialized technologies such as vector databases, development platforms, and discrete GPUs. As a result, optimizing your system for AI often means upgrading your entire stack. Doing so, however, also means re-evaluating your monitoring needs; otherwise, the rapid introduction of new AI technologies risks adding complexity and silos to your observability strategy.

With our collection of AI integrations, Datadog is at the forefront of delivering end-to-end monitoring across every layer of your AI tech stack. Each integration provides an out-of-the-box (OOTB) dashboard with metrics tailored to critical components. In this post, we’ll explore how these integrations help you monitor each AI layer:

Infrastructure and compute: NVIDIA DCGM Exporter, CoreWeave, Ray

To build, cluster, and monitor your AI applications, your infrastructure must be able to support compute-intensive workloads. Starting with Agent v7.47, Datadog integrates directly with NVIDIA’s DCGM Exporter to help you gather metrics from NVIDIA’s discrete GPUs, which are essential to powering the parallel computing required by many AI-enabled applications. We’re also pleased to announce our integration with CoreWeave, a cloud provider that supplies infrastructure for efficiently scaling large, GPU-heavy workloads. Because CoreWeave is built on Kubernetes, monitoring your Kubernetes pods and nodes in addition to your GPUs is a must.

The Datadog CoreWeave integration enables you to track performance and cost for your CoreWeave-managed GPUs and Kubernetes resources via an OOTB dashboard and pre-configured monitors. You can easily analyze usage alongside billing details, helping you ensure that your AI projects stay within budget. The integration also provides CPU and memory metrics for your pods, so you can quickly pinpoint over- or underprovisioned resources before they inflate your bill or bring your system to a halt. Let’s say you notice a steady increase in CoreWeave usage across your pods. You can pivot to the Datadog host map to determine whether this is an isolated issue and how much of your infrastructure you might need to upgrade.

The CoreWeave dashboard enables you to visualize memory and CPU metrics for every pod and container in your CoreWeave cluster.
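You can also pull these GPU metrics programmatically to feed your own reports or automation. The snippet below is a minimal sketch that queries average GPU utilization collected by the DCGM Exporter integration over the past hour, assuming the datadog-api-client Python package with DD_API_KEY and DD_APP_KEY set in your environment; the metric name shown is illustrative and may differ from the exact names your integration reports.

import time

from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v1.api.metrics_api import MetricsApi

configuration = Configuration()  # reads API/app keys and site from the environment
with ApiClient(configuration) as api_client:
    api = MetricsApi(api_client)
    now = int(time.time())
    result = api.query_metrics(
        _from=now - 3600,  # the past hour
        to=now,
        query="avg:dcgm.gpu_utilization{*} by {host}",  # illustrative metric name
    )
    print(result)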

Datadog’s Ray integration collects metrics and logs that help you monitor the health and performance of your Ray clusters as they scale your AI and high-compute workloads. Using the OOTB Ray dashboard and monitor templates, you can quickly detect Ray components that consume high CPU and memory resources, as well as Ray nodes that underutilize their available GPU. This enables you to optimize your resource efficiency even as your workloads scale. The dashboard also highlights failed and pending tasks across your cluster and identifies the reason for unavailable workers, enabling you to quickly resolve bottlenecks and restore operations.

Identify Ray components with high resource consumption using the OOTB Ray dashboard.
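For context on the kind of workload these metrics describe, here’s a minimal sketch of a Ray task that requests dedicated resources from the cluster, assuming Ray is installed locally; the task itself is hypothetical, and you can swap num_cpus for num_gpus to reserve GPUs instead.

import ray

ray.init()  # connect to (or start) a local Ray cluster

# Reserve one CPU per task; use num_gpus=1 instead to reserve a GPU
@ray.remote(num_cpus=1)
def embed_batch(texts):
    # Placeholder for compute-heavy work such as generating embeddings
    return [len(t) for t in texts]

futures = [embed_batch.remote(batch) for batch in (["first doc"], ["second", "third"])]
print(ray.get(futures))  # resolves once the cluster has scheduled and run the tasks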

Data storage and management: Weaviate, Pinecone, Airbyte

Many AI models, particularly LLMs, are trained on publicly available data that is both unstructured and uncategorized. Using this data as-is can make it difficult for models to generate useful inferences and meaningful output. As such, most organizations turn to powerful yet complex vector databases to contextualize this information by combining it with their own enterprise data. Weaviate is one such open source database, giving you the ability to store, index, and scale both data objects and vector embeddings. For faster setup and comprehensive support, you can also choose a fully managed option like Pinecone.

Datadog provides OOTB dashboards for both Weaviate (starting with Agent v7.47) and Pinecone that give you comprehensive insights into vector database health. These integrations include standard database metrics, such as request latencies, import speed, and memory usage. However, they also include metrics specifically tailored to vector database monitoring, including index operations and sizes as well as durations for object and vector batch operations.

With the Pinecone OOTB dashboard, you can view detailed index and vector metrics.
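To make those batch-operation metrics concrete, here’s a minimal sketch of a batched object import and a vector query against Weaviate, assuming a local instance at localhost:8080, the v3-style weaviate-client Python package, and a hypothetical Article class with three-dimensional stand-in embeddings; Pinecone’s client exposes analogous upsert and query calls.

import weaviate

client = weaviate.Client("http://localhost:8080")

# Import objects in batches of 100; the integration reports how long these batches take
client.batch.configure(batch_size=100)
with client.batch as batch:
    for i, text in enumerate(["first document", "second document"]):
        batch.add_data_object(
            data_object={"title": f"doc-{i}", "body": text},
            class_name="Article",
            vector=[0.1 * (i + 1), 0.2, 0.3],  # stand-in embedding; normally from your model
        )

# Query by vector similarity (the vector length must match your embedding dimensionality)
result = (
    client.query.get("Article", ["title"])
    .with_near_vector({"vector": [0.1, 0.2, 0.3]})
    .with_limit(5)
    .do()
)
print(result)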

To help you populate and manage these databases, Datadog also offers monitoring for data integration engines like Airbyte, which enable you to consolidate your data for smooth processing. Airbyte extracts information from over 300 sources and loads it into data warehouses, data lakes, and databases using pre-built connectors. With the Airbyte integration, you can analyze your data transfer jobs and connections to determine the health of your syncs, helping you quickly spot issues that could impact data quality.

Model serving and deployment: Vertex AI, Amazon SageMaker, TorchServe, NVIDIA Triton Inference Server

To manage the massive amounts of information processing and training required to develop AI applications, you need a centralized platform for designing, testing, and deploying your models. Two of the most popular AI platforms are Vertex AI from Google and Amazon SageMaker. Each comes with its own benefits: Vertex AI enables you to leverage Google’s robust set of built-in data tools and warehouses, while SageMaker gives you comprehensive features to make deployments easier and more reliable, such as canary traffic shifting and serverless deployments.

In spite of their differences, both platforms have similar monitoring needs. To ensure that your infrastructure can support AI projects, you need to be able to track resource usage for your training jobs and inference endpoint invocations. Additionally, performance metrics are necessary for ensuring that your users experience low latency regardless of query type or size. With the Datadog Vertex AI and SageMaker integrations, you can access resource metrics—including CPU, GPU, memory, and network usage data—for all your training and inference nodes. Plus, the Vertex AI and SageMaker OOTB dashboards provide error, latency, and throughput metrics for your inference requests, so you can spot potential bottlenecks.

The SageMaker dashboards help you quickly determine the status of your endpoints and jobs, alongside a wealth of OOTB monitors.
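As a reference point for the inference metrics above, here’s a minimal sketch of the kind of SageMaker endpoint invocation whose latency, error, and throughput data the dashboard surfaces, assuming boto3 with AWS credentials configured; the endpoint name and payload are hypothetical.

import json

import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="my-llm-endpoint",  # hypothetical deployed endpoint
    ContentType="application/json",
    Body=json.dumps({"inputs": "Summarize our Q3 results in one sentence."}),
)
print(response["Body"].read().decode())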

In addition to deployment platforms, you can use frameworks like PyTorch to build deep-learning applications with existing libraries and serve your models in production. PyTorch provides tools such as TorchServe to streamline the process of deploying PyTorch models. Included in Agent v7.47, our TorchServe integration continuously checks the health of your PyTorch models, helping you prevent faulty deployments. With the OOTB dashboard, you can access a wealth of model metrics for troubleshooting issues, including model versions and memory usage, in addition to health metrics for the TorchServe servers themselves.

The TorchServe dashboard gives you access to health metrics for your models and TorchServe servers.
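The integration builds on TorchServe’s own HTTP APIs; the snippet below is a minimal sketch of the same kind of health and model checks done manually, assuming TorchServe’s default ports of 8080 for the inference API and 8081 for the management API.

import requests

# Liveness check against the inference API (default port 8080)
print(requests.get("http://localhost:8080/ping").json())    # e.g., {"status": "Healthy"}

# List the models registered with the management API (default port 8081)
print(requests.get("http://localhost:8081/models").json())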

Datadog also integrates with NVIDIA’s open source Triton Inference Server, which streamlines development by enabling teams to deploy AI models from major frameworks, including PyTorch, TensorFlow, and ONNX. With Datadog, you can ensure that model predictions are swift and responsive by visualizing key performance metrics such as inference latency and the number of failed or pending requests. You can also track caching activity, which is crucial to making sure that inferences are delivered efficiently.

The Triton Inference Server is designed to use both GPU and CPU resources efficiently to accelerate inference generation. Using Datadog’s out-of-the-box dashboard, you can correlate GPU and CPU utilization with the overall inference load of your Triton server to optimize resource usage and maintain high performance.

Correlate GPU and CPU utilization with inference load using the Triton Inference Server OOTB dashboard.
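If you want to verify server and model availability yourself, the Triton client library exposes simple checks. Below is a minimal sketch, assuming the tritonclient[http] Python package, Triton’s default HTTP port of 8000, and a hypothetical model name; Triton also serves Prometheus-format metrics on port 8002 by default, which the Datadog integration can collect.

import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

print(client.is_server_live())            # True if the server process is up
print(client.is_server_ready())           # True once the server can accept requests
print(client.is_model_ready("resnet50"))  # True once this (hypothetical) model is loaded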

Models: OpenAI, Amazon Bedrock

The next layer of the AI tech stack is the AI models themselves. OpenAI offers popular generative AI models that provide text and image creation, as well as APIs that you can use to integrate this functionality into your application. Datadog already provides an OpenAI integration that helps you monitor usage across your organization, enabling your teams to optimize resource usage and stay within budget.

The latest version of the OpenAI integration comes with request and token consumption data for your OpenAI account. For service-level visibility and per-request tracking, the OpenAI integration now also supports the Node.js library, in addition to Python. Plus, you can track request latencies, error rates, and usage for more OpenAI API endpoints, including ones that support images, audio, and files.

With the latest version of the OpenAI integration, you can view enhanced cost and usage metrics for all of your requests.
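To get per-request traces flowing, you instrument the OpenAI client library with ddtrace. Here’s a minimal sketch, assuming the ddtrace and openai (v1-style client) Python packages, an OPENAI_API_KEY in your environment, and a running Datadog Agent; you can also launch your app with ddtrace-run instead of calling patch() in code.

from ddtrace import patch

patch(openai=True)  # instrument the openai library before it is used

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Give me one tip for monitoring LLM apps."}],
)
print(response.choices[0].message.content)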

To complement offerings from providers like OpenAI, AWS has also launched generative AI services as part of its efforts to make AI more accessible to everyone. Amazon Bedrock is one such service, enabling developers to build and scale generative AI applications by providing API access to foundation models from AI21 Labs, Anthropic, Stability AI, and Amazon. With Datadog’s upcoming integration with Bedrock, you can gain visibility into Bedrock API performance and usage.
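For a sense of the API traffic that integration will cover, here’s a minimal sketch of a Bedrock model invocation, assuming boto3 with Bedrock access enabled in your AWS account; the model ID and request body follow Anthropic’s Claude text-completion format and will differ for other foundation models.

import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "prompt": "\n\nHuman: Name one benefit of a vector database.\n\nAssistant:",
        "max_tokens_to_sample": 200,
    }),
)
print(json.loads(response["body"].read()))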

Service chains and applications: LangChain, Amazon CodeWhisperer

Finally, once you’ve identified and configured the AI models you want to use, service chains can help you link these models together to create robust yet cohesive applications. LangChain is a popular service chain framework that enables you to build machine learning (ML)-powered applications by combining easy-to-use, modular components. The Datadog LangChain dashboard comes with visualizations for error rates, token counts, average prediction times, and request totals across all of your models, giving you deep insight into each component of your application. Additionally, it comes with a service map to help you evaluate usage across your models. The integration supports automatic detection for a number of different models, including those from OpenAI, Cohere, and Hugging Face, as well as vector databases like Pinecone.

The LangChain OOTB dashboard enables you to visualize cost and usage trends for every model in your application.
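As with OpenAI, instrumentation happens through ddtrace. The following is a minimal sketch of a simple chain with tracing enabled, assuming the ddtrace, langchain, and openai packages, an OPENAI_API_KEY in your environment, and a running Datadog Agent; the prompt and chain are hypothetical, and newer LangChain releases may prefer different import paths.

from ddtrace import patch

patch(langchain=True)  # instrument LangChain before building the chain

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Write a one-line summary of {topic}.")
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)

print(chain.run(topic="vector databases"))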

To further innovate with AI models, you can also use tools that help you leverage them more effectively and integrate them into your existing workflows. Amazon CodeWhisperer is an AI coding companion that generates code suggestions to increase productivity and help you easily build with unfamiliar APIs. With Datadog’s CodeWhisperer OOTB dashboard, you can track the number of users accessing your CodeWhisperer instances and their overall usage over time, making it easier to manage costs.

Monitor your entire AI-optimized stack with Datadog

Keeping up with the latest in machine-learning technology requires you to quickly adapt your tech stack. In turn, you also need to be able to pivot your monitoring strategy to prevent silos and blind spots from concealing meaningful issues.

With more than 700 integrations, Datadog provides insight into every layer of your AI stack, from your infrastructure to your models and service chains. You can use our documentation to get started with these integrations. Or, if you’re not yet a Datadog customer, you can sign up for a 14-day free trial.