LLM Experiments: LLM Observability's Experiments

About This Program

LLM Observability allows you to monitor, troubleshoot, and evaluate your LLM application.

  • Agentic Workflow Monitoring: LLM Observability’s new graph-based visualization for monitoring agentic workflows lets you troubleshoot agentic executions, including decisions, data flow, and hand-offs. To qualify for this Preview feature, your application should be written in Python and use the OpenAI Agents SDK, LangGraph, or CrewAI.
  • LLM Experiments: LLM Observability’s Datasets and Experiments lets you build ground truth datasets and run and compare experiment runs, so that you can confidently roll out LLM, prompt, or code changes to production. A minimal sketch of this workflow follows this list. To qualify for this Preview feature, your application should be written in Python.
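To make the Datasets and Experiments workflow concrete, the minimal Python sketch below runs a task function over a small ground-truth dataset, scores each record, and reports a mean score so two experiment runs (for example, before and after a prompt change) can be compared. The names and record shapes here (run_experiment, exact_match, the input/expected_output fields, and the stand-in tasks) are illustrative placeholders, not the Preview SDK’s actual API, which is available only to accepted Preview participants.

    from typing import Callable

    # A ground truth dataset: each record pairs an input with its expected output.
    DATASET = [
        {"input": "What is the capital of France?", "expected_output": "Paris"},
        {"input": "What is 2 + 2?", "expected_output": "4"},
    ]

    def exact_match(output: str, expected: str) -> float:
        """Score 1.0 when the task output matches the expected output exactly."""
        return 1.0 if output.strip().lower() == expected.strip().lower() else 0.0

    def run_experiment(name: str, task: Callable[[str], str]) -> float:
        """Run `task` over every dataset record and return the mean score."""
        scores = [exact_match(task(r["input"]), r["expected_output"]) for r in DATASET]
        mean = sum(scores) / len(scores)
        print(f"{name}: mean score {mean:.2f} over {len(DATASET)} records")
        return mean

    # Compare two experiment runs, e.g. a baseline prompt versus a revised one.
    baseline = run_experiment("baseline-prompt", lambda q: "Paris")
    candidate = run_experiment("revised-prompt", lambda q: "4" if "2 + 2" in q else "Paris")

In practice, the same pattern applies with real LLM calls as the task function and richer evaluators as the scorers; keeping the dataset fixed is what makes runs comparable across LLM, prompt, or code changes.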

Sign Up

Are you currently a Datadog customer? *
Which Preview feature(s) are you interested in? *
What is the language of your LLM applications or agents? (select all that apply) *
If you are interested in Agentic Workflow Monitoring, what frameworks are you using? (select all that apply)

Thank you for your submission!

Your response has been recorded. The LLM Observability team is reviewing your request and will aim to follow up with you within the next 2 weeks. In the meantime, feel free to reach out to your CSM with any questions.

Related Resources

Interested in more of our latest features?

Help make the next releases of Datadog products our best yet.