What Is an MCP Server (Model Context Protocol Server)? | Datadog
Learn about the advantages of MCP servers for making use of AI models, services, and external tools for back-end services, observability, and security.

What is MCP (Model Context Protocol)?

Model Context Protocol (MCP) is an open standard for connecting AI agents to external tools, data, and services. It defines a common language for how clients (AI applications) and servers (tool providers) communicate — how tools are discovered, how requests are made, and how results are returned.
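
Concretely, MCP messages are JSON-RPC 2.0 objects. A minimal sketch of one request/response exchange in Python (the tool name and arguments here are illustrative, not part of the spec):

```python
import json

# MCP messages are JSON-RPC 2.0 objects. A client asking a server to run a
# tool sends a "tools/call" request; the server replies with a result whose
# "content" the agent reads directly. (Tool name and arguments are made up.)
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",            # hypothetical tool
        "arguments": {"city": "Paris"},
    },
}

response = {
    "jsonrpc": "2.0",
    "id": 1,                              # matches the request id
    "result": {
        "content": [{"type": "text", "text": "18 C, light rain"}],
    },
}

wire = json.dumps(request)                # what actually crosses the wire
print(json.loads(wire)["method"])         # -> tools/call
```

Because every client and server speaks this one message shape, a tool provider never needs to know which AI platform is on the other end of the connection.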

MCP was introduced by Anthropic in November 2024. OpenAI and Google DeepMind adopted the standard by mid-2025, and in December 2025, Anthropic transferred governance to the Agentic AI Foundation under the Linux Foundation. It’s now an industry-wide standard with support across Claude, ChatGPT, Cursor, Copilot, Windsurf, and others.

What is an MCP server?

An MCP server is a lightweight adapter that gives AI agents standardized, real-time access to external tools, data sources, and services — without requiring custom integration logic for each connection. It listens for requests from a client, executes them against an underlying system — a database, an API, a monitoring platform — and returns structured results.

Before MCP, connecting a tool to an AI platform meant writing a custom integration. A tool provider that wanted to support Claude, ChatGPT, and Copilot had to build and maintain three separate connectors. Add a fourth platform and you build a fourth connector. MCP replaces that: write one server, and any compliant client can use it.

Autonomous agents built to use MCP tools can retrieve data, analyze and summarize it, and act on defined criteria (such as an alert or message) using a single standard, without requiring a custom API or bespoke integration logic (see Table 1).

Table 1: Traditional APIs vs. Model Context Protocol (MCP)

| Aspect              | Traditional APIs            | MCP                                                                                 |
|---------------------|-----------------------------|-------------------------------------------------------------------------------------|
| Setup               | Manual, one-by-one          | One standard for all tools                                                          |
| Flexibility         | Fixed, tool-specific        | Dynamic and adaptable                                                               |
| Reuse               | Hard to reuse across agents | Easy to reuse everywhere                                                            |
| Agent compatibility | Needs custom logic per tool | Works out of the box, with a schema for connectivity, tool discovery, and messaging |
| Tool discovery      | Manual configuration        | Automatic, real-time                                                                |

One key advantage MCP servers have over traditional APIs is that an MCP server can be optimized for AI agents: context-efficient and less verbose. Traditional APIs, in contrast, return highly structured but verbose outputs that can flood an agent's context window and degrade its responses.

An agent developer who builds their own tools against a traditional API must also handle that conversion and optimization work. A developer using a well-structured MCP server, by contrast, can rely on integrations that are already optimized for agents, yielding better performance from the start.

MCP servers serve as a bridge between AI agents and tool providers, enabling AI agents to request information, trigger actions, or integrate with internal and external systems in a controlled, standardized, scalable, and extensible manner.

Why are MCP servers important?

MCP servers provide an alternative to the specialized development and complex integrations otherwise needed to connect AI models to external systems. Other essential characteristics of MCP servers include:

  1. Integration enablement: MCP servers provide a standardized, unified protocol that lets AI agents connect to many systems without a custom API integration pattern for each use case.

  2. Security and control: MCP servers can establish clear rules for what agents can and cannot do, preventing unauthorized or unintended actions.

  3. Extensibility: One MCP server can serve multiple AI clients, helping to reduce the duplication of integration logic and making tools available to any MCP-compatible AI code agent, such as Cursor, Claude Code, and Codex. New functionality can be added to a system by connecting a new server without changing core functionality or other agents.

  4. Future-proofing: Because MCP standardizes integrations across AI platforms that support the protocol, agents built on it are insulated from platform churn. A tool developer can stand up a single MCP server for their service, and any LLM vendor can embed that service using one protocol, greatly simplifying integration.
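
The "security and control" point above can be made concrete: the server, not the model, decides what an agent may do. A minimal sketch of a server-side guard that only executes allowlisted, read-only tools (tool names and the policy itself are illustrative):

```python
# Illustrative server-side guard: the server enforces which tools a client
# may invoke, regardless of what the agent asks for. Names are made up.
ALLOWED_TOOLS = {"get_incident", "get_metric"}      # read-only allowlist

def handle_tool_call(tool_name: str, arguments: dict) -> dict:
    if tool_name not in ALLOWED_TOOLS:
        # Refuse anything outside the policy instead of trusting the agent.
        return {"isError": True,
                "content": [{"type": "text",
                             "text": f"tool '{tool_name}' is not permitted"}]}
    # ... dispatch to the real implementation here ...
    return {"content": [{"type": "text", "text": f"{tool_name} ok"}]}

print(handle_tool_call("delete_dashboard", {})["isError"])  # -> True
```

Keeping this policy in the server means it applies identically to every MCP client that connects, rather than being re-implemented per platform.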

How do MCP servers work?

Architecturally, MCP servers exist between an MCP host (an AI-powered application or platform, integrated development environment [IDE], or tool acting as an agent) and data sources (local resources such as file systems and databases, or remote services such as APIs or cloud services).

MCP servers are built on the following framework:

Protocol foundation

MCP is built on a structured, secure messaging standard (JSON-RPC 2.0) for agent-tool communication. The protocol defines how the client and server communicate, what messages look like, what actions can be taken, and how results are returned.

Core functions

The server responds to requests from the AI agent (such as "check recent code changes on GitHub" or "pull a Salesforce report"), helps the agent select and invoke the appropriate MCP tools, and returns the requested results. Other core functions include advertising available tools (including new functions as they are added), interpreting and executing commands, formatting results, handling errors, and providing meaningful feedback.
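
These core functions can be sketched as a toy dispatcher that supports tool discovery, execution, result formatting, and error handling. The handlers below are placeholders, not a real SDK:

```python
# Toy MCP-style server core: a registry supporting tool discovery and
# execution. Handlers are placeholders, not real integrations.
TOOLS = {
    "get_recent_commits": lambda args: f"3 commits on {args['repo']}",
    "pull_sales_report":  lambda args: "report: $42k",
}

def handle(request: dict) -> dict:
    method = request["method"]
    if method == "tools/list":                       # tool discovery
        return {"tools": [{"name": n} for n in TOOLS]}
    if method == "tools/call":                       # command execution
        name = request["params"]["name"]
        if name not in TOOLS:                        # error handling
            return {"isError": True}
        text = TOOLS[name](request["params"].get("arguments", {}))
        return {"content": [{"type": "text", "text": text}]}  # formatting
    return {"isError": True}

out = handle({"method": "tools/call",
              "params": {"name": "get_recent_commits",
                         "arguments": {"repo": "acme/api"}}})
print(out["content"][0]["text"])
```

A real server would back each handler with a database query, an API client, or a monitoring platform, but the request routing follows this shape.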

Extensibility

Developers can add custom commands, query handlers, and response formats to the server. MCP servers act like smart adapters for existing products and features, wrapping a capability from one tool (for example, "get today's sales report from the CRM") and exposing it to agents on demand.

Lifecycle

The lifecycle phases of an MCP server request include the following steps:

  1. Registration: The client registers with the MCP server. This step includes establishing protocol capabilities with the MCP server and an initialization notification.

  2. Capability discovery: The client learns which version and what aspects of MCP are implemented by the MCP server.

  3. Tool discovery: The client learns what tools and context the server offers (such as logs, metrics, traces, and spans in the Datadog context) through the server’s response, as in the example JavaScript Object Notation shown here:

{
  "tools": [
    {
      "name": "get_datadog_incident",
      "description": "Retrieve detailed information about a specific Datadog incident by ID. This tool provides comprehensive incident details including status, severity, timeline, and associated users.",
      "inputSchema": {
        "type": "object",
        "properties": {
          "incident_id": {
            "description": "The ID of the incident. Either a number (ex: 1239) or a UUID",
            "type": "string"
          },
          "max_tokens": {
            "description": "Optional. Maximum number of tokens to include in the response (default: 10000)",
            "type": "number"
          }
        },
        "required": [
          "incident_id"
        ]
      },
      "annotations": {
        "title": "Get Incident",
        "readOnlyHint": true,
        "destructiveHint": false,
        "idempotentHint": true,
        "openWorldHint": false
      }
    },
    {
      "name": "get_datadog_metric",
      "description": ...,
      "inputSchema": { ... },
      "annotations": { ... }
    },
    {
      "name": "get_datadog_trace",
      ...
    },
    ...
  ]
}
  4. Execution: The client sends requests to the MCP server, which executes them and returns the relevant actions, data, or responses.
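
These lifecycle steps map to a fixed message sequence. A sketch of the exchange in order (the method names and the "2025-03-26" protocol revision follow the MCP spec; the payloads are abbreviated):

```python
# The MCP request lifecycle as an ordered message sequence. Method names
# ("initialize", "notifications/initialized", "tools/list", "tools/call")
# follow the MCP spec; payloads are abbreviated.
lifecycle = [
    # 1. Registration: capability negotiation plus an initialized notification
    {"method": "initialize", "params": {"protocolVersion": "2025-03-26"}},
    {"method": "notifications/initialized"},
    # 2./3. Capability and tool discovery (the server's tools/list response
    #       is the JSON tool listing shown above)
    {"method": "tools/list"},
    # 4. Execution
    {"method": "tools/call",
     "params": {"name": "get_datadog_incident",
                "arguments": {"incident_id": "1239"}}},
]

for step in lifecycle:
    print(step["method"])
```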

What are the benefits of MCP servers?

For DevOps, managed services, and other teams, the benefits of MCP servers for organizations integrating AI in their workflows include:

  1. Real-time data access: MCP servers enable AI agents to query databases, APIs, and files directly in real time, ensuring responses are current and accurate rather than based on stale or pre-indexed copies of the data.

  2. Reduced risk: By managing data access through a central point, MCP minimizes the risk of data leaks and ensures that data is handled securely and compliantly.

  3. Reduced complexity and cost: MCP helps simplify and streamline complex and standalone integration efforts. Developers no longer need to write custom code for each new data source or AI model, significantly lowering development time and computational overhead.

  4. Improved scalability and flexibility: MCP servers provide a universal standard, allowing any AI model to connect to various systems without structural changes. This makes MCP ideal for organizations using multiple platforms and databases.

Numerous teams can benefit from deploying MCP servers. Some examples include:

  1. Developers using AI agents like Cursor, Claude Code, and Codex
  2. Developers building their own AI agents that interface with third-party data and tools
  3. Data teams needing secure and user-friendly AI-assisted querying
  4. Tool/plugin developers building for AI platforms

What are the use cases for MCP servers?

Specific use cases well-suited to deploying MCP servers include:

  1. Data retrieval: Examples include querying databases, APIs, or internal knowledge sources via natural language prompts. MCP servers can also be used for real-time data analysis and reporting. For example, an SRE can ask “what were the top error-rate spikes in the past hour?” and get an answer sourced directly from live telemetry — no dashboard, no context switching, no manual query construction.

  2. Workflow automation: AI agents can handle complex, multi-step tasks with minimal human input, including software development, customer communication, and business operations. Examples include triggering business processes, creating and updating tickets, and generating reports.

  3. Secure actions: A primary use case is the secure orchestration of automated workflows, such as a continuous integration/continuous deployment (CI/CD) pipeline where an AI agent, via the MCP server, creates a release branch, runs tests, deploys to staging, and sends notifications through a tool like Cisco Webex, all while ensuring actions are auditable and compliant. Other examples of secure actions include sending emails, updating systems, and making changes with safeguards.

  4. Domain-specific assistance: Examples include industry-specific MCP servers (such as those in finance, legal, and healthcare) that enable AI to access regulated datasets safely. In security operations, for example, MCP servers can give AI agents direct access to threat detection data, enabling a SOC analyst to ask “show me all failed login attempts from external IPs in the last hour” and receive a prioritized report without leaving their current workflow.

What are common implementation challenges for MCP servers?

As with any technology or platform integrated into an organization’s back end, there are distinct implementation challenges associated with deploying MCP. Some of these challenges include:

  1. Security overhead and issues: DevOps, security, and related teams can struggle to balance access for MCP servers with strict permission requirements and compliance demands. Additionally, MCP servers might be deployed with weak authentication standards, lack integrity controls that prevent message tampering, be granted excessive permissions, and be vulnerable to indirect prompts, injection attacks, and unintended actions from unsupervised AI agents.

  2. Protocol evolution: MCP’s specification continues to evolve, and version updates can require teams to re-evaluate their implementation. The March 2025 spec revision, for example, deprecated SSE as a transport mechanism in favor of Streamable HTTP — a change that affected any server built on the older standard. Teams should treat MCP version updates as part of their regular dependency management practice, not a one-time setup decision.

  3. Context efficiency: For MCP servers, the potential for a high volume of messaging between AI models and tools can result in correspondingly greater data volumes, including wasted tokens and increased traffic within an organization’s infrastructure. Teams need to be on the lookout for “chatty” or redundant connections that can lead to unexpected and greater costs from AI model providers and cloud services, such as data storage.

  4. Maintenance: Development, DevOps, and other teams that run MCP servers must keep code repositories and integrations up to date as APIs and data sources evolve. Teams should focus in particular on configuration reviews, clean environment installations, network settings, reviews of API requests and outputs, user-permission upkeep, and security and vulnerability testing of servers.
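
One practical mitigation for the context-efficiency challenge is a response token budget, in the spirit of the max_tokens parameter in the tool schema shown earlier. A rough sketch (the 4-characters-per-token heuristic is an approximation; a real server would use a proper tokenizer):

```python
# Rough token-budget truncation for tool responses, mirroring a
# "max_tokens"-style parameter. Uses the common ~4 characters-per-token
# approximation; real servers would count tokens with a tokenizer.
def truncate_to_budget(text: str, max_tokens: int = 10_000) -> str:
    max_chars = max_tokens * 4
    if len(text) <= max_chars:
        return text
    return text[:max_chars] + "\n[truncated]"

verbose = "x" * 100_000                   # a chatty tool response
trimmed = truncate_to_budget(verbose, max_tokens=1_000)
print(len(trimmed) < len(verbose))        # -> True
```

Capping responses at the server keeps a single chatty tool from consuming the agent's entire context window or inflating per-token costs.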

MCP server for Datadog

The Datadog MCP Server puts these capabilities to work: AI agents can query metrics, retrieve incident details, pull distributed traces, and access logs through a standardized interface. That means any MCP-compatible client (Claude, Cursor, GitHub Copilot, and others) can bring Datadog's observability data into the engineer's existing workflow without custom integration work.
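
Putting the tool listing shown earlier to use: before sending a tools/call request for get_datadog_incident, a client can check the arguments against that tool's inputSchema. A minimal sketch (real clients would run full JSON Schema validation):

```python
# Minimal client-side check of arguments against the get_datadog_incident
# inputSchema shown earlier: "incident_id" is required, "max_tokens" is
# optional. Real clients would validate the full JSON Schema.
SCHEMA_REQUIRED = ["incident_id"]

def valid_arguments(args: dict) -> bool:
    return all(key in args for key in SCHEMA_REQUIRED)

print(valid_arguments({"incident_id": "1239"}))   # -> True  (ID from the schema's own example)
print(valid_arguments({"max_tokens": 500}))       # -> False (missing required field)
```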

Related Content

Learn about Datadog at your own pace with these on-demand resources.


BLOG

Datadog MCP Server: Connect your AI agents to Datadog tools and context

BLOG

How to use AI tools more effectively: Tips from Datadog Engineers

BLOG

Introducing Bits AI, your new DevOps copilot

BLOG

Automatically identify and efficiently investigate frontend issues with RUM Watchdog Insights