What is Model Context Protocol?
As organizations strive to enhance outcomes, reliability, and security, DevOps and similar teams are incorporating AI automation into their toolkits. With AI automation, AI agents can predict and respond to alerts, analyze data, summarize messages, monitor services and user access, write service plans, trigger pipelines, and roll back deployments. Model Context Protocol (MCP) servers enable standardized, structured, and secure communication between AI agents and external tools. MCP servers offer significant benefits for organizations seeking to integrate AI into their workflows, including reduced complexity and cost, as well as improved scalability and flexibility. This article examines MCP servers: why they matter, how they work, their relevance in today’s DevOps landscape, potential use cases, and the implementation challenges that can arise from their use.
What is an MCP server?
MCP (Model Context Protocol) is an open standard created by Anthropic, the company that developed Claude (a family of conversational AI agents and large language models [LLMs]). MCP provides AI agents with a consistent way to connect to tools, services, and data. Most AI agents (such as ChatGPT, Microsoft Copilot, and Google Gemini) offer a conversational interface that answers questions based on prompts. For more complex, robust AI-based tasks, developers rely on APIs and custom logic that can be fragile, non-standard, and dependent on manual configuration. Autonomous agents designed to use MCP tools can retrieve data, analyze and summarize it, and act on defined criteria (such as an alert or message) using a single standard, without requiring a purpose-built API or custom logic (see Table 1).
Table 1: Traditional APIs vs. Model Context Protocol (MCP)
| Aspect | Traditional APIs | MCP |
|---|---|---|
| Setup | Manual, one-by-one | One standard for all tools |
| Flexibility | Fixed, tool-specific | Dynamic and adaptable |
| Reuse | Hard to reuse across agents | Easy to reuse everywhere |
| Agent compatibility | Needs custom logic per tool | Works out of the box with schema for connectivity, tool discovery, and messaging |
| Tool discovery | Manual configuration | Automatic, real-time |
One key advantage MCP servers have over traditional APIs is that they can be optimized for AI agents: context-efficient and less verbose. Traditional APIs, in contrast, return highly structured but verbose outputs that can clog an agent’s context and confuse it.
If agent developers build their own tools against a traditional API, they also need to handle that conversion and optimization themselves. Developers using a well-structured MCP server, however, can rely on integrations that are already optimized for agents out of the box, resulting in improved performance from the start.
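As a rough illustration of that difference, the sketch below shows the kind of trimming an MCP server can perform on the agent’s behalf: a verbose REST payload is reduced to the handful of fields an agent actually needs before it is returned as a tool result. The field names and payload shape here are hypothetical, not any particular vendor’s API.

```python
# Hypothetical example: condensing a verbose REST payload into an
# agent-friendly tool result. Field names are illustrative only.

def condense_incident(api_payload: dict) -> dict:
    """Return only the fields an agent needs to reason about an incident."""
    data = api_payload.get("data", {})
    attrs = data.get("attributes", {})
    return {
        "id": data.get("id"),
        "title": attrs.get("title"),
        "status": attrs.get("status"),
        "severity": attrs.get("severity"),
        "created": attrs.get("created"),
    }

# A raw API response might carry dozens of nested fields; the condensed
# version keeps the token footprint small for the model.
verbose_payload = {
    "data": {
        "id": "1239",
        "type": "incidents",
        "attributes": {
            "title": "Checkout latency spike",
            "status": "active",
            "severity": "SEV-2",
            "created": "2025-01-15T09:30:00Z",
            "customer_impact_scope": "checkout service, EU region",
            "notification_handles": [],
            # ...many more fields an agent rarely needs...
        },
    }
}

print(condense_incident(verbose_payload))
```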
MCP servers serve as a bridge between AI agents and tool providers, enabling AI agents to request information, trigger actions, or integrate with internal and external systems in a controlled, standardized, scalable, and extensible manner.
Why are MCP servers important?
MCP servers provide an alternative to the specialized development or complex integrations needed to utilize AI models. Other essential characteristics of MCP servers include:
Integration enablement: MCP servers provide a standardized, unified protocol that allows AI agents to connect to a wide range of systems without custom API patterns for each use case, removing the need for tailored, one-off integrations.
Security and control: MCP servers can establish clear rules for what agents can and cannot do, preventing unauthorized or unintended actions.
Extensibility: One MCP server can serve multiple AI clients, helping to reduce the duplication of integration logic and making tools available to any MCP-compatible AI code agent, such as Cursor, Claude Code, and Codex. New functionality can be added to a system by connecting a new server without changing core functionality or other agents.
Future-proofing: Because MCP standardizes integrations across the AI platforms that support it, agents built on the protocol are not tied to any one platform. Additionally, tool developers can stand up a single MCP server for their service, and LLM vendors can connect to that service through one protocol, greatly simplifying the integration process.
How do MCP servers work?
Architecturally, MCP servers exist between an MCP host (an AI-powered application or platform, integrated development environment [IDE], or tool acting as an agent) and data sources (local resources such as file systems and databases, or remote services such as APIs or cloud services).
MCP servers are built on the following framework:
Protocol foundation
MCP is based on a structured, standardized, and secure messaging format for agent-tool communication. The protocol defines how the client and server communicate, what messages look like, what actions can be taken, and how results are returned.
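Concretely, MCP messages use the JSON-RPC 2.0 format: each request carries a `jsonrpc` version, an `id`, a `method` (such as `initialize`, `tools/list`, or `tools/call`), and optional `params`, and each response echoes the `id` with a `result` or `error`. The short sketch below builds a `tools/list` request to show that envelope; the exact payloads a given implementation exchanges depend on the protocol version in use.

```python
import json

# MCP messages are JSON-RPC 2.0 objects. This request asks the server
# to enumerate the tools it exposes (the "tool discovery" step).
tools_list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

print(json.dumps(tools_list_request, indent=2))

# The server replies with a matching id and a "result" object, for example:
# {"jsonrpc": "2.0", "id": 1, "result": {"tools": [ ... ]}}
```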
Core functions
The server responds to requests from the AI agent (such as “check recent code changes on GitHub” or “pull the latest Salesforce report”), helps the agent select and invoke the appropriate MCP tools, and returns the requested results. Other core functions include advertising available tools (so clients can discover newly added capabilities), interpreting and executing commands, formatting results, handling errors, and providing meaningful feedback.
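As a sketch of what those core functions look like in practice, the example below uses the FastMCP helper from the official MCP Python SDK to expose a single tool that fetches recent commits from the public GitHub REST API. The server name, tool name, and response trimming are illustrative choices; a production server would add authentication, error handling, and pagination.

```python
from mcp.server.fastmcp import FastMCP
import requests

# Create an MCP server; FastMCP handles registration, tool discovery,
# message parsing, and result formatting on our behalf.
mcp = FastMCP("github-helper")


@mcp.tool()
def recent_commits(owner: str, repo: str, limit: int = 5) -> list[dict]:
    """Return the most recent commits for a GitHub repository."""
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/commits",
        params={"per_page": limit},
        timeout=10,
    )
    resp.raise_for_status()
    # Trim the verbose API payload to the fields an agent needs.
    return [
        {
            "sha": c["sha"][:7],
            "author": c["commit"]["author"]["name"],
            "message": c["commit"]["message"].split("\n", 1)[0],
        }
        for c in resp.json()
    ]


if __name__ == "__main__":
    # Serve the tool over stdio so any MCP-compatible host can connect.
    mcp.run()
```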
Extensibility
Developers can extend MCP servers with custom commands, query handlers, and response formats, which the agents that consume MCP responses can then use. MCP servers act like smart adapters for existing products and features, capable of taking a request handled by one tool (for example, “get today’s sales report from the CRM”) and delivering the result to an agent on demand.
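Following the CRM example above, the sketch below wraps a hypothetical reporting endpoint as an MCP tool. The endpoint URL, environment variables, and response fields are all placeholders; the point is that exposing a new capability means registering one more handler, not modifying the host or other agents.

```python
from datetime import date
import os

import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-adapter")

# Placeholder endpoint and token: substitute your CRM's real API here.
CRM_BASE_URL = os.environ.get("CRM_BASE_URL", "https://crm.example.com/api")
CRM_TOKEN = os.environ.get("CRM_TOKEN", "")


@mcp.tool()
def todays_sales_report() -> dict:
    """Fetch today's sales report from the CRM and return a compact summary."""
    resp = requests.get(
        f"{CRM_BASE_URL}/reports/sales",
        params={"date": date.today().isoformat()},
        headers={"Authorization": f"Bearer {CRM_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    report = resp.json()
    # Return only the summary fields an agent is likely to act on.
    return {
        "date": report.get("date"),
        "total_revenue": report.get("total_revenue"),
        "deals_closed": report.get("deals_closed"),
        "top_region": report.get("top_region"),
    }


if __name__ == "__main__":
    mcp.run()
```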
Lifecycle
The lifecycle of an MCP server request includes the following steps:
Registration: The client registers with the MCP server. This step includes negotiating protocol capabilities with the server, followed by an initialization notification.
Capability discovery: The client learns which protocol version and which MCP features the server implements.
Tool discovery: The client learns what tools and context the server offers (such as logs, metrics, traces, and spans in the Datadog context) through the server’s response, as in the example JSON shown here:
{
"tools": [
{
"name": "get_datadog_incident",
"description": "Retrieve detailed information about a specific Datadog incident by ID. This tool provides comprehensive incident details including status, severity, timeline, and associated users.",
"inputSchema": {
"type": "object",
"properties": {
"incident_id": {
"description": "The ID of the incident. Either a number (ex: 1239) or a UUID",
"type": "string"
},
"max_tokens": {
"description": "Optional. Maximum number of tokens to include in the response (default: 10000)",
"type": "number"
}
},
"required": [
"incident_id"
]
},
"annotations": {
"title": "Get Incident",
"readOnlyHint": true,
"destructiveHint": false,
"idempotentHint": true,
"openWorldHint": false
}
},
{
"name": "get_datadog_metric",
"description": ...,
"inputSchema": { ...
},
"annotations": { ...
}
},
{
"name": "get_datadog_trace",
...
},
...
]
}
Execution: The client sends requests to the MCP server, which executes them and returns the relevant actions, data, or responses.
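Using the `get_datadog_incident` tool from the discovery response above, an execution step consists of a `tools/call` request and a matching result. The sketch below builds the request and shows the general shape of a reply; the exact result fields depend on the server.

```python
import json

# Execution: the client calls a tool discovered earlier by name,
# passing arguments that match the tool's inputSchema.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_datadog_incident",
        "arguments": {"incident_id": "1239"},
    },
}

print(json.dumps(call_request, indent=2))

# A typical reply echoes the request id and returns the tool output as
# content blocks, for example:
# {
#   "jsonrpc": "2.0",
#   "id": 2,
#   "result": {
#     "content": [{"type": "text", "text": "Incident 1239: SEV-2, active, ..."}],
#     "isError": false
#   }
# }
```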
What are the benefits of MCP servers?
For DevOps, managed services, and other teams integrating AI into their workflows, the benefits of MCP servers include:
Real-time data access: MCP servers enable AI agents to query databases, APIs, and files directly in real time, so responses are up to date and accurate rather than based on stale or re-indexed data.
Reduced risk: By managing data access through a central point, MCP minimizes the risk of data leaks and ensures that data is handled securely and compliantly.
Reduced complexity and cost: MCP helps simplify and streamline complex and standalone integration efforts. Developers no longer need to write custom code for each new data source or AI model, significantly lowering development time and computational overhead.
Improved scalability and flexibility: MCP servers provide a universal standard, allowing any AI model to connect to various systems without structural changes. This makes MCP ideal for organizations using multiple platforms and databases.
Numerous teams can benefit from deploying MCP servers. Some examples include:
- Developers using AI agents like Cursor, Claude Code, and Codex
- Developers building their own AI agents that interface with third-party data and tools
- Data teams needing secure and user-friendly AI-assisted querying
- Tool/plugin developers building for AI platforms
Use cases for MCP servers
Specific use cases well-suited to deploying MCP servers include:
Data retrieval: Examples include querying databases, APIs, or internal knowledge sources via natural language prompts. MCP servers can also be used for real-time data analysis and reporting. For example, a WhatsApp MCP server allows users to generate summaries of message histories, categorize chats with AI-powered labels, and identify forgotten follow-ups, turning the personal messaging app into a structured communication hub.
Workflow automation: AI agents can handle complex, multi-step tasks with minimal human input, including software development, customer communication, and business operations. Examples include triggering business processes, creating and updating tickets, and generating reports.
Secure actions: A primary use case is the secure orchestration of automated workflows, such as a continuous integration/continuous deployment (CI/CD) pipeline where an AI agent, via the MCP server, creates a release branch, runs tests, deploys to staging, and sends notifications through a tool like Cisco Webex, all while ensuring actions are auditable and compliant. Other examples of secure actions include sending emails, updating systems, and making changes with safeguards.
Domain-specific assistance: Examples include industry-specific MCP servers (such as those in finance, legal, and healthcare) that enable AI to access regulated datasets safely. In finance, for example, MCP servers can connect with accounting platforms like Xero, providing access to professional financial data from multiple sources and enabling AI to perform tasks such as rating reports and analyzing market trends.
Common implementation challenges for MCP servers
As with any technology or platform integrated into an organization’s back end, there are distinct implementation challenges associated with deploying MCP. Some of these challenges include:
Security overhead and issues: DevOps, security, and related teams can struggle to balance access for MCP servers with strict permission requirements and compliance demands. Additionally, MCP servers might be deployed with weak authentication standards, lack integrity controls that prevent message tampering, be granted excessive permissions, or be vulnerable to indirect prompt injection attacks and unintended actions from unsupervised AI agents.
Protocol evolution: MCP is a new protocol, and changes to the specification can require updates and re-evaluation of the framework and its implementation for an organization. The newness of this platform requires research and consideration from teams who wish to deploy it. Additionally, a lack of familiarity with the protocol means developers and engineers might require a learning period to understand MCP’s deployment, message structure, and lifecycle.
Context efficiency: For MCP servers, the potential for a high volume of messaging between AI models and tools can result in correspondingly greater data volumes, including wasted tokens and increased traffic within an organization’s infrastructure. Teams need to be on the lookout for “chatty” or redundant connections that can lead to unexpected and greater costs from AI model providers and cloud services, such as data storage.
Maintenance: Development, DevOps, and other teams responsible for MCP servers should keep code repositories and integrations up to date as APIs and data sources evolve. Teams should particularly focus on configuration reviews, clean environment installations, checking network settings, reviewing API requests and outputs, maintaining user permissions, and testing servers for security vulnerabilities.
Learn more
Datadog MCP Server provides DevOps and related teams with a monitoring solution that links AI agents to Datadog’s monitoring tools, improving monitoring performance, facilitating troubleshooting, supporting root-cause analysis, and enhancing incident response effectiveness.
