
Bowen Chen
A growing number of engineering organizations have adopted or are trialing agentic AI coding tools and LLMs in an effort to increase their teams’ development velocity. If you’re a developer, this means you’ve likely had to try out different agentic tools and models and figure out how to best incorporate them into your existing workflows. Some of you are likely well on your way to becoming AI power users, but many of you might have found yourselves frustrated when AI agents produce derivative solutions or broken code, leaving you to wonder, “Am I the issue?”
To share some of the successes in effectively using AI coding tools at Datadog, we talked to several Datadog developers and asked them to share tips and practices that have helped them achieve better results when using AI agents and LLMs. We’ll discuss the following tips and how they can positively impact your AI workflow:
- Sometimes, the answer is just to think harder
- Spend more time planning so you can spend less time fixing
- Agents are capable of a lot, but with the right access, they’re capable of a lot more
- It’s not always about solving the problem but rather about how you should solve it
Tell your AI model to (literally) think harder
Most LLM providers charge based on token usage, and AI models consume a variable number of tokens, up to a maximum budget you set, depending on the breadth, complexity, and detail of your input. When you’re frustrated with the accuracy or depth of a given answer, telling an LLM to literally think harder (unlike telling a human the same thing) often improves the quality of its output by prompting the model to consume more tokens when generating a response. For example, Claude Code raises its token budget when the model encounters keywords such as “think harder,” “think longer,” or “ultrathink.”
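These keywords are specific to Claude Code. If you call a model through an API instead, the equivalent lever is usually an explicit reasoning-token budget. The sketch below assumes the Anthropic Python SDK’s extended thinking parameter; the model ID, budget values, and prompt are placeholders.

```python
# Minimal sketch: raising the reasoning budget directly through the Anthropic
# Messages API. A larger budget_tokens value lets the model "think" longer at
# the cost of more tokens. Model ID and numbers are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=8000,                   # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 6000},
    messages=[
        {"role": "user", "content": "Explain why this query plan regressed and propose a fix."}
    ],
)

# The response interleaves thinking blocks with the final answer; print the answer.
for block in response.content:
    if block.type == "text":
        print(block.text)
```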
If you’re frequently experiencing a gap between the model output and your expectations, how hard your model is “thinking” most likely isn’t the core issue. As developers, we often expect models to read our intent. When they misinterpret our ask or decide on an implementation that contradicts what may seem self-evident to us, it can be easy to blame the model and attribute the failure to its technical constraints. However, the success of your AI-assisted development is largely dependent on how well you identify and communicate the following information:
- Problem description: What is the pain point or problem statement I am trying to address?
- Desired execution: How should my desired solution address the previous problem?
- Problem bounding: What domain knowledge or additional reference material do I need to provide?
When you give your AI agent a prompt or task, it executes primarily based on the context provided in your input. So while it may technically have access to your system architecture, object and function syntax, and other resources that would be relevant in handling your prompt, the agent may not know to reference these tools unless it’s explicitly told to do so.
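Putting the three elements above into practice, a prompt for a bug fix might look like the following sketch. Every file path, endpoint, and detail is hypothetical; the point is that the problem, the desired outcome, and the bounding references are each stated explicitly.

```python
# Illustrative prompt template covering problem description, desired execution,
# and problem bounding. All paths and details below are hypothetical.
PROMPT = """\
Problem description:
- POST /v1/orders intermittently returns 500s under load; logs point to connection pool exhaustion.

Desired execution:
- Wrap the order-repository calls in retry-with-backoff and return a 503 with a Retry-After header
  instead of a bare 500. Do not change the public API schema.

Problem bounding:
- Read services/orders/repository.py and services/orders/handlers.py before proposing changes.
- Follow the database conventions documented in docs/db-guidelines.md.
"""

print(PROMPT)  # paste into your agent's chat, or send it programmatically
```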
Conversely, it’s equally important to not overextend and provide the agent with too many unnecessary resources. Models have explicit context window limits—the hard cap on how many input tokens they’re able to read—as well as implicit ceilings on how much context they’re able to effectively address. The more context that’s made available to the LLM, the greater the likelihood that it introduces unnecessary complexity that degrades its efficacy. We’ll discuss how to tackle this issue in the next section.
Introduce a planning phase to your agentic coding workflow
With generative AI workflows—and across the entire software delivery life cycle (SDLC)—implementing delivery guardrails can greatly reduce costs and create better results by addressing issues earlier. One guardrail that has helped our developers tackle complex coding problems with AI is to implement a dedicated planning phase that takes place before the agent generates or modifies any code.
After you’ve identified and informed your LLM of the problem description, desired execution details, and its problem bounding, ask the model to break down its execution plan into individual steps, and have it explain the reasoning behind each step, as well as how it plans on implementing it. When an AI agent is unable to accomplish a particular task (due to insufficient context or other errors that the model encounters), it often adds additional layers of complexity or resorts to derivative solutions, such as generating backup code or entire shell scripts in attempts to patch the root error. By forcing the agent to type out its plan and by reviewing the implementation details it outlines, you can shift your feedback left and iteratively guide the agent to an implementation that is more correct and better aligned with the solution you’re seeking.
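One lightweight way to enforce this is to make the planning step part of the prompt itself. The wording below is just one possible formulation, not a required template.

```python
# Illustrative planning-phase prompt: the agent must write out and justify its
# plan, and stop for review, before it is allowed to touch any code.
PLANNING_PROMPT = """\
Before writing or modifying any code, produce an implementation plan:
1. Break the work into individual, ordered steps.
2. For each step, explain why it is needed and how you intend to implement it
   (files touched, functions added or changed, tests affected).
3. Call out any assumptions you are making and any context you still need from me.
Do not generate code until I have reviewed and approved the plan.
"""
```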
Also, as previously mentioned, LLMs often become less effective as the current context window grows. When an LLM runs out of context, the previous conversation has to be summarized and carried over into a new context window, and during this process you risk losing many of the fine adjustments made up to that point. By refining an implementation plan, you can distill what you want the agent to achieve into a compact form and carry it over to a new context window yourself, enabling the agent to make decisions without the degradation that comes from an overextended context.
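A simple way to do this is to have the agent write the approved plan to a file, then seed a fresh session with that file instead of a summary of the old conversation. The file name and re-seeding prompt below are arbitrary conventions, not requirements of any particular tool.

```python
# Sketch: persist the distilled plan so a fresh context window can start from it.
from pathlib import Path

approved_plan = """\
Goal: replace bare 500s on POST /v1/orders with retry-backed 503 responses.
Steps:
1. Wrap repository calls in retry-with-backoff (3 attempts, jittered).
2. Map pool-exhaustion errors to 503 + Retry-After in handlers.py.
3. Extend tests/orders/test_handlers.py with a pool-exhaustion fixture.
Constraints: no public API schema changes; follow docs/db-guidelines.md.
"""
Path("PLAN.md").write_text(approved_plan)

# In a new session, reference the file rather than replaying the old conversation:
RESEED_PROMPT = "Read PLAN.md and implement step 1 only. Stop and report back before starting step 2."
```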
Connect your AI client to MCP servers to access tools in your tech stack
You may already be familiar with using agentic AI to debug and improve local source code, but what about problems that require visibility into external systems? For example, in order to update Terraform configs, an agentic tool first needs to validate the live state of your environment to avoid resource conflicts and drift, but it can’t access this information via your cloud provider’s SDKs and API endpoints unless it has been assigned the proper IAM permissions. Even then, agents that fetch data through these entry points can run into frequent failures resulting from authorization errors, missing request parameters, or improperly parsed nested response structures.
Many cloud platforms now offer MCP servers that are designed to handle requests from LLMs and AI agents to access data and tools specific to their cloud services. By configuring and authorizing various MCP servers for your AI agent, you’re effectively extending the range of data it’s able to access and tasks it’s able to accomplish. For example, by configuring the Atlassian MCP Server, our developers are able to provide AI agents with context into team and service-specific engineering knowledge that we document in Confluence. This enables our developers to ask AI agents questions about service architecture or quickly fetch deployment and rollback procedures during incident response.
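Exactly how you register an MCP server depends on the client you use, but many clients read a JSON configuration that lists each server and how to reach it. The sketch below mirrors that common layout; the server names, URL, and package are placeholders, and you should check your own client’s documentation for the exact schema and file location.

```python
# Sketch of an MCP client configuration written out as JSON. Schema details vary
# by client; every name, URL, and package below is a placeholder.
import json

mcp_config = {
    "mcpServers": {
        # A remote server exposed over HTTP by a cloud platform (placeholder URL)
        "example-observability": {
            "type": "http",
            "url": "https://mcp.example.com/api/mcp",
        },
        # A local server launched as a subprocess (placeholder package)
        "example-docs": {
            "command": "npx",
            "args": ["-y", "@example/docs-mcp-server"],
        },
    }
}

with open(".mcp.json", "w") as config_file:
    json.dump(mcp_config, config_file, indent=2)
```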
Datadog also offers our own remote MCP Server (available in Preview) that gives your AI agents access to Datadog tools and telemetry data. You can learn more about how to integrate Datadog observability into the Codex CLI and the different tools that the Datadog MCP Server offers in our respective blog posts.
Broaden your perspective on what AI should be used for
When developers discuss the power of agentic tools and LLMs, they often focus on how AI can act as a junior developer that takes manual development and testing work off their plate. In this context, AI is helping the developer solve a problem faster. But what AI also excels at, and what is more easily overlooked, is helping the developer quickly identify alternative solutions to a problem.
When you’re starting at square one with a problem, testing potential solutions can be quite costly in both time and engineering resources. You can use an LLM to more quickly explore solutions and generate first-pass implementations that can be benchmarked. To manage costs, you can select a premium or higher-performance base AI model to conduct the research and planning phase of your implementation, then switch to a more economical model when it’s time to implement the actual solution in code.
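If you’re scripting this yourself rather than working through an agentic IDE, the split can be as simple as routing the two phases to different models. The sketch below assumes the Anthropic Python SDK; the model IDs are placeholders for whichever higher-tier and lower-tier models your provider offers.

```python
# Sketch: a pricier model handles research and planning, a cheaper one handles
# the mechanical implementation. Model IDs are placeholders.
import anthropic

client = anthropic.Anthropic()

def ask(model: str, prompt: str) -> str:
    response = client.messages.create(
        model=model,
        max_tokens=4000,
        messages=[{"role": "user", "content": prompt}],
    )
    return "".join(block.text for block in response.content if block.type == "text")

plan = ask(
    "claude-opus-4-20250514",  # higher-tier model: exploration and tradeoff analysis
    "Compare three approaches to rate limiting our public API, given these constraints: ...",
)
code = ask(
    "claude-3-5-haiku-latest",  # lower-tier model: implement the chosen option
    f"Implement option 2 from this plan as a Python module:\n{plan}",
)
```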
During the research phase, ask your LLM to consider your problem constraints, explore different potential solutions, and conduct a cost-benefit analysis of each; this can help you quickly narrow down top candidates. For example, agentic features such as Gemini Deep Research can browse weeks’ worth of research material and consolidate findings directly relevant to your constraints and needs. Once you have an idea of which solutions you’d like to look into further, you can ask your LLM to find supporting links to research papers, documentation, or public repository examples. You can also use LLMs as technical reviewers: once you have an implementation plan or RFC, you can present it to the LLM and ask it to poke holes in your design. This can prompt deeper investigation into different tradeoffs and leave you better prepared for when stakeholders review your design.
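A reviewer prompt can be as simple as the sketch below; the RFC path is hypothetical, and the instruction to critique rather than redesign keeps the model focused on finding gaps.

```python
# Illustrative design-review prompt; the document path is hypothetical.
REVIEW_PROMPT = """\
Act as a skeptical technical reviewer. Read docs/rfc-rate-limiting.md and:
1. List the failure modes and edge cases the design does not address.
2. Flag any assumption that lacks supporting data or a cited benchmark.
3. For each concern, suggest what evidence or prototype would resolve it.
Do not propose an alternative design; only critique this one.
"""
```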
Learn more about AI at Datadog
At Datadog, our AI engineering teams are constantly exploring new AI technologies and methods to improve agentic AI output and make our own AI solutions even better. You can learn more about Datadog AI Agents and assistants, including Bits AI, Bits AI Dev, and Bits AI SRE, in our blog posts.
If you don’t already have a Datadog account, sign up for a free 14-day trial today.