
By Suraj Tikoo and Addie Beach
This guest blog comes from Suraj Tikoo, an Accenture consultant and Datadog Ambassador. Suraj specializes in helping clients integrate cloud solutions into their systems.
Testing is a critical part of modern software development. From user-facing UIs to complex backend services, every component needs to be thoroughly validated before release. But adopting new testing tools can be difficult, often requiring teams to adjust their workflows to accommodate tooling limitations.
In a recent project, I worked with a client who struggled to identify a flexible testing solution. Much of their work involved handling sensitive data subject to strict guardrails, making it difficult to use off-the-shelf tools. The client already used Datadog for performance monitoring. However, identifying how the platform could work for their testing needs was proving a challenge. Drawing on my experience implementing Datadog for previous clients, I helped the team configure Datadog Synthetic Monitoring to cover all of their use cases, enabling them to accelerate their testing without additional tools.
In this post, we’ll explore:
- The core building blocks of a robust testing framework
- How the client solved key challenges with Datadog
Core building blocks of a robust testing framework
When creating a testing strategy, teams typically look for tools that can meet four key requirements:
| Testing requirement | Purpose |
| --- | --- |
| Configuration management | Supports dynamic data and environment-specific settings |
| Reusable modules | Promotes modularity and reduces redundancy |
| Test execution scheduling | Controls how and when tests run |
| Reporting and dashboards | Provides clear insights into test outcomes |
Platforms offer different combinations of these building blocks and implement them in various ways. These differences can make it challenging for teams to determine which platforms best meet their needs. In this case, my client wasn’t sure how to use these components to build a strategy to support their collection of large-scale, secure workflows. Let’s look at how my team was able to help them do this using Datadog.
Configuration management
Flexible configuration is essential for designing tests that can easily adapt to any scenario an app might encounter. Datadog enabled us to do this in a few practical ways, using features that provided better resiliency with minimal effort.
First, we identified elements that frequently appeared in the client’s testing scenarios and defined them as global variables. This let us reuse environment-specific CSS and XPath selectors, URLs, and credentials across multiple browser tests, without maintaining each test individually.
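For example, once a staging base URL and a test account are stored as global variables (the names below are hypothetical), browser test steps such as navigations and text inputs can reference them with Datadog's double-brace syntax:

```text
Navigate to {{ STAGING_BASE_URL }}/login
Type {{ TEST_USER_EMAIL }} into the email field
```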

Additionally, we accounted for a variety of complex scenarios by injecting JavaScript assertions into the client’s tests. JavaScript assertions enable teams to assess conditions outside of Datadog’s default interactions and variable extraction options. For example, to evaluate flows that involved user logins, the client needed to be able to connect their tests to internal authentication services. By including JavaScript assertions that handled requests to these services, we effectively integrated authentication functionality—including multi-factor authentication—into their tests.
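To illustrate the pattern (this is a rough sketch, not the client's actual integration), a custom JavaScript assertion can verify that a login against an internal authentication service succeeded by inspecting the session token it issues. The storage key and token claim names below are hypothetical:

```javascript
// Custom JavaScript assertion body: the step passes when it returns true.
// It checks that the login flow stored a session token from the internal
// authentication service; the key name "internal_auth_token" is illustrative.
const token = window.localStorage.getItem("internal_auth_token");
if (!token) {
  return false; // no token means the authentication flow never completed
}

try {
  // If the token is a JWT, also confirm it has not expired
  const payload = JSON.parse(atob(token.split(".")[1]));
  return !payload.exp || payload.exp * 1000 > Date.now();
} catch (e) {
  // Token is opaque (not a JWT); treat its presence as a successful login
  return true;
}
```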
Reusable modules
Effective testing strategies should offer straightforward tools for creating reusable components that help developers scale their test suites efficiently. Global variables can help with this to some degree, but developers often need to replicate larger sets of steps across multiple tests. When application flows change, each of those tests needs to be maintained individually.
To reduce that overhead, Datadog provides subtests that behave like functions or methods in programming. These can be called across multiple browser tests as reusable units.

We used subtests to modularize common, critical user flows within the client’s app, such as:
- User logins: Much of the client’s app required user authentication for access, which meant developers would—without modularized components—need to manually add login functionality to nearly every test. Subtests allowed us to define this flow once and then easily reuse it throughout the test suite.
- Payment authorization: The client’s app used third-party payment gateways to help process user transactions. To ensure these gateways were reliable, developers needed to verify them repeatedly across different tests. Subtests made it easy to build the functionality once and then integrate it wherever needed.
This modularity enabled us to create clean, maintainable, and scalable test suites, drastically reducing the effort required to update or expand their test coverage.
Test execution scheduling
How and when tests are run can significantly affect resource usage and overall test coverage. Test scheduling is often shaped by system limitations and business requirements, including how user sessions and credentials are managed. Most teams use either parallel or sequential testing. While parallel testing is faster and often more resource-efficient, it also makes it harder to persist login states across tests, requiring new authentication tokens for each run.
To account for user access limitations, the client needed to run their tests sequentially to properly rotate their licenses. This quickly became unwieldy, with scheduled test runs frequently conflicting with each other, leading to test errors and premature failures.
To simplify their test execution flows, we used the datadog-ci command-line interface (CLI) to create controlled batches of tests. We built 20 such batches, each containing 10 test cases with over 100 steps per case. In total, we automated around 200 test cases without any issues, effectively resolving the user dependency challenge and ensuring reliable execution. You can read more about how we achieved this in the next section.
Reporting and dashboards
Lastly, teams need a platform that makes it easy to analyze and share test results. Datadog Synthetic Monitoring provides a few out-of-the-box features that helped the client visualize their test runs, identify failures and performance bottlenecks, and compile relevant findings into digestible overviews.
These included:
- Clear pass or fail statuses
- Screenshot captures for each step
- Integrations with notification tools like Slack, PagerDuty, and email
- Historical overviews of response time

In addition to these features, the client wanted a way to share key results and trends with stakeholders. For example, part of their workflow involved compiling regular reports on how many test scenarios their teams were covering. To support this, we created a custom dashboard showing which test batches had run and their status. Then, using the Datadog API, we automated the delivery of this information to managers through regularly scheduled emails.
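As a rough sketch of the reporting step, the script below pulls recent results for a few browser tests through the Datadog Synthetics API and prints a pass/fail summary that could feed into an email job. The test public IDs are placeholders, the response field names should be verified against the current API reference, and the email delivery itself is omitted:

```javascript
// Sketch only: summarize recent synthetic browser test results via the Datadog API.
// Requires Node 18+ (global fetch) and DD_API_KEY / DD_APP_KEY environment variables.
const TEST_PUBLIC_IDS = ["abc-123-def", "ghi-456-jkl"]; // hypothetical public IDs

async function latestResults(publicId) {
  const res = await fetch(
    `https://api.datadoghq.com/api/v1/synthetics/tests/browser/${publicId}/results`,
    {
      headers: {
        "DD-API-KEY": process.env.DD_API_KEY,
        "DD-APPLICATION-KEY": process.env.DD_APP_KEY,
      },
    }
  );
  if (!res.ok) throw new Error(`Datadog API returned ${res.status} for ${publicId}`);
  return res.json();
}

(async () => {
  let passed = 0;
  let failed = 0;
  for (const id of TEST_PUBLIC_IDS) {
    const { results = [] } = await latestResults(id);
    for (const run of results) {
      // Treat a run with zero reported errors as a pass (verify these field
      // names against the Synthetics API reference before relying on them).
      const errorCount = run.result ? run.result.error_count : undefined;
      if (errorCount === 0) passed += 1;
      else failed += 1;
    }
  }
  console.log(`Synthetic batch summary: ${passed} passed, ${failed} failed`);
  // This summary could then be handed off to an email service or internal mailer.
})();
```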
Solving key challenges with Datadog
As we helped the client develop their test suite, we encountered several challenges common to large-scale browser testing. By applying the building blocks provided by Datadog Synthetic Monitoring, we solved these issues effectively.
Large forms with dynamic fields
While developing the initial tests, we encountered a scenario involving a large form with many input fields. Creating a separate test step for each field would have made the client’s tests bulky, hard to maintain, and slow to execute.
To address this, we used custom JavaScript assertions to capture and reuse field values. We stored and retrieved these values with the browser's localStorage API, which persists data without an expiration time.
Here are a few examples of the code we used:
| Purpose | Code |
| --- | --- |
| Store field values | `localStorage.setItem("email", document.querySelector("#email").value);` |
| Retrieve field values for assertions | `const email = localStorage.getItem("email");` |
These assertions allowed us to significantly reduce the number of test steps, making the tests lighter, cleaner, and much easier to maintain.
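Putting those pieces together, a single custom JavaScript step can capture several field values at once; the field IDs below are hypothetical:

```javascript
// Custom JavaScript step (hypothetical field IDs): capture several form values
// in one step so later assertions can reuse them without one step per field.
["email", "phone", "billing-zip"].forEach((id) => {
  const field = document.querySelector(`#${id}`);
  if (field) {
    localStorage.setItem(id, field.value);
  }
});
// Return true so the step itself passes; the stored values are asserted later.
return true;
```

A later assertion can then read any of these values back with localStorage.getItem, as shown in the table above.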
MFA
MFA was a critical part of the client’s app, used to control access to sensitive information. They found that many traditional testing tools struggled with automating MFA, often requiring manual intervention or complex workarounds.
With Datadog’s support for JavaScript injection, the client integrated MFA directly into secure user test scenarios. For example, one-time password (OTP) verification was an important part of the client’s authentication processes. While Datadog provides out-of-the-box support for popular OTP generation methods, including time-based ones, the client used custom OTP services that weren’t automatically covered. By injecting JavaScript into the client’s test steps, we connected these services to their browser tests without relying on external tooling. And because MFA was required for many user flows, we implemented subtests to automate and reuse their authentication logic across multiple tests.
Logging the synthetic user in was only the first step, though—the client also needed to persist session tokens across test steps to minimize the need for constant re-authentication. Implementing localStorage enabled us to maintain session state throughout the entire test.
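For the session-persistence piece, a custom JavaScript step can cache the token the authentication service issues so that later steps and subtests can skip re-authentication. This is a sketch of the pattern rather than the client's exact implementation, and the cookie and storage key names are hypothetical:

```javascript
// Custom JavaScript step run right after MFA completes: cache the session token
// so later steps can detect an existing session instead of logging in again.
// Assumes the token is exposed in a readable (non-HttpOnly) cookie.
const token = document.cookie
  .split("; ")
  .find((c) => c.startsWith("session_token="));

if (token) {
  localStorage.setItem("cached_session_token", token.split("=")[1]);
}
// Pass the step as long as a session token was found.
return Boolean(token);
```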
Test backups
Given the large scale of their system, one of the client’s key concerns was being able to recover their test suite if any issues arose. Their testing data represented hundreds of different scenarios, and losing it would have resulted in weeks, if not months, of reconstruction.
We found the solution in Terraform. Datadog’s robust Terraform support allowed us to treat the client’s test cases as Infrastructure as Code (IaC). By defining tests in Terraform, we could version-control their configurations, spin up or tear down test configurations across multiple environments, and maintain full reproducibility and auditability. You can view a sample of our Terraform code for more details on this setup.
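The client's actual Terraform files aren't reproduced here, but a minimal sketch of a synthetic test managed through the Datadog provider's datadog_synthetics_test resource looks roughly like the following (the names and values are illustrative, and some field names may differ across provider versions):

```hcl
# Minimal illustrative example (not the client's configuration): a simple HTTP
# synthetic test managed as code with the Datadog Terraform provider.
resource "datadog_synthetics_test" "login_health" {
  name      = "Login service health check"
  type      = "api"
  subtype   = "http"
  status    = "live"
  locations = ["aws:us-east-1"]
  tags      = ["team:qa", "managed-by:terraform"]

  request_definition {
    method = "GET"
    url    = "https://staging.example.com/health" # hypothetical endpoint
  }

  assertion {
    type     = "statusCode"
    operator = "is"
    target   = "200"
  }

  options_list {
    tick_every = 900 # run every 15 minutes
  }
}
```

Because the definition lives in version control, recreating the test, or replicating it in another environment, is a matter of running terraform apply with that environment's variables.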
Limited user access
A major issue the client encountered was scheduling test executions. Their app required session-based testing, meaning each test required a user login. However, they had limited access to user licenses, which resulted in frequent conflicts between test runs. For example, if a second test started while the first was still running with the same credentials, the new login would force the first session to log out and the initial test would fail.
Initially, we experimented with simply scheduling sequential test runs within Datadog, but we ran into the same problem. While we could roughly estimate how long it would take each test to run, test cases occasionally took longer than expected to execute and would overlap with the next scheduled batch, causing the new login to terminate the still-running test. This created a cascading failure scenario where subsequent tests could be caught in a loop of logouts and failures, making scheduling an unreliable solution.
Instead, we turned to controlled batching, which enabled us to schedule large test runs with minimal effort. In this approach, each test executes only after the previous tests have fully completed, ensuring no session conflicts. Additionally, we could finely control the order of test runs as needed via the datadog-ci CLI:
datadog-ci synthetics run-tests --test <test_id_1> --test <test_id_2> --file <config.json>
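Because each batch is triggered from the command line, runs can be chained with standard shell operators so the next batch starts only after the previous one exits. Here is a minimal sketch reusing the command form above, with placeholder test IDs and config files:

```bash
# Run batch 2 only after batch 1 has fully completed (placeholders throughout)
datadog-ci synthetics run-tests --test <batch_1_test_id> --file <batch_1_config.json> && \
datadog-ci synthetics run-tests --test <batch_2_test_id> --file <batch_2_config.json>
```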
This approach helped us achieve comprehensive test coverage while still working within the client’s license restrictions.
Creating a strong testing strategy with Datadog Synthetic Monitoring
By implementing the client’s tests within Datadog, we streamlined their workflows and accelerated their testing efforts. With browser and API tests in a single, unified platform, we minimized context switching, built highly reusable modules, and benefited from powerful automation features.
You can learn more about how to create your own synthetic tests by reading the Datadog Synthetic Monitoring documentation. Or, if you’re new to Datadog, sign up for a 14-day free trial.