
Tom Sobolik
dbt is one of the most popular solutions for data transformation and modeling. Many commercial data pipelines rely on dozens, or even hundreds, of individual dbt jobs. The data engineers, data platform engineers, and analytics engineers who own these pipelines need a testing framework that prevents data processing mistakes from compromising downstream analysis. Data quality testing catches these errors early, preventing regressions, building trust and reliability, and making an organization’s use of data easier to scale and maintain.
While standard dbt tests can confirm basic assertions, such as that an order_date column exists and isn’t null, they can’t tell you that yesterday’s data never arrived or that 80% of your order IDs suddenly started with TEST_ instead of your usual format. These gaps require more sophisticated assertions than dbt’s four native tests provide. dbt-expectations is a free and open source package maintained by Datadog that extends dbt’s core testing functionality with more complex assertions to handle these kinds of scenarios.
Thousands of organizations currently use dbt-expectations to run data quality checks on their dbt models. In this post, we’ll first introduce key data quality issues and show how you can use dbt-expectations to test for them. Then, we’ll share tips for monitoring dbt-expectations checks in CI/CD to ensure that your tests are effective.
Test for common data quality issues
By providing a suite of tests that can be implemented directly in your dbt models, dbt-expectations lets you gate your pipelines with Great Expectations-style assertions that extend beyond the capabilities of dbt’s built-in tests (unique, not_null, accepted_values, relationships). Let’s explore four primary scenarios where extended data quality testing functionality is needed:
- Complex data validation requirements
- Time series data quality
- Statistical validation
- Cross-column validation
Complex data validation requirements
Schema validation for nested or semi-structured data such as JSON is important to ensure that incomplete or incorrectly formatted data doesn’t break downstream processing and reporting. For JSON-like columns, a pragmatic approach is to assert that required keys are present and that values follow your format contract; for strings (emails, IDs, order codes), regex tests let you encode business rules directly in YAML. The expect_column_values_to_match_regex check provides this kind of flexible string matching: you can assert that every required key appears in a payload and that values follow the correct formatting.
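For example, the following configuration sketches what these checks might look like. The model name, column names, and regex patterns here are illustrative rather than prescriptive, and regex escaping rules vary by warehouse adapter.

models:
  - name: stg_customers
    columns:
      - name: email
        tests:
          # Illustrative email-format contract (not an exhaustive email regex)
          - dbt_expectations.expect_column_values_to_match_regex:
              regex: '[^@\s]+@[^@\s]+\.[^@\s]+'
      - name: payload
        tests:
          # For a raw JSON string column, assert that the required order_id key is present
          - dbt_expectations.expect_column_values_to_match_regex:
              regex: '"order_id"\s*:'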
Time series data quality
It’s critical to catch stale or incomplete data quickly to avoid impacting end users. You can use expect_grouped_row_values_to_have_recent_data to assert that new rows have landed within an SLA, and pair it with expect_row_values_to_have_data_for_every_n_datepart to ensure completeness within a specified time window. Additionally, dbt’s native source freshness configuration lets you alert on freshness issues at the source layer. For instance, the following configuration creates freshness and completeness checks for a model called user_signups.
models:
  - name: user_signups
    tests:
      - dbt_expectations.expect_grouped_row_values_to_have_recent_data:
          group_by: [source]
          timestamp_column: order_timestamp
          datepart: hour
          interval: 1
      - dbt_expectations.expect_row_values_to_have_data_for_every_n_datepart:
          date_col: order_date
          date_part: day
          interval: 1

Statistical validation
dbt-expectations’ statistical validation checks are great for catching anomalies where key metrics unexpectedly deviate from expected ranges. Distribution drift and outliers in metrics can signal upstream issues. Checks like expect_column_values_to_be_within_n_moving_stdevs let you test for deviations from a moving average without hand-tuning thresholds for each table, so you can catch unexpected behavior while accounting for seasonality and acceptable variation. For instance, when monitoring sales traffic, acceptable deviations in volume from holidays or planned campaigns shouldn’t trigger your monitors. But an extreme spike in orders from a single affiliate link (indicating bot traffic) or a near-zero plummet in mobile orders (indicating an issue with your store’s mobile site) would be statistically significant and require a response. The following example shows an expect_column_values_to_be_within_n_moving_stdevs test with source groupings to help catch these different scenarios.
models:
  - name: fact_orders_daily
    columns:
      - name: orders
        tests:
          - dbt_expectations.expect_column_values_to_be_within_n_moving_stdevs:
              # Treat each day as a point in the series
              date_column_name: order_date
              period: day
              # Compare each channel to its own recent behavior
              group_by: [source]
              # 4-week lookback smooths weekly seasonality
              trend_periods: 28
              # Flag anything beyond 3σ
              sigma_threshold: 3
              # Optional: treat failures on this revenue-critical model as blocking
              severity: error

Cross-column validation
To prevent subtle logic errors from propagating through your pipelines, it’s critical to validate the logical relationships between fields across columns. These relationships make up cross-field data contracts: timestamps must move forward, enums must align across tables, and composite keys (not necessarily individual fields) must be unique. Checks like expect_column_pair_values_A_to_be_greater_than_B can help you enforce logical ordering.
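As a sketch, a pair-comparison test might assert that an order never ships before it is created. The model and column names below are hypothetical:

models:
  - name: fact_orders
    tests:
      # Enforce logical ordering: an order can't ship before it's created
      - dbt_expectations.expect_column_pair_values_A_to_be_greater_than_B:
          column_A: shipped_at
          column_B: created_at
          or_equal: true                            # same-instant timestamps are acceptable
          row_condition: "shipped_at is not null"   # ignore orders that haven't shipped yet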
The expect_compound_columns_to_be_unique test enforces that a combination of values across multiple columns is unique. For instance, let’s say you have a model that consolidates e-commerce transactions from multiple sales channels, so an order ID can be repeated across channels, but the pair (order_id, source) should always be unique. The following configuration creates a compound uniqueness check for this combination of fields.
models:
  - name: fact_orders
    description: "Aggregated order-level fact table combining multiple order sources."
    tests:
      - dbt_expectations.expect_compound_columns_to_be_unique:
          column_list:
            - order_id
            - source
          row_condition: "order_status != 'cancelled'"
          severity: error

Tips for implementing dbt-expectations
Now that we’ve shown some key use cases, let’s discuss how to approach implementing dbt-expectations checks in your pre-production systems:
Monitor your checks within CI
When a model is edited and merged into the main branch without testing, downstream models may fail to build. Or, a critical table may break, leaving consumers frustrated with outdated or missing data. By adding checks to your CI pipelines, you can catch regressions before they lead to bad data, broken analysis, and pipeline failures.
dbt’s automated testing framework enables you to run your dbt-expectations tests whenever engineers make changes to data models. By using CI/CD visibility tools, you can monitor these test runs to track failures, identify flaky tests, and trace complex end-to-end runs to troubleshoot faster. For more information about dbt testing in CI, see the dbt documentation.
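To make this concrete, the following is a rough sketch of a CI job that builds and tests only the models affected by a change, using dbt’s state comparison. It uses GitHub Actions syntax as an example and assumes a production manifest has been downloaded to ./prod-artifacts; adapt the adapter, paths, and CI provider to your setup.

name: dbt-ci
on: pull_request

jobs:
  dbt-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install dbt and project packages
        run: |
          pip install dbt-core dbt-snowflake   # swap in your warehouse adapter
          dbt deps                             # installs dbt-expectations from packages.yml
      - name: Build and test changed models and their descendants
        run: dbt build --select state:modified+ --state ./prod-artifacts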
Avoid alert fatigue
If your data quality checks don’t clearly distinguish between critical problems in key components and non-blocking alerts for known edge cases, they can generate significant noise in your monitoring system. Rather than testing every column in every table, start with business-critical data points, such as those that directly power your most significant dashboards, reports, or downstream models used for KPIs. In practice, this could mean limiting thorough testing to the “critical path” of your pipelines and covering low-impact staging columns with a smoke test that simply verifies that data is flowing.
To ensure that your alerts are actionable and reach the appropriate stakeholders, it’s also important that your dbt-expectations checks are well-documented and use appropriate severity levels. By mapping tests to clear actions, you can remove ambiguity to speed up remediation and reduce alert fatigue:
- Define CRITICAL checks that should break the pipeline
- Define ERROR checks that should notify the ops team
- Allow legitimate edge cases through the pipeline with WARN labels (see the configuration sketch after this list)
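dbt itself supports two test severities, warn and error, along with warn_if and error_if thresholds, so one way to approximate these tiers is to make blocking checks fail the build outright, escalate warnings to errors past a threshold, and leave known edge cases as warnings. A minimal sketch, with illustrative model, column, and threshold values:

models:
  - name: fact_orders
    columns:
      - name: order_id
        tests:
          # Blocking check: any violation fails the build
          - not_null:
              config:
                severity: error
      - name: discount_pct
        tests:
          # Known edge cases exist, so warn on a small number of violations
          # and escalate to a blocking error past a threshold
          - dbt_expectations.expect_column_values_to_be_between:
              min_value: 0
              max_value: 100
              config:
                severity: error
                warn_if: ">0"
                error_if: ">50"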
It’s critical to use your test configurations to document the reasoning behind specific alert thresholds so that responders can interpret triggered monitors, along with tips for what to investigate in case of failure, who to contact, and so on. For instance, the description field for each of your high-signal tests could include information such as the following (see the configuration sketch after this list):
- Assumption: describe the expected operating condition that the test is designed to enforce
- Owner and impact: declare which potentially affected teams to ping in the event of a failure
- Investigation steps: list any standard diagnostic strategies for this test
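One way to capture these fields alongside the test itself is through a meta block in the test’s config, assuming a dbt version that supports meta on data tests. The keys and wording below are illustrative:

models:
  - name: fact_orders_daily
    columns:
      - name: orders
        tests:
          - dbt_expectations.expect_column_values_to_be_within_n_moving_stdevs:
              date_column_name: order_date
              sigma_threshold: 3
              config:
                severity: error
                meta:
                  assumption: "Daily order volume stays within 3 sigma of its recent trend outside of planned campaigns"
                  owner: "analytics-engineering; failures affect the revenue KPI dashboard"
                  investigation: "Check the upstream ingestion job first, then compare order counts by source for spikes or drops"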
Ensure data quality in your dbt models
Data quality checks are an essential early warning system that prevents regressions, builds trust in downstream dashboards, and lets teams scale changes to their data pipelines with confidence. dbt-expectations strengthens those gates by extending dbt’s native tests to cover complex validations (schema and regex), time-series freshness and completeness, statistical distributions and outliers, and cross-column logic and compound uniqueness, so the same CI checks that keep code healthy can keep data healthy, too.
dbt-expectations is an open source package maintained by Datadog, and it’s designed to meet you where you already work: in your models and your CI/CD pipelines. Check out the documentation to get started.
Datadog Data Observability and CI Test Optimization can help you monitor your pipelines and tests to maintain their health and performance. For more information, see our documentation. To get started with Datadog, sign up for a free trial.





