dbt-native anomaly detection via statistical test generation
Elementary generates dbt test macros that collect time-series metrics (row counts, freshness, schema changes) directly within dbt runs and apply statistical anomaly detection algorithms (z-score, IQR, moving-average baselines) to flag deviations. Tests execute natively in dbt's DAG and store results in Elementary's metadata schema, eliminating the need for separate monitoring infrastructure and allowing anomalies to fail dbt runs.
Unique: Implements anomaly detection as dbt test macros that execute within the dbt DAG rather than as external sidecars, enabling tests to fail dbt runs and store results in the warehouse's native metadata schema. Uses configuration-as-code YAML for threshold definition, allowing version control of detection rules alongside dbt models.
vs alternatives: Tighter dbt integration than Soda or Great Expectations (no separate orchestration needed), and lower operational overhead than cloud-native platforms like Databand since anomalies execute during standard dbt runs rather than requiring separate monitoring infrastructure.
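As an illustration, an anomaly test can be attached to a model in its schema YAML. The test names (`elementary.volume_anomalies`, `elementary.schema_changes`) and parameters below follow Elementary's documented test interface, but the model name is a placeholder and parameter availability should be checked against the installed package version:

```yaml
# models/schema.yml -- anomaly detection configured as code, in dbt YAML
models:
  - name: orders                        # placeholder model name
    tests:
      - elementary.volume_anomalies:
          timestamp_column: created_at  # bucket rows by this column
          anomaly_sensitivity: 3        # roughly a z-score threshold
      - elementary.schema_changes       # flag added/removed/retyped columns
```

Because this lives in the dbt project, threshold changes go through the same code review and version control as model changes.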
dbt test result aggregation and impact lineage tracking
Elementary's dbt package and CLI parse dbt artifacts (manifest.json, run_results.json) to extract test metadata, execution times, and failure reasons, then correlate test failures with downstream model dependencies to surface which datasets are affected. Test lineage is stored in Elementary's metadata schema, enabling root-cause analysis by tracing failures upstream through the DAG.
Unique: Parses dbt's native artifacts (manifest.json, run_results.json) to build lineage without requiring additional instrumentation or API calls to dbt Cloud. Stores lineage in the warehouse itself (Elementary's metadata schema) rather than external graph databases, enabling SQL-based impact queries.
vs alternatives: More lightweight than dbt Cloud's native lineage (no SaaS dependency) and more dbt-specific than generic data lineage tools like OpenMetadata, which require custom connectors. Integrates test results directly into lineage, unlike dbt Cloud which separates test results from DAG visualization.
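The artifact-driven approach can be sketched from dbt's native files alone. The snippet below is an illustrative simplification (not Elementary's implementation): it reads `parent_map`/`child_map` from manifest.json and test statuses from run_results.json, then walks the DAG to list downstream models affected by each failed test.

```python
import json
from collections import deque

def affected_models(manifest_path: str, run_results_path: str) -> dict:
    """Map each failed test to the downstream models it impacts,
    using only dbt's native artifacts (no dbt Cloud API calls)."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    with open(run_results_path) as f:
        run_results = json.load(f)

    failed = [r["unique_id"] for r in run_results["results"]
              if r["status"] in ("fail", "error")]

    impact = {}
    for test_id in failed:
        # The tested models are the test node's parents in the DAG.
        tested = [p for p in manifest["parent_map"].get(test_id, [])
                  if p.startswith("model.")]
        # Breadth-first traversal downstream through child_map.
        seen, queue = set(), deque(tested)
        while queue:
            node = queue.popleft()
            for child in manifest["child_map"].get(node, []):
                if child.startswith("model.") and child not in seen:
                    seen.add(child)
                    queue.append(child)
        impact[test_id] = sorted(seen)
    return impact
```

Because the lineage comes straight from manifest.json, the same logic works for dbt Core and dbt Cloud projects alike.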
elementary cloud synchronization and team collaboration
Elementary Cloud provides a managed SaaS platform that syncs monitoring data from open-source Elementary instances, enabling team collaboration, centralized dashboards, and advanced features (column-level lineage, AI-powered tests, team management). Monitoring data reaches the Cloud either pushed from the Elementary CLI's `send-report` command or via API, while the underlying data stays in the warehouse, maintaining data residency alongside the collaborative UI.
Unique: Provides an optional managed Cloud platform that syncs with open-source Elementary instances via CLI push, enabling teams to upgrade to Cloud features without migrating data or changing dbt configuration. Maintains data residency by querying the warehouse directly rather than copying data into the Cloud.
vs alternatives: More flexible than dbt Cloud's observability (works with any dbt version) and more collaborative than self-hosted dashboards. Optional Cloud layer enables teams to start with open-source and upgrade without rearchitecting.
anonymous usage tracking and telemetry collection
Elementary CLI collects anonymous telemetry (command usage, feature adoption, error rates) via an optional tracking module (elementary/tracking/tracking_interface.py) to inform product development. Tracking is opt-out and does not collect sensitive data (SQL, credentials, table names), enabling the Elementary team to understand adoption patterns without compromising user privacy.
Unique: Implements opt-out telemetry with explicit privacy safeguards (no SQL, credentials, or table names collected), enabling product insights without compromising user data. Telemetry module is pluggable (elementary/tracking/tracking_interface.py), allowing users to implement custom tracking backends.
vs alternatives: More privacy-conscious than many open-source projects (explicitly excludes sensitive data) but less privacy-friendly than fully opt-in telemetry. Provides transparency about what data is collected.
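The pluggable-backend idea can be sketched as follows. The class and method names here are illustrative assumptions, not the actual interface in elementary/tracking/tracking_interface.py; the point is the shape: a small abstract surface that custom backends implement, with sensitive fields stripped before anything is recorded.

```python
from abc import ABC, abstractmethod

class TrackingBackend(ABC):
    """Hypothetical telemetry backend interface: events carry only
    command/feature metadata, never SQL, credentials, or table names."""

    @abstractmethod
    def track_event(self, name: str, properties: dict) -> None: ...

class NoOpTracking(TrackingBackend):
    """Opt-out expressed in code: swap this in to disable telemetry."""
    def track_event(self, name, properties):
        pass

class InMemoryTracking(TrackingBackend):
    """Collects events locally, e.g. to audit what would be sent."""
    ALLOWED_KEYS = {"command", "duration_s", "exit_code"}

    def __init__(self):
        self.events = []

    def track_event(self, name, properties):
        # Whitelist properties so sensitive identifiers never leak in.
        safe = {k: v for k, v in properties.items() if k in self.ALLOWED_KEYS}
        self.events.append((name, safe))
```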
configuration-as-code monitoring setup via dbt yaml
Elementary enables teams to define monitoring configuration (anomaly detection thresholds, freshness SLAs, alert routing) directly in dbt YAML files using the `meta` field on models and columns. This approach treats monitoring configuration as code, enabling version control, code review, and reproducible monitoring setups. Configuration includes owner tags (meta.owner), anomaly detection parameters (meta.anomaly_detection), and custom metric definitions. The dbt package reads this configuration during runs to apply monitoring logic without separate configuration files.
Unique: Enables monitoring configuration to be defined in dbt YAML files (meta field on models/columns) and version-controlled alongside dbt code. Configuration is read by Elementary dbt package during runs, treating monitoring setup as code rather than separate configuration files or UI-based settings.
vs alternatives: More integrated with dbt workflows than UI-based configuration (Soda, Great Expectations Cloud) — monitoring configuration lives in dbt YAML and is version-controlled with dbt code, enabling code review and reproducible setups.
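A minimal schema-YAML fragment shows the pattern. The `meta.owner` and `meta.anomaly_detection` keys are the ones named above; the nested parameter names and the model/column names are illustrative placeholders:

```yaml
# schema.yml -- monitoring config co-located with the model definition
models:
  - name: customers                # placeholder model name
    meta:
      owner: "@data-platform"      # owner tag, used for alert routing
      anomaly_detection:           # model-level detection parameters
        sensitivity: 3
    columns:
      - name: revenue
        meta:
          anomaly_detection:       # column-level monitoring
            metrics: [null_count, average]
```

Because this is ordinary dbt YAML, a change to a threshold shows up in the same pull request as the model it guards.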
automated data quality report generation and distribution
Elementary CLI's `report` command generates a self-contained HTML dashboard aggregating test results, anomaly detections, model performance metrics, and data lineage into a single interactive report. The `send-report` command distributes reports via Slack, Teams, email, or uploads to S3/GCS, enabling async sharing of data quality status without requiring dashboard access.
Unique: Generates fully self-contained HTML reports (no external dependencies or JavaScript CDNs) that can be emailed or archived without requiring dashboard access. Integrates test results, anomalies, and lineage into a single report rather than requiring separate tools for each view.
vs alternatives: More accessible than dbt Cloud's native reporting (works with self-hosted dbt) and more comprehensive than simple test result summaries, combining anomalies, lineage, and performance metrics. Supports multiple distribution channels (Slack, Teams, email, S3) vs single-channel alternatives.
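The self-contained-HTML idea itself is simple to sketch. The function below is illustrative only (not Elementary's report generator): styles are inlined and no external scripts are referenced, so the resulting file can be emailed, posted to Slack, or archived to S3 and still render anywhere.

```python
import html

def render_report(test_results: list[dict]) -> str:
    """Render test results as a single self-contained HTML page:
    no JavaScript CDNs, no external stylesheets."""
    rows = "\n".join(
        f"<tr><td>{html.escape(r['test'])}</td>"
        f"<td class='{html.escape(r['status'])}'>{html.escape(r['status'])}</td></tr>"
        for r in test_results
    )
    return (
        "<!DOCTYPE html><html><head><style>"
        ".fail{color:#b00}.pass{color:#080}"  # CSS inlined, no CDN
        "</style></head><body>"
        f"<table><tr><th>test</th><th>status</th></tr>{rows}</table>"
        "</body></html>"
    )
```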
multi-warehouse metadata extraction and normalization
Elementary's warehouse client layer abstracts SQL dialects across Snowflake, BigQuery, Redshift, Databricks, and Postgres, providing a unified interface for querying metadata (table schemas, row counts, freshness timestamps, column statistics). Clients handle dialect-specific syntax for information_schema queries, enabling anomaly detection and lineage analysis to work identically across warehouses without custom logic per platform.
Unique: Implements warehouse-agnostic metadata extraction via a pluggable client architecture (elementary/clients/dbt/warehouse_client.py) that normalizes SQL dialects, enabling the same dbt package to work across 5+ warehouses without conditional logic. Stores all metadata in the warehouse itself rather than external systems.
vs alternatives: More warehouse-agnostic than dbt Cloud (which requires separate integrations per warehouse) and simpler than generic metadata tools like Collibra that require custom connectors. Storing metadata in the warehouse enables SQL-based querying rather than access through external APIs.
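The pluggable-client pattern can be sketched as a small interface with per-dialect implementations. Class and method names here are assumptions for illustration, not Elementary's actual client API; the point is that anomaly-detection code sees only the interface while quoting and syntax differences stay inside each client.

```python
from abc import ABC, abstractmethod

class WarehouseClient(ABC):
    """Illustrative dialect-abstraction interface for metadata queries."""

    @abstractmethod
    def row_count_sql(self, schema: str, table: str) -> str: ...

class PostgresClient(WarehouseClient):
    def row_count_sql(self, schema, table):
        # Postgres/Redshift quote identifiers with double quotes
        return f'SELECT COUNT(*) FROM "{schema}"."{table}"'

class BigQueryClient(WarehouseClient):
    def row_count_sql(self, schema, table):
        # BigQuery quotes identifiers with backticks
        return f"SELECT COUNT(*) FROM `{schema}`.`{table}`"

def row_count_query(client: WarehouseClient, schema: str, table: str) -> str:
    # Callers stay dialect-agnostic: they only see the shared interface.
    return client.row_count_sql(schema, table)
```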
configurable alert filtering, grouping, and routing
Elementary's alerting system processes test failures and anomalies through a configuration-driven pipeline that filters alerts by severity/tags, groups related failures (e.g., all failures in a data mart), and routes to different channels (Slack, Teams, email) based on owner tags or custom rules. Alert deduplication prevents duplicate notifications for the same failure across multiple runs.
Unique: Implements alert configuration as dbt YAML (owners, tags, severity) rather than external alert management systems, enabling version control and co-location with data definitions. Deduplication logic prevents duplicate alerts for the same failure across multiple runs.
vs alternatives: More integrated with dbt than generic alerting tools (Opsgenie, PagerDuty) which require separate configuration. Simpler than ML-based alert correlation but sufficient for most data quality use cases.
+5 more capabilities