dagster vs Power Query
Side-by-side comparison to help you choose.
| Feature | dagster | Power Query |
|---|---|---|
| Type | Repository | Product |
| UnfragileRank | 30/100 | 32/100 |
| Adoption | 0 | 0 |
| Quality | 0 | 1 |
| Ecosystem | 0 | 0 |
| Match Graph | 0 | 0 |
| Pricing | Free | Paid |
| Capabilities | 14 decomposed | 18 decomposed |
| Times Matched | 0 | 0 |
Enables developers to define data assets as Python functions decorated with @asset, automatically constructing a directed acyclic graph (DAG) of dependencies through function parameter matching and explicit asset_deps declarations. The system parses asset definitions at load time, resolves dependencies via asset keys, and builds an in-memory graph representation that tracks lineage, partitioning schemes, and materialization requirements without requiring manual DAG specification.
Unique: Uses decorator-based asset definitions with automatic dependency inference via function parameters, eliminating explicit DAG construction code; integrates with Python's type system for IDE support and enables asset-centric rather than job-centric pipeline organization
vs alternatives: Simpler than Airflow's DAG construction and more asset-focused than dbt's model-only approach; provides automatic lineage without requiring separate metadata files
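The dependency-inference idea above can be sketched in plain Python: upstream assets are read from a function's parameter names and collected into a graph. This is a minimal illustration of the mechanism, not Dagster's actual internals; the `asset` decorator and `ASSET_DEPS` registry here are hypothetical.

```python
import inspect

# Hypothetical registry mapping each asset name to its inferred upstream deps.
ASSET_DEPS = {}

def asset(fn):
    # Upstream assets are inferred from the function's parameter names.
    ASSET_DEPS[fn.__name__] = list(inspect.signature(fn).parameters)
    return fn

@asset
def raw_orders():
    return [{"id": 1, "amount": 10}]

@asset
def order_totals(raw_orders):
    # Depends on raw_orders purely via its parameter name.
    return sum(row["amount"] for row in raw_orders)

def topological_order(deps):
    # DFS topological sort over the inferred dependency graph.
    order, seen = [], set()
    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for upstream in deps.get(name, []):
            visit(upstream)
        order.append(name)
    for name in deps:
        visit(name)
    return order
```

No explicit DAG is ever written down: the graph falls out of the function signatures, which is the property the capability description highlights.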
Implements a sophisticated partitioning system allowing assets to be divided across time-based (daily, hourly), static categorical, or dynamically-generated partitions, with support for multi-dimensional partitioning (e.g., date × region). The system tracks partition state, enables targeted backfills, and optimizes execution by only materializing changed partitions. Partition definitions are composable and integrate with the asset graph to automatically determine which partitions need execution.
Unique: Supports dynamic partitions that are generated at runtime via user-defined functions, enabling partition schemes that adapt to data without code changes; integrates partition state tracking directly into the asset system rather than as a separate concern
vs alternatives: More flexible than dbt's static partitioning; provides first-class support for dynamic partitions unlike Airflow's XCom-based approaches; enables efficient backfills without full DAG re-execution
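A rough model of time-based and multi-dimensional partitioning: generate one key per day, then take the cartesian product with a static dimension for a date × region scheme. Function names are illustrative, not Dagster's partition API.

```python
from datetime import date, timedelta

def daily_partition_keys(start, end):
    # One partition key per calendar day, inclusive of both endpoints.
    keys, day = [], start
    while day <= end:
        keys.append(day.isoformat())
        day += timedelta(days=1)
    return keys

def multi_partition_keys(dates, regions):
    # Cartesian product models a date x region multi-dimensional scheme.
    return [f"{d}|{r}" for d in dates for r in regions]

days = daily_partition_keys(date(2024, 1, 1), date(2024, 1, 3))
combined = multi_partition_keys(days, ["us", "eu"])
```

A targeted backfill then amounts to selecting a subset of these keys and materializing only those, rather than re-running everything.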
Tracks asset freshness (time since last materialization) and health status (latest run success/failure) via the asset health system. Freshness policies define expected materialization intervals (e.g., daily); the system compares actual freshness against policies and marks assets as stale. Health status is queryable via GraphQL and can trigger alerts via sensors. Integration with external systems (Slack, PagerDuty) enables notifications when assets become unhealthy.
Unique: Integrates freshness policies directly into asset definitions, enabling declarative SLA enforcement; computes health status from event logs without external monitoring tools
vs alternatives: More integrated than Airflow's SLA framework; provides asset-level freshness unlike dbt's model-level approach; enables automatic health tracking without external tools
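The freshness check itself is simple to model: compare the time since the last materialization against a declared maximum lag. This is a sketch of the idea, not Dagster's freshness-policy implementation.

```python
from datetime import datetime, timedelta

def is_stale(last_materialized, now, max_lag):
    # Stale when the asset has not materialized within its expected interval.
    return (now - last_materialized) > max_lag

now = datetime(2024, 6, 1, 12, 0)
# Materialized this morning against a daily policy: fresh.
fresh_result = is_stale(datetime(2024, 6, 1, 0, 0), now, timedelta(days=1))
# Last materialized two days ago against a daily policy: stale.
stale_result = is_stale(datetime(2024, 5, 30, 0, 0), now, timedelta(days=1))
```

In the real system this comparison is driven by the event log, so no external monitoring tool is needed to decide staleness.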
Provides AssetSelection API enabling programmatic selection of assets based on keys, tags, groups, or custom predicates. Selections can be composed (union, intersection, difference) and used to target specific assets for execution, backfills, or queries. The system resolves dependencies automatically, ensuring upstream assets are included in execution. Selections are queryable via GraphQL, enabling external systems to discover which assets will be executed.
Unique: Provides composable asset selection with automatic dependency resolution, enabling flexible targeting without code changes; selections are first-class objects queryable via GraphQL
vs alternatives: More flexible than Airflow's fixed DAG selection; enables tag-based targeting unlike dbt's model-level approach; supports composition operators for complex selections
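Composable selection with automatic upstream resolution can be modeled with sets: select by tag, compose with set operators, then expand the selection to include transitive upstream assets. The `DEPS`/`TAGS` data and function names below are hypothetical, not the AssetSelection API.

```python
# Toy dependency graph and tag metadata for three assets.
DEPS = {"report": {"totals"}, "totals": {"orders"}, "orders": set()}
TAGS = {"report": {"nightly"}, "totals": {"nightly"}, "orders": {"raw"}}

def by_tag(tag):
    return {name for name, tags in TAGS.items() if tag in tags}

def with_upstream(selection):
    # Expand a selection to include all transitive upstream dependencies.
    result = set(selection)
    frontier = list(selection)
    while frontier:
        for upstream in DEPS.get(frontier.pop(), set()):
            if upstream not in result:
                result.add(upstream)
                frontier.append(upstream)
    return result

nightly = by_tag("nightly")       # selects report and totals
target = with_upstream(nightly)   # pulls in orders automatically
```

Because selections are plain sets here, union, intersection, and difference come for free, mirroring the composition operators the capability describes.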
Implements a configuration system enabling assets, resources, and jobs to accept configuration dictionaries at definition or execution time. Configuration is specified via ConfigurableResource base class or @resource decorator, with schema validation via Pydantic. Environment-specific configs are loaded from YAML files or environment variables, enabling dev/staging/prod deployments without code changes. Configuration is resolved at execution time and injected into asset context.
Unique: Integrates configuration management directly into resource definitions via ConfigurableResource, enabling schema validation and environment-specific overrides without separate config files
vs alternatives: More integrated than Airflow's Variable system; provides schema validation unlike dbt's profiles.yml; enables runtime overrides without code changes
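The resolve-then-validate pattern can be sketched with environment-variable overrides and a tiny type schema. This illustrates the flow only; the `SCHEMA` dict and `APP_` prefix are made up, and real Dagster uses Pydantic-backed resources rather than this.

```python
import os

# Hypothetical schema: each config key with its expected Python type.
SCHEMA = {"host": str, "port": int}

def resolve_config(defaults, env_prefix="APP_"):
    config = dict(defaults)
    for key, typ in SCHEMA.items():
        raw = os.environ.get(env_prefix + key.upper())
        if raw is not None:
            config[key] = typ(raw)  # environment-specific overrides win
    for key, typ in SCHEMA.items():
        if not isinstance(config[key], typ):
            raise TypeError(f"{key} must be {typ.__name__}")
    return config

os.environ["APP_PORT"] = "5433"
cfg = resolve_config({"host": "localhost", "port": 5432})
```

The same code deploys to dev, staging, and prod; only the environment changes, which is the point of resolving configuration at execution time.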
Tracks asset versions based on code changes, enabling detection of when asset definitions change and triggering re-materialization of downstream assets. Asset lineage is reconstructed from event logs, showing data flow across the pipeline. Data contracts (input/output schemas) can be defined on assets, with validation at execution time to detect schema mismatches. Lineage is queryable via GraphQL and visualizable in the UI.
Unique: Integrates asset versioning directly into the asset system, enabling automatic detection of code changes and downstream re-materialization; tracks lineage from event logs without external tools
vs alternatives: More automated than dbt's version tracking; provides data contracts unlike Airflow; enables lineage reconstruction without external metadata stores
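Code-version-based staleness can be modeled by hashing each asset's source and propagating "changed" status downstream through the dependency graph. A minimal sketch under those assumptions; the helper names are illustrative.

```python
import hashlib

def code_version(source: str) -> str:
    # A content hash stands in for an asset's code version.
    return hashlib.sha256(source.encode()).hexdigest()[:12]

def stale_assets(deps, old_versions, new_versions):
    # Start with assets whose version changed, then propagate downstream.
    stale = {a for a in new_versions if new_versions[a] != old_versions.get(a)}
    moved = True
    while moved:
        moved = False
        for name, upstreams in deps.items():
            if name not in stale and upstreams & stale:
                stale.add(name)
                moved = True
    return stale

deps = {"totals": {"orders"}, "report": {"totals"}, "orders": set()}
old = {"orders": code_version("v1"), "totals": "t1", "report": "r1"}
new = {"orders": code_version("v2"), "totals": "t1", "report": "r1"}
```

Changing only `orders` marks `totals` and `report` stale too, which is the downstream re-materialization behavior the capability describes.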
Captures detailed execution events (AssetMaterializationEvent, DagsterEventType) during asset computation, including execution time, data quality metrics, row counts, and custom metadata. Events are persisted to configurable event log storage (SQLite, PostgreSQL, in-memory) and queryable via GraphQL, enabling real-time monitoring, data lineage reconstruction, and post-execution analysis without requiring external observability tools.
Unique: Implements event sourcing for asset execution, storing immutable event records that enable complete reconstruction of pipeline state; integrates metadata capture directly into the execution model rather than as post-hoc logging
vs alternatives: More comprehensive than Airflow's task logs; provides structured event queries via GraphQL unlike dbt's file-based artifacts; enables real-time monitoring without external APM tools
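The event-sourcing idea reduces to an append-only log of immutable records plus queries over it. The class names below are hypothetical stand-ins, not Dagster's event types.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class MaterializationEvent:
    # Immutable record of one asset computation, with arbitrary metadata.
    asset_key: str
    success: bool
    metadata: dict = field(default_factory=dict)

class EventLog:
    def __init__(self):
        self._events = []  # append-only; events are never mutated

    def record(self, event):
        self._events.append(event)

    def latest(self, asset_key):
        # Pipeline state (e.g. health) is derived by querying the log.
        for event in reversed(self._events):
            if event.asset_key == asset_key:
                return event
        return None

log = EventLog()
log.record(MaterializationEvent("orders", True, {"rows": 120}))
log.record(MaterializationEvent("orders", False))
```

Because state is reconstructed from the log rather than stored directly, the full execution history stays queryable after the fact.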
Provides two complementary automation mechanisms: Sensors poll external systems (databases, APIs, file systems) on a configurable interval to detect changes and trigger asset materialization, while Schedules execute assets on cron expressions or custom timing logic. Both are defined as Python functions decorated with @sensor or @schedule, integrated into the asset daemon that runs continuously to evaluate automation rules and submit runs to the executor.
Unique: Unifies schedule and sensor automation under a single declarative model with shared tick tracking; sensors maintain cursor state to avoid reprocessing, enabling efficient polling of external systems
vs alternatives: More flexible than Airflow's fixed scheduling; provides built-in sensor framework unlike dbt which relies on external orchestrators; enables event-driven automation without message queues
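The cursor mechanic that keeps sensors from reprocessing can be sketched as: each tick only looks at records past the saved cursor, then advances it. A toy model with illustrative names, not Dagster's sensor API.

```python
def sensor_tick(records, cursor):
    """Return (run requests, advanced cursor) for records past the cursor."""
    new = [r for r in records if r["id"] > cursor]
    if not new:
        return [], cursor  # nothing new: cursor stays put, no runs submitted
    runs = [f"run_for_{r['id']}" for r in new]
    return runs, max(r["id"] for r in new)

records = [{"id": 1}, {"id": 2}, {"id": 3}]
runs1, cursor = sensor_tick(records, 0)       # first tick sees all three
runs2, cursor = sensor_tick(records, cursor)  # second tick sees nothing new
```

Persisting the cursor between ticks is what makes repeated polling of an external system cheap and idempotent.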
+6 more capabilities
Construct data transformations through a visual, step-by-step interface without writing code. Users click through operations like filtering, sorting, and reshaping data, with each step automatically generating M language code in the background.
Automatically detect and assign appropriate data types (text, number, date, boolean) to columns based on content analysis. Reduces manual type-setting and catches data quality issues early.
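Content-based type detection can be approximated with a few ordered heuristics: try boolean, then number, then date, and fall back to text. The rules below are illustrative, not Power Query's actual inference logic.

```python
from datetime import date

def detect_type(values):
    # Heuristics run in priority order over string-valued cells.
    if all(v.lower() in ("true", "false") for v in values):
        return "boolean"
    if all(v.replace(".", "", 1).lstrip("-").isdigit() for v in values):
        return "number"
    try:
        for v in values:
            date.fromisoformat(v)
        return "date"
    except ValueError:
        return "text"
```

Running this per column lets a tool assign types up front and surface rows that break the inferred type as quality issues.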
Stack multiple datasets vertically to combine rows from different sources. Automatically aligns columns by name and handles mismatched schemas.
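Appending with schema alignment amounts to unioning the column names and filling gaps with nulls. A minimal sketch of that behavior:

```python
def append_tables(*tables):
    # Union of all column names, preserving first-seen order.
    columns = []
    for table in tables:
        for row in table:
            for col in row:
                if col not in columns:
                    columns.append(col)
    # Missing columns in any source become None in the stacked result.
    return [{col: row.get(col) for col in columns}
            for table in tables for row in table]

jan = [{"id": 1, "amount": 10}]
feb = [{"id": 2, "region": "eu"}]
combined = append_tables(jan, feb)
```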
Split a single column into multiple columns based on delimiters, fixed widths, or patterns. Extracts structured data from unstructured text fields.
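The delimiter case can be modeled directly: split each value, pad short rows with nulls, and replace the source column with the new ones. Column names here are illustrative.

```python
def split_column(rows, source, new_names, delimiter=","):
    out = []
    for row in rows:
        parts = row[source].split(delimiter)
        # Pad so rows with fewer parts still fill every new column.
        parts += [None] * (len(new_names) - len(parts))
        new_row = {k: v for k, v in row.items() if k != source}
        new_row.update(zip(new_names, parts))
        out.append(new_row)
    return out

rows = [{"name": "Doe,Jane"}, {"name": "Cher"}]
split = split_column(rows, "name", ["last", "first"])
```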
Convert data between wide and long formats. Pivot transforms rows into columns (aggregating values), while unpivot transforms columns into rows.
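The two reshapes are inverses of a sort: unpivot melts wide columns into attribute/value rows, and pivot regroups those rows into wide columns, aggregating (here, summing) duplicate values. A minimal model, not Power Query's M code.

```python
def unpivot(rows, id_col, value_cols):
    # Wide -> long: one output row per (id, column) pair.
    return [{id_col: row[id_col], "attribute": col, "value": row[col]}
            for row in rows for col in value_cols]

def pivot(rows, id_col, attr_col, value_col):
    # Long -> wide: attribute values become columns, summing duplicates.
    out = {}
    for row in rows:
        wide = out.setdefault(row[id_col], {id_col: row[id_col]})
        wide[row[attr_col]] = wide.get(row[attr_col], 0) + row[value_col]
    return list(out.values())

wide = [{"region": "us", "q1": 10, "q2": 20}]
long_rows = unpivot(wide, "region", ["q1", "q2"])
```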
Identify and remove duplicate rows based on all columns or specific key columns. Keeps first or last occurrence based on user preference.
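Keep-first versus keep-last dedup on key columns can be sketched with an ordered dict of seen keys:

```python
def drop_duplicates(rows, keys, keep="first"):
    seen = {}
    # For keep="last", scan in reverse so the last occurrence wins.
    ordered = rows if keep == "first" else reversed(rows)
    for row in ordered:
        key = tuple(row[k] for k in keys)
        if key not in seen:
            seen[key] = row
    result = list(seen.values())
    return result if keep == "first" else result[::-1]

rows = [{"id": 1, "v": "a"}, {"id": 1, "v": "b"}, {"id": 2, "v": "c"}]
first = drop_duplicates(rows, ["id"], keep="first")
last = drop_duplicates(rows, ["id"], keep="last")
```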
Detect, replace, and manage null or missing values in datasets. Options include removing rows, filling with defaults, or using formulas to impute values.
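The three strategies named above (remove rows, fill with a default, impute with a formula) can each be a one-liner; the mean imputation here stands in for "using formulas to impute values".

```python
def drop_missing(rows, col):
    # Strategy 1: remove rows where the column is null.
    return [r for r in rows if r[col] is not None]

def fill_default(rows, col, default):
    # Strategy 2: replace nulls with a fixed default.
    return [{**r, col: default if r[col] is None else r[col]} for r in rows]

def fill_mean(rows, col):
    # Strategy 3: impute nulls with the mean of the present values.
    present = [r[col] for r in rows if r[col] is not None]
    return fill_default(rows, col, sum(present) / len(present))

rows = [{"x": 1}, {"x": None}, {"x": 3}]
```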
Apply text operations like case conversion (upper, lower, proper), trimming whitespace, and text replacement. Standardizes text data for consistent analysis.
+10 more capabilities
Power Query scores higher overall at 32/100 vs dagster's 30/100, driven by its edge on quality; the two are tied on adoption, ecosystem, and match-graph metrics. However, dagster is free while Power Query is paid, which may make dagster the better choice for getting started.