declarative dag-based workflow definition via yaml
Dagu parses YAML files into directed acyclic graphs (DAGs) where each step is a node with dependencies explicitly declared. The engine validates the DAG structure at parse time, detects cycles, and builds an execution plan that respects task dependencies. This file-based approach eliminates the need for a UI or database schema — workflows are version-controllable text artifacts that can be audited, diffed, and reviewed like code.
Unique: File-based YAML DAG definition with zero external dependencies — workflows are plain text artifacts that can be version-controlled, diffed, and audited like code, with cycle detection at parse time rather than runtime
vs alternatives: Simpler and more portable than Airflow (no Python/database required) and more transparent than cloud-native orchestrators (Temporal, Prefect) because the entire workflow definition is a single readable YAML file
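A minimal definition under these conventions might look like the following sketch (the step names and scripts are illustrative; the `steps`, `name`, `command`, and `depends` fields follow Dagu's documented YAML schema):

```yaml
# etl.yaml — a three-step linear DAG; Dagu validates the dependency
# graph and rejects cycles at parse time, before anything runs
steps:
  - name: extract
    command: ./extract.sh
  - name: transform
    command: ./transform.sh
    depends:
      - extract        # runs only after extract succeeds
  - name: load
    command: ./load.sh
    depends:
      - transform
```

Because the whole workflow is one file, a reviewer can see the complete execution order in a diff without consulting a UI or database.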
single-binary distributed execution with local and remote task scheduling
Dagu compiles to a single Go binary that can run standalone on a laptop or scale to a distributed cluster by spawning worker processes or connecting to remote nodes. The engine uses a local scheduler for single-machine execution and supports remote task execution via SSH or custom executors. This architecture eliminates the need for separate control planes, message brokers, or container orchestration — the same binary handles both local cron-like scheduling and distributed task dispatch.
Unique: Single statically-compiled Go binary that scales from laptop to distributed cluster without external dependencies (no database, message broker, or control plane) — same binary handles local scheduling and remote task dispatch via SSH or custom executors
vs alternatives: More portable and self-contained than Airflow (no Python/database) and simpler to deploy than Kubernetes-native orchestrators (Argo, Temporal) because it's a single binary with optional remote execution rather than a distributed system requiring infrastructure setup
workflow dependency management and task ordering
Dagu enforces task ordering through explicit dependency declarations in YAML — each task specifies which tasks it depends on, creating a directed acyclic graph (DAG) of execution order. The engine validates dependencies at parse time, detects cycles, and builds an execution plan that respects the DAG. This ensures tasks run in the correct order without race conditions, and enables parallel execution of independent tasks.
Unique: Explicit dependency declaration with DAG validation and cycle detection at parse time — tasks specify their dependencies in YAML, and the engine builds an execution plan that respects the DAG and enables parallel execution of independent tasks
vs alternatives: More transparent than Airflow, where task ordering is wired up in Python code (bitshift `>>` operators between tasks) rather than declared in a YAML file, and simpler than Temporal's workflow code because dependencies are declarative
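The parallelism falls out of the graph shape. In a diamond-shaped sketch like the one below (illustrative names; `steps` and `depends` per Dagu's documented schema), the two fetch steps share no dependency, so the engine is free to run them concurrently, while the merge step waits for both:

```yaml
# diamond.yaml — fetch-a and fetch-b are independent, so they can
# execute in parallel; merge declares both as dependencies and
# therefore starts only after both have succeeded
steps:
  - name: fetch-a
    command: ./fetch_a.sh
  - name: fetch-b
    command: ./fetch_b.sh
  - name: merge
    command: ./merge.sh
    depends:
      - fetch-a
      - fetch-b
```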
workflow templating and reusable step definitions
Dagu supports defining reusable step templates that can be instantiated multiple times in a workflow with different parameters. Templates encapsulate common task patterns (e.g., 'run a Docker container', 'call an API', 'execute a script') and can be parameterized to avoid duplication. This enables DRY (Don't Repeat Yourself) workflow definitions where common patterns are defined once and reused across multiple workflows.
Unique: Built-in workflow templating with parameter substitution — reusable step templates can be defined once and instantiated multiple times with different parameters, reducing YAML duplication
vs alternatives: Simpler than Airflow's BaseOperator inheritance model (no Python code required) and more flexible than static YAML includes because templates support parameter substitution
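One way this reuse can look in practice is a parameterized child workflow invoked from a parent, sketched below. This assumes Dagu's nested-DAG step (`run`) and `KEY=VALUE` parameter substitution (`${KEY}`); the exact field shapes vary across Dagu versions, so verify against the schema your version documents:

```yaml
# deploy.yaml — the reusable "template": defined once, parameterized
params: "SERVICE=web"        # default, overridable by the caller
steps:
  - name: deploy
    command: ./deploy.sh ${SERVICE}

# --- separate file: parent.yaml ---
# instantiates the template twice with different parameters
steps:
  - name: deploy-web
    run: deploy.yaml
    params: "SERVICE=web"
  - name: deploy-api
    run: deploy.yaml
    params: "SERVICE=api"
```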
graceful shutdown and signal handling for long-running workflows
Dagu implements signal handling (SIGTERM, SIGINT) to gracefully shut down running workflows and tasks. When a shutdown signal is received, the engine attempts to stop currently executing tasks cleanly (allowing them to finish or respond to signals) rather than forcefully killing them. This enables safe workflow interruption without data corruption or orphaned processes, and supports deployment scenarios where the Dagu daemon needs to be restarted or updated.
Unique: Built-in signal handling for graceful shutdown of running workflows and tasks — the engine responds to SIGTERM/SIGINT by cleanly stopping tasks rather than forcefully killing them, enabling safe restarts and updates
vs alternatives: More robust than shell scripts (which rarely install trap handlers for clean teardown) and simpler than Kubernetes-native orchestrators (where graceful shutdown depends on termination grace periods and preStop hooks) because signal handling is built into the Dagu binary
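Shutdown behavior can be tuned per workflow. The sketch below assumes Dagu's documented `signalOnStop` step field and top-level cleanup timeout; treat the field names as assumptions to check against your version's schema:

```yaml
# graceful-stop.yaml — control which signal a stopped step receives
# and how long the engine waits for cleanup before force-killing
maxCleanUpTimeSec: 60        # grace period for steps to exit cleanly
steps:
  - name: queue-consumer
    command: ./consume_queue.sh
    signalOnStop: SIGINT     # sent to the step on stop, instead of
                             # the default termination signal
```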
durable execution with automatic retry and failure recovery
Dagu tracks task execution state (pending, running, success, failure) and persists this state to enable automatic retries, resume-on-failure, and idempotent re-execution. When a task fails, the engine can automatically retry it up to a configured limit, waiting a configured interval between attempts, or continue to the next step, depending on the step's policy. Failed workflows can be resumed from the point of failure without re-executing completed steps, enabling long-running pipelines to recover from transient failures without manual intervention.
Unique: Automatic retry and resume-on-failure with state persistence — failed workflows can be resumed from the last failed step without re-executing completed tasks, using local filesystem or external storage for durability
vs alternatives: Simpler than Temporal or Durable Task Framework (no distributed consensus required) but more robust than shell scripts with manual retry logic because state is tracked and persisted automatically
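The per-step policies above can be expressed directly in the workflow file. This sketch uses the `retryPolicy` and `continueOn` fields from Dagu's documented schema (step names and scripts are illustrative):

```yaml
# resilient.yaml — retry a flaky step, and let a non-critical step
# fail without failing the whole DAG
steps:
  - name: flaky-api-call
    command: ./call_api.sh
    retryPolicy:
      limit: 3           # retry up to 3 times on failure
      intervalSec: 30    # wait 30 seconds between attempts
  - name: optional-report
    command: ./report.sh
    depends:
      - flaky-api-call
    continueOn:
      failure: true      # downstream steps still run if this fails
```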
cron-like scheduling with time-based and event-based triggers
Dagu embeds a cron scheduler that interprets standard cron expressions (minute, hour, day, month, day-of-week) to trigger workflows on a schedule. The scheduler runs as part of the Dagu daemon and can trigger workflows based on wall-clock time or custom events. This eliminates the need for external cron daemons or scheduling services — the workflow engine itself handles scheduling, making it suitable for air-gapped environments where external services are unavailable.
Unique: Embedded cron scheduler in the Dagu binary — no external cron daemon or scheduling service required, making it suitable for air-gapped environments and simplifying deployment
vs alternatives: More portable than system cron (works on Windows with WSL, Docker, cloud VMs) and more observable than traditional cron because execution history and failures are tracked in the workflow engine
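Scheduling is declared in the same file as the workflow it triggers, using a standard five-field cron expression via Dagu's top-level `schedule` field:

```yaml
# nightly.yaml — run every day at 04:05 (minute hour day month weekday)
schedule: "5 4 * * *"
steps:
  - name: nightly-backup
    command: ./backup.sh
```

Unlike an entry in a system crontab, the schedule travels with the workflow definition, so it is versioned, diffed, and reviewed alongside the steps it drives.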
web ui and rest api for workflow monitoring and control
Dagu exposes a web dashboard and REST API that provide real-time visibility into workflow execution, task status, logs, and history. The UI displays DAG visualizations, execution timelines, and task output; the API enables programmatic workflow triggering, status queries, and log retrieval. This allows operators to monitor and control workflows without SSH access or command-line tools, and enables integration with external systems (Slack notifications, custom dashboards, alerting systems).
Unique: Built-in web dashboard and REST API in the single Dagu binary — no separate monitoring service or UI deployment required, with real-time execution visibility and programmatic workflow control
vs alternatives: More integrated than Airflow (UI is part of the same binary, not a separate Flask app) and simpler than Temporal (no separate UI service) because monitoring and control are embedded in the workflow engine
+5 more capabilities