dagu
Workflow · Free
A lightweight workflow engine built the way it should be: declarative, file-based, self-contained, air-gapped ready. One binary that scales from laptop to distributed cluster. Used as sovereign AI-agent orchestration infrastructure.
Capabilities (13 decomposed)
Declarative DAG-based workflow definition via YAML
Medium confidence: Dagu parses YAML files into directed acyclic graphs (DAGs) where each step is a node with dependencies explicitly declared. The engine validates the DAG structure at parse time, detects cycles, and builds an execution plan that respects task dependencies. This file-based approach eliminates the need for a UI or database schema — workflows are version-controllable text artifacts that can be audited, diffed, and reviewed like code.
File-based YAML DAG definition with zero external dependencies — workflows are plain text artifacts that can be version-controlled, diffed, and audited like code, with cycle detection at parse time rather than runtime
Simpler and more portable than Airflow (no Python/database required) and more transparent than cloud-native orchestrators (Temporal, Prefect) because the entire workflow definition is a single readable YAML file
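A minimal sketch of such a definition, with illustrative file, step, and script names:

```yaml
# etl.yaml: a hypothetical three-step pipeline
steps:
  - name: extract
    command: ./scripts/extract.sh
  - name: transform
    command: ./scripts/transform.sh
    depends:
      - extract
  - name: load
    command: ./scripts/load.sh
    depends:
      - transform
```

A cycle, such as extract depending on load, would be rejected when the file is parsed rather than surfacing mid-run.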
Single-binary distributed execution with local and remote task scheduling
Medium confidence: Dagu compiles to a single Go binary that can run standalone on a laptop or scale to a distributed cluster by spawning worker processes or connecting to remote nodes. The engine uses a local scheduler for single-machine execution and supports remote task execution via SSH or custom executors. This architecture eliminates the need for separate control planes, message brokers, or container orchestration — the same binary handles both local cron-like scheduling and distributed task dispatch.
Single statically-compiled Go binary that scales from laptop to distributed cluster without external dependencies (no database, message broker, or control plane) — same binary handles local scheduling and remote task dispatch via SSH or custom executors
More portable and self-contained than Airflow (no Python/database) and simpler to deploy than Kubernetes-native orchestrators (Argo, Temporal) because it's a single binary with optional remote execution rather than a distributed system requiring infrastructure setup
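A sketch of remote execution using the SSH executor; the host, user, and key path are placeholders, and the config fields should be verified against your Dagu version:

```yaml
steps:
  - name: remote-backup
    executor:
      type: ssh
      config:
        user: dagu             # placeholder remote user
        ip: 192.0.2.10         # placeholder host
        port: 22
        key: /home/dagu/.ssh/id_rsa
    command: /opt/jobs/backup.sh
```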
Workflow dependency management and task ordering
Medium confidence: Dagu enforces task ordering through explicit dependency declarations in YAML — each task specifies which tasks it depends on, creating a directed acyclic graph (DAG) of execution order. The engine validates dependencies at parse time, detects cycles, and builds an execution plan that respects the DAG. This ensures tasks run in the correct order without race conditions, and enables parallel execution of independent tasks.
Explicit dependency declaration with DAG validation and cycle detection at parse time — tasks specify their dependencies in YAML, and the engine builds an execution plan that respects the DAG and enables parallel execution of independent tasks
More transparent than Airflow's code-defined task ordering (dependencies are declared in YAML, not constructed in Python) and simpler than Temporal's workflow code because dependencies are declarative
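For example, two independent steps fan out in parallel and a third waits on both (names and commands are illustrative):

```yaml
steps:
  - name: fetch-a
    command: curl -sO https://example.com/a.csv
  - name: fetch-b
    command: curl -sO https://example.com/b.csv
  - name: merge                 # runs only after both fetches succeed
    command: ./merge.sh a.csv b.csv
    depends:
      - fetch-a
      - fetch-b
```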
Workflow templating and reusable step definitions
Medium confidence: Dagu supports defining reusable step templates that can be instantiated multiple times in a workflow with different parameters. Templates encapsulate common task patterns (e.g., 'run a Docker container', 'call an API', 'execute a script') and can be parameterized to avoid duplication. This enables DRY (Don't Repeat Yourself) workflow definitions where common patterns are defined once and reused across multiple workflows.
Built-in workflow templating with parameter substitution — reusable step templates can be defined once and instantiated multiple times with different parameters, reducing YAML duplication
Simpler than Airflow's BaseOperator inheritance model (no Python code required) and more flexible than static YAML includes because templates support parameter substitution
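A sketch assuming Dagu's nested-workflow syntax (a step's run field invoking another DAG file with params); the DAG and parameter names are illustrative:

```yaml
# Reuse one deploy workflow for two environments.
steps:
  - name: deploy-staging
    run: deploy-service        # another DAG, used as a template
    params: "ENV=staging"
  - name: deploy-prod
    run: deploy-service
    params: "ENV=prod"
    depends:
      - deploy-staging
```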
Graceful shutdown and signal handling for long-running workflows
Medium confidence: Dagu implements signal handling (SIGTERM, SIGINT) to gracefully shut down running workflows and tasks. When a shutdown signal is received, the engine attempts to stop currently executing tasks cleanly (allowing them to finish or respond to signals) rather than forcefully killing them. This enables safe workflow interruption without data corruption or orphaned processes, and supports deployment scenarios where the Dagu daemon needs to be restarted or updated.
Built-in signal handling for graceful shutdown of running workflows and tasks — the engine responds to SIGTERM/SIGINT by cleanly stopping tasks rather than forcefully killing them, enabling safe restarts and updates
More robust than plain shell scripts (which handle signals only if you write trap logic by hand) and simpler than Kubernetes-native orchestrators (which lean on liveness/readiness probes and pod lifecycle hooks) because signal handling is built into the Dagu binary
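A sketch of the relevant knobs, assuming the documented maxCleanUpTimeSec and signalOnStop fields (values are illustrative):

```yaml
maxCleanUpTimeSec: 300          # allow up to 5 minutes for cleanup on shutdown
steps:
  - name: long-running-worker
    command: python worker.py
    signalOnStop: "SIGINT"      # send SIGINT instead of the default SIGTERM
```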
Durable execution with automatic retry and failure recovery
Medium confidence: Dagu tracks task execution state (pending, running, success, failure) and persists this state to enable automatic retries, resume-on-failure, and idempotent re-execution. When a task fails, the engine can automatically retry with exponential backoff or skip to the next step based on configured policies. Failed workflows can be resumed from the point of failure without re-executing completed steps, enabling long-running pipelines to recover from transient failures without manual intervention.
Automatic retry and resume-on-failure with state persistence — failed workflows can be resumed from the last failed step without re-executing completed tasks, using local filesystem or external storage for durability
Simpler than Temporal or Durable Task Framework (no distributed consensus required) but more robust than shell scripts with manual retry logic because state is tracked and persisted automatically
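A retry policy on a single step might look like this (limits and intervals are illustrative):

```yaml
steps:
  - name: flaky-sync
    command: curl -fsS https://api.example.com/sync
    retryPolicy:
      limit: 3          # retry up to 3 times on non-zero exit
      intervalSec: 30   # wait 30 seconds between attempts
```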
Cron-like scheduling with time-based and event-based triggers
Medium confidence: Dagu embeds a cron scheduler that interprets standard cron expressions (minute, hour, day, month, day-of-week) to trigger workflows on a schedule. The scheduler runs as part of the Dagu daemon and can trigger workflows based on wall-clock time or custom events. This eliminates the need for external cron daemons or scheduling services — the workflow engine itself handles scheduling, making it suitable for air-gapped environments where external services are unavailable.
Embedded cron scheduler in the Dagu binary — no external cron daemon or scheduling service required, making it suitable for air-gapped environments and simplifying deployment
More portable than system cron (works on Windows with WSL, Docker, cloud VMs) and more observable than traditional cron because execution history and failures are tracked in the workflow engine
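Scheduling is a one-line field in the same YAML file (the expression and command are illustrative):

```yaml
schedule: "5 4 * * *"   # every day at 04:05
steps:
  - name: nightly-report
    command: ./report.sh
```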
Web UI and REST API for workflow monitoring and control
Medium confidence: Dagu exposes a web dashboard and REST API that provide real-time visibility into workflow execution, task status, logs, and history. The UI displays DAG visualizations, execution timelines, and task output; the API enables programmatic workflow triggering, status queries, and log retrieval. This allows operators to monitor and control workflows without SSH access or command-line tools, and enables integration with external systems (Slack notifications, custom dashboards, alerting systems).
Built-in web dashboard and REST API in the single Dagu binary — no separate monitoring service or UI deployment required, with real-time execution visibility and programmatic workflow control
More integrated than Airflow (UI is part of the same binary, not a separate Flask app) and simpler than Temporal (no separate UI service) because monitoring and control are embedded in the workflow engine
Task-level environment variable and parameter injection
Medium confidence: Dagu supports passing environment variables and parameters to individual tasks through YAML configuration, command-line arguments, or API calls. Variables can be defined globally (workflow-level), per-task, or dynamically from previous task outputs. The engine substitutes variables into task commands before execution, enabling parameterized workflows that adapt to different environments (dev, staging, prod) without modifying the YAML definition.
Task-level variable injection with support for output chaining — variables can be defined globally, per-task, or captured from previous task outputs, enabling parameterized workflows without hardcoding environment-specific values
Simpler than Airflow's XCom (no database required) and more flexible than shell script parameter passing because variables are managed at the workflow level with built-in substitution
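A sketch combining workflow-level env entries and overridable named params; the names, values, and the CLI override shown in the comment are illustrative and worth verifying against your version:

```yaml
env:
  - LOG_DIR: /var/log/myapp
params: TARGET_ENV=dev   # e.g. override at launch: dagu start pipeline.yaml -- TARGET_ENV=prod
steps:
  - name: build
    command: make build TARGET=${TARGET_ENV}
```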
Conditional task execution and branching logic
Medium confidence: Dagu supports conditional execution of tasks based on the exit code or output of previous tasks. Tasks can be marked as optional (continue on failure), skipped based on conditions, or executed only if upstream tasks succeed. This enables branching workflows where different paths are taken based on runtime conditions, without requiring explicit if-then-else constructs — the DAG structure itself encodes the branching logic through task dependencies and conditions.
Conditional execution encoded in DAG structure through task dependencies and exit code conditions — no explicit if-then-else constructs, enabling simple branching logic without adding control flow complexity
Simpler than Airflow's BranchPythonOperator (no Python code required) and more transparent than Temporal's workflow code because conditions are declarative in YAML
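A sketch of both mechanisms: continueOn lets a failure fall through, and preconditions gate a step on a runtime check (commands are illustrative):

```yaml
steps:
  - name: optional-cleanup
    command: ./cleanup.sh
    continueOn:
      failure: true        # downstream steps still run if this fails
  - name: monthly-task
    command: ./monthly.sh
    depends:
      - optional-cleanup
    preconditions:
      - condition: "`date '+%d'`"   # command substitution at run time
        expected: "01"              # run only on the first of the month
```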
Task output capture and inter-task communication
Medium confidence: Dagu captures stdout/stderr from task execution and makes it available to downstream tasks through variable substitution or API queries. Tasks can write structured output (JSON, key=value pairs) that is parsed and injected as environment variables for subsequent tasks. This enables data flow through the workflow — one task produces output that becomes input to the next task, without requiring external message queues or databases.
Built-in output capture and variable injection for inter-task communication — tasks write to stdout and Dagu automatically parses and injects output as environment variables for downstream tasks, enabling data flow without external storage
Simpler than Airflow's XCom (no database required) and more direct than message queue-based systems because data flows through environment variables and stdout parsing
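A sketch of output chaining (step names and commands are illustrative):

```yaml
steps:
  - name: get-version
    command: cat VERSION
    output: APP_VERSION    # stdout is captured into this variable
  - name: tag-release
    command: git tag "v${APP_VERSION}"
    depends:
      - get-version
```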
Workflow execution history and audit logging
Medium confidence: Dagu maintains a persistent execution history of all workflow runs, including task status, exit codes, start/end times, and logs. This history is queryable via the REST API and web UI, enabling audit trails, performance analysis, and debugging. The engine stores execution metadata (who triggered the workflow, when, with what parameters) and task-level details (duration, resource usage if available), providing full observability into workflow behavior over time.
Built-in execution history and audit logging in the Dagu binary — no separate logging service required, with queryable history via REST API and web UI for compliance and debugging
More integrated than Airflow (history is part of the same binary, not a separate database) and simpler than enterprise logging systems (ELK, Splunk) because history is managed locally by the workflow engine
Custom executor plugins for task execution
Medium confidence: Dagu supports custom executors that define how tasks are executed — beyond the default shell command execution. Executors can be implemented as external programs or plugins that receive task definitions and execute them in custom environments (Docker containers, Kubernetes pods, remote services, custom runtimes). This enables Dagu to orchestrate tasks across heterogeneous infrastructure without modifying the core engine or workflow definitions.
Pluggable executor architecture enabling custom task execution environments — executors can be external programs that receive task definitions and execute them in custom runtimes (Docker, Kubernetes, serverless) without modifying the core engine
More flexible than Airflow's operator model (executors are external, not Python classes) and simpler than Temporal's worker model because executors are decoupled from the workflow engine
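For example, the bundled Docker executor runs a step inside a container (image and command are illustrative):

```yaml
steps:
  - name: containerized-step
    executor:
      type: docker
      config:
        image: alpine:3
        autoRemove: true   # remove the container when the step finishes
    command: echo "running inside a container"
```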
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with dagu, ranked by overlap. Discovered automatically through the match graph.
ms-agent
MS-Agent: a lightweight framework to empower agentic execution of complex tasks
txtai
All-in-one open-source AI framework for semantic search, LLM orchestration and language model workflows
agents-shire
AI agent orchestration platform
Argo Workflows
Kubernetes-native workflow engine.
Portia AI
Open source framework for building agents that pre-express their planned actions, share their progress and can be interrupted by a human....
crewai
JavaScript implementation of the Crew AI Framework
Best For
- ✓DevOps teams managing CI/CD pipelines without cloud dependencies
- ✓AI agent orchestration teams needing sovereign infrastructure
- ✓Data engineers building ETL pipelines on-premises
- ✓Solo developers automating multi-step tasks on laptops or servers
- ✓Teams deploying to air-gapped or on-premises infrastructure
- ✓Startups avoiding cloud costs and complexity
- ✓DevOps engineers managing heterogeneous infrastructure (bare metal, VMs, Kubernetes)
- ✓Organizations requiring full control over execution environment
Known Limitations
- ⚠No built-in UI for visual workflow design — YAML editing required
- ⚠DAG validation happens at parse time; runtime dependency injection not supported
- ⚠No native support for dynamic step generation based on runtime data (fan-out patterns require workarounds)
- ⚠YAML syntax errors require manual debugging; no schema validation IDE integration by default
- ⚠No built-in high-availability or automatic failover — single scheduler instance is a potential bottleneck
- ⚠Remote execution via SSH requires pre-configured SSH keys and network connectivity
Repository Details
Last commit: Apr 22, 2026