Dolma vs cua
Side-by-side comparison to help you choose.
| Feature | Dolma | cua |
|---|---|---|
| Type | Dataset | Agent |
| UnfragileRank | 46/100 | 53/100 |
| Adoption | 1 | 1 |
| Quality | 0 | 1 |
| Ecosystem | 0 | 1 |
| Match Graph | 0 | 0 |
| Pricing | Free | Free |
| Capabilities | 10 decomposed | 15 decomposed |
| Times Matched | 0 | 0 |
Aggregates 3 trillion tokens from 7 heterogeneous sources (Common Crawl, The Stack, peS2o, Project Gutenberg, Wikipedia, Wikibooks, C4) into a unified pretraining dataset with published filtering rules, deduplication strategies, and source mixing ratios. The assembly process applies source-specific quality filters and fuzzy deduplication via Duplodocus before combining sources at documented proportions, enabling reproducible dataset composition for LLM training.
Unique: Dolma publishes exact filtering rules, deduplication methods (via Duplodocus fuzzy matching), and source mixing ratios alongside the dataset itself, enabling researchers to independently audit and reproduce curation decisions: a level of transparency uncommon in large pretraining corpora, where composition details are typically proprietary.
vs alternatives: More transparent and reproducible than proprietary datasets (GPT-3, Chinchilla) and more comprehensively documented than C4 alone, with explicit multi-source composition and published deduplication strategies.
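The source-mixing step described above can be sketched as weighted sampling over per-source document streams. The ratios below are hypothetical placeholders for illustration, not Dolma's published proportions:

```python
import random

# Hypothetical mixing ratios for illustration only; Dolma's published
# proportions differ and vary by release.
MIX_RATIOS = {
    "common_crawl": 0.60,
    "the_stack": 0.15,
    "pes2o": 0.10,
    "gutenberg": 0.05,
    "wikipedia_wikibooks": 0.05,
    "c4": 0.05,
}

def compose(ratios, docs_by_source, n_docs, seed=0):
    """Interleave documents by sampling a source per slot at the target
    proportions, so the composed stream matches the documented mix."""
    rng = random.Random(seed)
    sources, weights = zip(*ratios.items())
    iters = {s: iter(docs) for s, docs in docs_by_source.items()}
    out = []
    while len(out) < n_docs:
        source = rng.choices(sources, weights=weights, k=1)[0]
        try:
            out.append((source, next(iters[source])))
        except StopIteration:
            return out  # a source ran dry; stop rather than skew the mix
    return out
```

Publishing the ratio table alongside the sampler seed is what makes a composition like this reproducible by third parties.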
Applies efficient fuzzy deduplication across the 3-trillion-token corpus using the Duplodocus tool, which identifies and removes near-duplicate documents within and across source domains without requiring exact string matching. The fuzzy-matching approach reduces redundancy while preserving legitimate diversity, and it scales to the full dataset volume without prohibitive computational overhead.
Unique: Duplodocus performs fuzzy (approximate) deduplication rather than exact-match deduplication, enabling removal of near-duplicates and paraphrased content while scaling to 3 trillion tokens; most commodity deduplication tools use exact matching or simple hashing, which miss semantic redundancy.
vs alternatives: More efficient than naive pairwise comparison and more comprehensive than exact-match deduplication, though specific algorithmic advantages over MinHash- or LSH-based approaches are not documented.
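Duplodocus's internals aren't spelled out here, but fuzzy deduplication at this scale is typically built on MinHash signatures over document shingles. The following is a minimal all-pairs sketch of that general idea, not Duplodocus's actual algorithm:

```python
import hashlib

# Illustrative MinHash-based fuzzy dedup; Duplodocus's real algorithm and
# parameters are not documented in this comparison.
def shingles(text, k=5):
    """Set of character k-grams (word n-grams are a common alternative)."""
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def minhash(shingle_set, num_hashes=64):
    """One minimum per keyed hash function; the agreement rate between two
    signatures approximates the Jaccard similarity of the shingle sets."""
    sig = []
    for seed in range(num_hashes):
        key = seed.to_bytes(8, "big")
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(s.encode(), digest_size=8, key=key).digest(),
                "big")
            for s in shingle_set))
    return sig

def est_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def dedup(docs, threshold=0.8):
    """Keep each doc only if it is not a near-duplicate of one already kept.
    This all-pairs scan is O(n^2); at corpus scale, signatures are bucketed
    with locality-sensitive hashing instead."""
    kept, sigs = [], []
    for doc in docs:
        sig = minhash(shingles(doc))
        if all(est_jaccard(sig, prev) < threshold for prev in sigs):
            kept.append(doc)
            sigs.append(sig)
    return kept
```

Because the signatures are fixed-size regardless of document length, near-duplicates are caught even when the texts differ by insertions or paraphrase, which exact hashing misses.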
Applies domain-specific quality filters and cleaning rules to each of the 7 source corpora using the Datamap-rs tool, which performs large-scale text normalization, content filtering, and quality assessment. The tool enables source-specific filtering strategies (e.g., code quality metrics for The Stack, academic rigor for peS2o) while maintaining computational efficiency across the full 3-trillion-token dataset.
Unique: Datamap-rs enables source-specific filtering strategies within a single pipeline, allowing different quality thresholds and content criteria for web text vs. code vs. academic papers vs. books, rather than applying uniform filters across all sources.
vs alternatives: More flexible than generic text cleaning tools (e.g., ftfy, NFKD normalization) by supporting domain-specific quality metrics, though specific filtering algorithms and thresholds are not publicly documented.
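The per-source filtering idea can be sketched as a dispatch table of source-specific predicates. The filters and thresholds below are hypothetical, since Datamap-rs's actual rules aren't documented in this comparison:

```python
# Hypothetical per-source filters; Dolma's published rules and Datamap-rs's
# real configuration are more involved than this.
def filter_web(doc):
    """Drop very short web pages, a crude proxy for boilerplate."""
    return len(doc.split()) >= 50

def filter_code(doc):
    """Drop files dominated by one very long line (minified/generated code)."""
    lines = doc.splitlines() or [doc]
    return max(len(line) for line in lines) < 1000

SOURCE_FILTERS = {
    "common_crawl": filter_web,
    "the_stack": filter_code,
}

def clean(source, docs):
    """Apply the source's own filter; unknown sources pass through."""
    keep = SOURCE_FILTERS.get(source, lambda _doc: True)
    return [doc for doc in docs if keep(doc)]
```

The dispatch-table shape is what lets one pipeline hold different quality bars for web text, code, and academic papers without branching logic scattered through the cleaning code.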
Provides multiple pretraining dataset variants (Standard Pool, Long Context Mix) with different source mixing ratios optimized for different training objectives. The variants are pre-composed and documented, allowing researchers to select a dataset variant matching their training goals without manually adjusting source proportions. The composition strategy reflects decisions about optimal balance between web text, code, academic content, and other domains.
Unique: Dolma provides pre-composed, documented dataset variants with explicit source mixing ratios rather than requiring users to manually combine sources or tune proportions, reducing configuration complexity and enabling reproducible comparisons across research teams.
vs alternatives: More structured than ad-hoc dataset composition and more transparent than proprietary models' undocumented mixing strategies, though less flexible than fully customizable composition systems.
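A pre-composed variant catalog might look like the following sketch. The variant names echo the ones mentioned above, but the ratios are invented placeholders:

```python
# Invented ratios for illustration; the real Dolma variants and their
# proportions are documented with each release.
VARIANTS = {
    "standard_pool": {"web": 0.70, "code": 0.15, "academic": 0.10, "books": 0.05},
    "long_context_mix": {"web": 0.45, "code": 0.15, "academic": 0.20, "books": 0.20},
}

def get_variant(name):
    """Look up a pre-composed variant and sanity-check its mixing ratios."""
    ratios = VARIANTS[name]
    total = sum(ratios.values())
    if abs(total - 1.0) > 1e-9:
        raise ValueError(f"ratios for {name!r} sum to {total}, expected 1.0")
    return ratios
```

Selecting a named, validated variant rather than hand-tuning proportions is what makes cross-team comparisons reproducible.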
Enables researchers to trace model outputs back to specific training documents and source domains using the OlmoTrace tool, which maps model predictions to the training data that influenced them. This capability supports interpretability research, bias analysis, and data attribution by linking model behavior to specific training examples and sources within the Dolma corpus.
Unique: OlmoTrace integrates with Dolma's documented source composition and deduplication metadata to enable fine-grained tracing of model behavior to specific training sources, leveraging the dataset's transparency to support interpretability research that would be impossible with proprietary training data.
vs alternatives: More practical than generic influence functions because it leverages Dolma's explicit source composition and deduplication metadata; more comprehensive than document-level attribution because it can trace to specific source domains and filtering decisions.
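One simple way to approximate this kind of training-data attribution is an inverted index from n-grams to training documents. The sketch below conveys the lookup structure only and is not OlmoTrace's actual algorithm:

```python
from collections import defaultdict

# Toy n-gram attribution index; OlmoTrace's real method is more
# sophisticated, but the lookup structure conveys the idea.
def ngrams(tokens, n=3):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def build_index(corpus):
    """Map each n-gram to the (source, doc_id) pairs containing it."""
    index = defaultdict(set)
    for source, doc_id, text in corpus:
        for gram in ngrams(text.split()):
            index[gram].add((source, doc_id))
    return index

def trace(model_output, index):
    """Rank training documents by how many verbatim n-grams they share
    with a model output."""
    hits = defaultdict(int)
    for gram in ngrams(model_output.split()):
        for location in index.get(gram, ()):
            hits[location] += 1
    return sorted(hits.items(), key=lambda item: -item[1])
```

Because each index entry carries the source name, the same lookup attributes an output to a source domain (web, code, academic) as well as to individual documents.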
Identifies and removes test set data from the pretraining corpus using the Decon tool, which detects overlap between training data and evaluation benchmarks. This prevents data leakage that would artificially inflate model performance on standard benchmarks, ensuring that reported model performance reflects genuine capability rather than memorization of test examples.
Unique: Decon is specifically designed for pretraining dataset curation and integrates with Dolma's documented source composition, enabling systematic detection and removal of benchmark contamination before training rather than post-hoc analysis of model performance.
vs alternatives: More proactive than post-training contamination analysis and more comprehensive than manual benchmark checking, though specific detection algorithms and benchmark coverage are not documented.
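Benchmark decontamination is commonly implemented by flagging training documents that share long verbatim n-grams with test sets. The sketch below shows that general approach; Decon's actual detection algorithm is not documented here:

```python
# Generic n-gram overlap decontamination sketch; Decon's real detection
# algorithm and benchmark coverage are not documented in this comparison.
def token_ngrams(text, n):
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def decon(docs, benchmark_texts, n=8):
    """Drop any training doc sharing a verbatim n-gram with a benchmark."""
    bench = set()
    for text in benchmark_texts:
        bench |= token_ngrams(text, n)
    return [doc for doc in docs if not (token_ngrams(doc, n) & bench)]
```

Running this before training removes leaked test items outright, whereas post-hoc contamination analysis can only estimate how much leakage inflated reported scores.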
Integrates Dolma with the OlmoCore training framework, which provides fast, easy configuration for pretraining language models with documented data composition, hyperparameters, and training procedures. The framework enables researchers to reproduce model training exactly by specifying dataset variant, mixing ratios, and training configuration, supporting fully reproducible LLM development from data through model weights.
Unique: OlmoCore is designed specifically for reproducible pretraining with Dolma, providing integrated configuration management for dataset composition, deduplication, filtering, and training hyperparameters in a single framework rather than requiring manual orchestration of separate tools.
vs alternatives: More integrated and reproducible than generic training frameworks (Hugging Face Transformers, DeepSpeed) because it bundles Dolma's documented data curation with training configuration; more transparent than proprietary training pipelines that don't expose data composition or filtering decisions.
Provides the OLMES utility for running reproducible evaluations on models trained with Dolma and OlmoCore, enabling standardized benchmark testing with documented evaluation procedures. The utility ensures consistent evaluation methodology across research teams and model variants, supporting fair performance comparisons and preventing evaluation methodology drift.
Unique: OLMES is designed specifically for evaluating models trained with Dolma and OlmoCore, providing integrated evaluation procedures that document benchmark selection, metric definitions, and evaluation methodology to support reproducible model comparison.
vs alternatives: More integrated with Dolma/OlmoCore than generic evaluation frameworks (lm-evaluation-harness) and more transparent about evaluation procedures than proprietary model evaluation, though specific benchmarks and metrics are not documented.
+2 more capabilities
Captures desktop screenshots and feeds them to 100+ integrated vision-language models (Claude, GPT-4V, Gemini, local models via adapters) to reason about UI state and determine appropriate next actions. Uses a unified message format (Responses API) across heterogeneous model providers, enabling the agent to understand visual context and generate structured action commands without brittle selector-based logic.
Unique: Implements a unified Responses API message format abstraction layer that normalizes outputs from 100+ heterogeneous VLM providers (native computer-use models like Claude, composed models via grounding adapters, and local model adapters), eliminating provider-specific parsing logic and enabling seamless model swapping without agent code changes.
vs alternatives: Broader model coverage and provider flexibility than Anthropic's native computer-use API alone, with explicit support for local/open-source models and a standardized message format that decouples agent logic from model implementation details.
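The normalization layer can be illustrated with a toy adapter that maps two provider-specific tool-call shapes onto one message format. Both the raw payload shapes and the unified schema below are simplified stand-ins, not cua's actual Responses API:

```python
import json

# Simplified stand-ins for provider payloads and the unified message shape;
# cua's real Responses API schema is richer than this.
def normalize(provider, raw):
    """Convert a provider-specific tool-call payload into one message shape,
    so agent code never branches on the provider."""
    if provider == "anthropic":
        return {"role": "assistant",
                "action": raw["tool_use"]["name"],
                "args": raw["tool_use"]["input"]}
    if provider == "openai":
        call = raw["tool_calls"][0]["function"]
        return {"role": "assistant",
                "action": call["name"],
                "args": json.loads(call["arguments"])}
    raise ValueError(f"no adapter registered for provider {provider!r}")
```

Once every provider's output collapses to one shape, swapping models is a configuration change rather than a code change in the agent loop.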
Provisions isolated execution environments across macOS (via Lume VMs), Linux (Docker), Windows (Windows Sandbox), and host OS, with unified provider abstraction. Handles VM/container lifecycle (creation, snapshot management, cleanup), resource allocation, and OS-specific action handlers (keyboard/mouse events, clipboard, file system access) through a pluggable provider architecture that abstracts platform differences.
Unique: Implements a pluggable provider architecture with unified Computer interface that abstracts OS-specific action handlers (macOS native events via Lume, Linux X11/Wayland via Docker, Windows input simulation via Windows Sandbox API), enabling single agent code to target multiple platforms. Includes Lume VM management with snapshot/restore capabilities for deterministic testing.
vs alternatives: More comprehensive OS coverage than single-platform solutions; Lume provider offers native macOS VM support with snapshot capabilities unavailable in Docker-only alternatives, while unified provider abstraction reduces code duplication vs. platform-specific agent implementations.
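The pluggable provider pattern can be sketched as an abstract Computer interface plus interchangeable backends. Method names here are illustrative rather than cua's real API, and the fake provider stands in for a Lume, Docker, or Windows Sandbox backend:

```python
from abc import ABC, abstractmethod

class Computer(ABC):
    """Unified interface the agent targets; each provider supplies the
    OS-specific implementation. Method names are illustrative."""
    @abstractmethod
    def screenshot(self) -> bytes: ...
    @abstractmethod
    def click(self, x: int, y: int) -> None: ...
    @abstractmethod
    def type_text(self, text: str) -> None: ...

class FakeProvider(Computer):
    """In-memory stand-in for a Lume/Docker/Windows Sandbox backend."""
    def __init__(self):
        self.log = []
    def screenshot(self):
        self.log.append("screenshot")
        return b"fake-png-bytes"
    def click(self, x, y):
        self.log.append(f"click {x},{y}")
    def type_text(self, text):
        self.log.append(f"type {text!r}")

def run_actions(computer: Computer, actions):
    """Agent code sees only the Computer interface, never the platform."""
    for action in actions:
        getattr(computer, action["op"])(*action.get("args", []))
```

Because the agent only depends on the abstract interface, one agent codebase can target every platform a provider exists for.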
cua scores higher at 53/100 vs Dolma at 46/100. The two tie on adoption, while cua is stronger on quality and ecosystem.
Provides Lume provider for provisioning and managing macOS virtual machines with native support for snapshot creation, restoration, and cleanup. Handles VM lifecycle (boot, shutdown, resource allocation) with optimized startup times. Integrates with image registry for VM image management and caching. Supports both Apple Silicon and Intel Macs. Enables deterministic testing through snapshot-based environment reset between agent runs.
Unique: Implements Lume provider with native macOS VM management including snapshot/restore capabilities for deterministic testing, optimized startup times, and image registry integration. Supports both Apple Silicon and Intel Macs with unified provider interface.
vs alternatives: More efficient than Docker for macOS because Lume uses native virtualization (Virtualization Framework) vs. Docker's slower emulation; snapshot/restore enables faster environment reset vs. full VM recreation.
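Snapshot-based deterministic testing follows a simple pattern: snapshot a clean state once, then restore it between runs. The sketch below models this with an in-memory stand-in; real Lume snapshots persist VM disk and memory state:

```python
import copy

class FakeVM:
    """Minimal stand-in for a Lume-managed VM; real snapshots persist disk
    and memory state, not a Python dict."""
    def __init__(self):
        self.state = {"files": []}
        self._snapshots = {}
    def snapshot(self, name):
        self._snapshots[name] = copy.deepcopy(self.state)
    def restore(self, name):
        self.state = copy.deepcopy(self._snapshots[name])

def run_with_reset(vm, agent_runs):
    """Snapshot once, restore between runs: every run starts identically."""
    vm.snapshot("clean")
    results = []
    for run in agent_runs:
        run(vm)  # the run may mutate VM state arbitrarily
        results.append(copy.deepcopy(vm.state))
        vm.restore("clean")
    return results
```

Restoring a snapshot is also what makes this faster than full VM recreation: the expensive boot happens once, and every subsequent run reuses it.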
Provides command-line interface (CLI) for quick-start agent execution, configuration, and testing without writing code. Includes Gradio-based web UI for interactive agent control, real-time monitoring, and trajectory visualization. CLI supports task specification, model selection, environment configuration, and result export. Web UI enables non-technical users to run agents and view execution traces with HUD visualization.
Unique: Implements both CLI and Gradio web UI for agent execution, with CLI supporting quick-start scenarios and web UI enabling interactive control and real-time monitoring with HUD visualization. Reduces barrier to entry for non-technical users.
vs alternatives: More accessible than SDK-only frameworks because CLI and web UI enable non-developers to run agents; Gradio integration provides quick UI prototyping vs. custom web development.
Implements Docker provider for running agents in containerized Linux environments with full isolation. Handles container lifecycle (creation, cleanup), image management, and volume mounting for persistent storage. Supports custom Dockerfiles for environment customization. Provides X11/Wayland display server integration for GUI application interaction. Enables reproducible agent execution across different host systems.
Unique: Implements Docker provider with X11/Wayland display server integration for GUI application interaction, container lifecycle management, and custom Dockerfile support. Enables reproducible agent execution across different host systems with container isolation.
vs alternatives: More lightweight than VMs because Docker uses container isolation vs. full virtualization; X11 integration enables GUI application support vs. headless-only alternatives.
Implements Windows Sandbox provider for isolated agent execution on Windows 10/11 Pro/Enterprise, and host provider for direct OS execution. Windows Sandbox provider creates ephemeral sandboxed environments with automatic cleanup. Host provider enables direct agent execution on live Windows system without isolation. Both providers support native Windows input simulation (SendInput API) and clipboard operations. Handles Windows-specific action execution (window management, registry access).
Unique: Implements both Windows Sandbox provider (ephemeral isolated environments with automatic cleanup) and host provider (direct OS execution) with native Windows input simulation (SendInput API) and clipboard support. Handles Windows-specific action execution including window management.
vs alternatives: Windows Sandbox provides better isolation than host execution while avoiding VM overhead; native SendInput API enables more reliable input simulation than generic input methods.
Implements comprehensive telemetry and logging infrastructure capturing agent execution metrics (latency, token usage, action success rate), errors, and performance data. Supports structured logging with contextual information (task ID, agent ID, timestamp). Integrates with external monitoring systems (e.g., Datadog, CloudWatch) for centralized observability. Provides error categorization and automatic error recovery suggestions. Enables debugging through detailed execution logs with configurable verbosity levels.
Unique: Implements structured telemetry and logging system with contextual information (task ID, agent ID, timestamp), error categorization, and automatic error recovery suggestions. Integrates with external monitoring systems for centralized observability.
vs alternatives: More comprehensive than basic logging because it captures metrics and structured context; integration with external monitoring enables centralized observability vs. log file analysis.
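Structured logging with per-record agent context can be sketched with a JSON formatter that attaches task and agent IDs. This is a generic illustration, not cua's actual telemetry code:

```python
import json
import logging

# Generic structured-logging illustration, not cua's actual telemetry code.
class JsonFormatter(logging.Formatter):
    """Emit one JSON object per record, carrying agent context fields."""
    def format(self, record):
        return json.dumps({
            "ts": record.created,
            "level": record.levelname,
            "msg": record.getMessage(),
            "task_id": getattr(record, "task_id", None),
            "agent_id": getattr(record, "agent_id", None),
        })

def agent_logger(stream):
    """Logger whose JSON records can be shipped to any log aggregator."""
    logger = logging.getLogger("agent.demo")
    logger.handlers.clear()
    handler = logging.StreamHandler(stream)
    handler.setFormatter(JsonFormatter())
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    logger.propagate = False
    return logger
```

One-JSON-object-per-line output is what lets external monitoring systems index fields like `task_id` directly instead of regex-parsing free-text logs.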
Implements the core agent loop (screenshot → LLM reasoning → action execution → repeat) via the ComputerAgent class, with pluggable callback system and custom loop support. Developers can override loop behavior at multiple extension points: custom agent loops (modify reasoning/action selection), custom tools (add domain-specific actions), and callback hooks (inject monitoring/logging). Supports both synchronous and asynchronous execution patterns.
Unique: Provides a callback-based extension system with multiple hook points (pre/post action, loop iteration, error handling) and explicit support for custom agent loop subclassing, allowing developers to override core loop logic without forking the framework. Supports both native computer-use models and composed models with grounding adapters.
vs alternatives: More flexible than frameworks with fixed loop logic; callback system enables non-invasive monitoring/logging vs. requiring loop subclassing, while custom loop support accommodates novel agent architectures that standard loops cannot express.
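The loop-plus-callbacks design can be sketched as follows. Class and event names are illustrative; the real ComputerAgent API differs and also supports asynchronous execution:

```python
# Sketch of the screenshot -> reason -> act loop with callback hooks.
# Names (AgentLoop, pre_action/post_action) are illustrative, not cua's API.
class AgentLoop:
    def __init__(self, computer, model, callbacks=None, max_steps=10):
        self.computer = computer      # anything exposing screenshot/click/...
        self.model = model            # fn(task, screenshot) -> action dict
        self.callbacks = callbacks or []
        self.max_steps = max_steps

    def _emit(self, event, **info):
        for cb in self.callbacks:     # non-invasive monitoring/logging hooks
            cb(event, **info)

    def run(self, task):
        for step in range(self.max_steps):
            shot = self.computer.screenshot()
            action = self.model(task, shot)
            self._emit("pre_action", step=step, action=action)
            if action["op"] == "done":
                return action.get("result")
            getattr(self.computer, action["op"])(*action.get("args", []))
            self._emit("post_action", step=step, action=action)
        raise TimeoutError("max_steps reached without a 'done' action")

class StubComputer:
    """Test double; a real provider would drive a VM, container, or host OS."""
    def __init__(self):
        self.log = []
    def screenshot(self):
        return b"fake-png"
    def click(self, x, y):
        self.log.append(("click", x, y))
```

Callbacks observe the loop without subclassing it, while novel agent architectures can still override `run` itself.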
+7 more capabilities