RT-2 vs cua
Side-by-side comparison to help you choose.
| Feature | RT-2 | cua |
|---|---|---|
| Type | Model | Agent |
| UnfragileRank | 42/100 | 53/100 |
| Adoption | 1 | 1 |
| Quality | 0 | 1 |
| Ecosystem | 0 | 1 |
| Match Graph | 0 | 0 |
| Pricing | Free | Free |
| Capabilities | 10 decomposed | 15 decomposed |
| Times Matched | 0 | 0 |
RT-2 maps robot observations (images) and natural language commands directly to executable robot actions using a transformer-based vision-language-action architecture co-trained on Internet-scale vision-language data alongside robot trajectory data. Actions are represented as discrete text tokens integrated into the language model's vocabulary, enabling the model to reason semantically about visual scenes and language before outputting action sequences. This approach transfers web-scale knowledge (VQA, visual reasoning) to robotic control without requiring explicit action-space engineering.
Unique: Represents robot actions as discrete text tokens within the language model vocabulary, enabling joint training on Internet-scale vision-language tasks (VQA, visual reasoning) alongside robot trajectories — this co-training approach transfers web-scale semantic knowledge directly to robotic control without separate action space modules or explicit policy networks.
vs alternatives: Achieves better generalization to novel objects and out-of-distribution commands than prior robot learning approaches by leveraging pre-trained vision-language models' semantic understanding, rather than training robot policies from scratch on limited robot data.
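As a rough sketch of the action-as-text-tokens idea (the bin count, value range, and action layout below are illustrative placeholders, not RT-2's exact scheme), each continuous action dimension can be discretized into integer bins and emitted as ordinary text:

```python
import numpy as np

# Illustrative sketch: each continuous action dimension is discretized into a
# fixed number of bins, and each bin index is emitted as a text token the
# language model already knows how to predict. Values here are placeholders.
NUM_BINS = 256

def tokenize_action(action: np.ndarray, low: float = -1.0, high: float = 1.0) -> str:
    """Map a continuous action vector (e.g. [dx, dy, dz, droll, dpitch, dyaw, grip])
    to a space-separated string of integer bin indices."""
    clipped = np.clip(action, low, high)
    bins = np.round((clipped - low) / (high - low) * (NUM_BINS - 1)).astype(int)
    return " ".join(str(b) for b in bins)

def detokenize_action(tokens: str, low: float = -1.0, high: float = 1.0) -> np.ndarray:
    """Invert the mapping: bin indices back to (approximate) continuous values."""
    bins = np.array([int(t) for t in tokens.split()])
    return low + bins / (NUM_BINS - 1) * (high - low)

# The model then treats "132 87 201 ..." like any other text it generates, so
# web-scale language pre-training and action prediction share one vocabulary.
print(tokenize_action(np.array([0.1, -0.2, 0.0, 0.05, 0.0, 0.0, 1.0])))
```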
RT-2 generalizes to natural language commands not present in its robot training data by applying semantic reasoning learned from Internet-scale vision-language tasks. The model interprets novel command phrasings (e.g., 'place the object on the icon' or 'place it on the number 5') by decomposing them into visual and semantic concepts it has learned from VQA and general vision-language co-training, then mapping those concepts to appropriate robot actions. This capability emerges from the co-training approach rather than explicit command parsing or semantic slot-filling.
Unique: Achieves out-of-distribution command understanding through co-training on Internet-scale vision-language tasks rather than explicit semantic parsing or slot-filling — the model learns to map novel command phrasings to actions by reasoning about visual and semantic concepts learned from VQA and general vision-language data.
vs alternatives: Outperforms template-based or slot-filling approaches for novel command phrasings because it leverages semantic understanding from web-scale vision-language pre-training rather than relying on hand-crafted command grammars or limited robot-specific training data.
RT-2 performs chain-of-thought reasoning over visual observations and natural language instructions to decompose complex manipulation tasks into sub-goals and select appropriate actions. For example, when instructed to 'use an improvised hammer to break something,' the model reasons about which object could serve as a hammer, how to grasp it, and how to apply it — this reasoning emerges from the transformer's ability to process visual and linguistic context jointly. The text-token action representation allows the model to express intermediate reasoning steps as part of the action sequence.
Unique: Encodes multi-stage reasoning as part of the action token sequence rather than as separate planning or reasoning modules — the transformer jointly processes visual observations, language instructions, and intermediate reasoning steps to produce coherent multi-step action plans.
vs alternatives: Integrates reasoning and action planning end-to-end within a single transformer model, avoiding the need for separate planning modules or explicit task decomposition logic, and leveraging semantic understanding from vision-language pre-training to reason about novel task scenarios.
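A minimal illustration of how reasoning and action tokens can share one output sequence; the "Plan:"/"Action:" markers and the token values are hypothetical formatting, not RT-2's literal output:

```python
# Illustrative parse of a chain-of-thought-style VLA output, where the model
# first emits a short natural-language plan and then the action token string.
def split_plan_and_action(model_output: str) -> tuple[str, str]:
    plan, _, action = model_output.partition("Action:")
    return plan.replace("Plan:", "").strip(), action.strip()

example = ("Plan: the rock can serve as an improvised hammer; grasp it. "
           "Action: 1 128 91 241 5 101 127")
plan, action_tokens = split_plan_and_action(example)
print(plan)           # intermediate reasoning expressed as plain text
print(action_tokens)  # discrete action tokens, decoded downstream into motor commands
```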
RT-2 selects objects based on comparative properties (smallest, largest, closest to another object, matching a description) by reasoning about visual relationships and semantic attributes. The model processes the visual scene, understands the comparative property being requested, and identifies the target object — this capability emerges from vision-language pre-training on tasks like VQA that require comparative reasoning. The selected object is then grounded to robot actions for manipulation.
Unique: Performs comparative reasoning over visual scenes without explicit object detection or segmentation modules — the vision-language transformer jointly processes the image and comparative instruction to identify and select the target object as part of end-to-end action prediction.
vs alternatives: Avoids the need for separate object detection, classification, and comparison modules by leveraging semantic understanding from vision-language pre-training, enabling more flexible and generalizable object selection compared to template-based or rule-based approaches.
RT-2 adapts robot behavior based on contextual information inferred from visual observations and task descriptions. For example, when instructed to 'select an appropriate drink for a sleepy person,' the model reasons about the person's state, the available drinks, and task-specific appropriateness — this contextual reasoning emerges from the vision-language pre-training's ability to understand human states, object properties, and task semantics. The model then selects and manipulates the appropriate object.
Unique: Infers task context and adapts behavior through joint vision-language reasoning rather than explicit context modeling or rule-based adaptation — the transformer learns to understand contextual appropriateness from vision-language pre-training and applies it to robot action selection.
vs alternatives: Enables context-aware robot behavior without explicit context representation or rule engineering by leveraging semantic understanding from web-scale vision-language pre-training, allowing more natural and flexible adaptation to diverse task scenarios.
RT-2 generalizes to object categories not seen during robot training by leveraging semantic understanding from Internet-scale vision-language pre-training. When encountering a novel object, the model recognizes its visual features and semantic properties (learned from web-scale data), maps those properties to appropriate manipulation strategies, and executes actions — this transfer occurs without explicit fine-tuning on the novel object category. The co-training approach ensures that visual and semantic knowledge from web-scale data directly informs robot action selection.
Unique: Transfers semantic and visual understanding from Internet-scale vision-language pre-training directly to novel object manipulation without explicit fine-tuning — the co-training approach ensures that web-scale knowledge informs action selection for unseen object categories.
vs alternatives: Achieves better generalization to novel objects than robot-specific training approaches because it leverages semantic understanding from web-scale vision-language data, reducing dependence on comprehensive robot training data for every object category.
RT-2 is trained through a co-training approach that jointly optimizes on Internet-scale vision-language tasks (VQA, visual reasoning) and robot trajectory data, maintaining some original vision-language data during training. This approach transfers semantic and visual understanding from web-scale data to robotic control by representing actions as text tokens integrated into the language model vocabulary. The co-training ensures that the model learns generalizable visual and semantic concepts before specializing to robot-specific action prediction.
Unique: Co-trains on Internet-scale vision-language tasks alongside robot trajectory data, maintaining some original vision-language data during training to preserve semantic understanding — this approach integrates actions as text tokens into the language model vocabulary, enabling joint optimization across vision, language, and action modalities.
vs alternatives: Achieves better generalization and sample efficiency than robot-only training by leveraging Internet-scale vision-language knowledge, and avoids the need for separate vision, language, and action modules by representing actions as text tokens within a unified transformer architecture.
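A minimal sketch of the co-training data mixture, assuming a simple per-batch sampling scheme; the mixing ratio, example contents, and data shapes are illustrative, not RT-2's actual recipe:

```python
import random

# Each training batch samples from both web-scale vision-language examples and
# robot trajectories, so the model keeps its semantic grounding while learning
# action prediction. The 50/50 ratio below is an illustrative placeholder.
def mixed_batches(web_vl_data, robot_data, robot_fraction=0.5, batch_size=8):
    while True:
        batch = []
        for _ in range(batch_size):
            source = robot_data if random.random() < robot_fraction else web_vl_data
            batch.append(random.choice(source))
        yield batch  # every example is (image, text prompt, target text/action tokens)

web_vl = [("img_web_1", "What is in the image?", "a red apple")]
robot = [("img_robot_1", "pick up the apple", "1 128 91 241 5 101 127")]
first_batch = next(mixed_batches(web_vl, robot, batch_size=4))
```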
RT-2 represents robot actions as discrete text tokens integrated into the language model's vocabulary, enabling the model to predict actions using the same token prediction mechanism as language generation. This approach allows actions to be expressed alongside natural language reasoning and intermediate steps, and leverages the transformer's language modeling capabilities for action prediction. Actions are decoded from text tokens into robot-specific motor commands through an integration layer.
Unique: Represents robot actions as discrete text tokens within the language model vocabulary rather than as separate continuous or discrete action outputs — this enables joint reasoning over vision, language, and actions within a unified transformer architecture.
vs alternatives: Integrates action prediction with language reasoning and intermediate steps within a single model, avoiding the need for separate action modules and enabling more natural expression of multi-step reasoning compared to models with separate action heads or policy networks.
+2 more capabilities
Captures desktop screenshots and feeds them to 100+ integrated vision-language models (Claude, GPT-4V, Gemini, local models via adapters) to reason about UI state and determine appropriate next actions. Uses a unified message format (Responses API) across heterogeneous model providers, enabling the agent to understand visual context and generate structured action commands without brittle selector-based logic.
Unique: Implements a unified Responses API message format abstraction layer that normalizes outputs from 100+ heterogeneous VLM providers (native computer-use models like Claude, composed models via grounding adapters, and local model adapters), eliminating provider-specific parsing logic and enabling seamless model swapping without agent code changes.
vs alternatives: Broader model coverage and provider flexibility than Anthropic's native computer-use API alone, with explicit support for local/open-source models and a standardized message format that decouples agent logic from model implementation details.
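The sketch below illustrates the general shape of such an abstraction layer; the class and field names are hypothetical and are not cua's actual Responses API types (real adapters would call the provider SDK where the canned return values appear):

```python
from typing import Any, Protocol

class VLMAdapter(Protocol):
    def complete(self, screenshot_png: bytes, instruction: str) -> dict[str, Any]:
        """Return a normalized message: {"reasoning": str, "actions": [...]}."""

class NativeComputerUseAdapter:
    def complete(self, screenshot_png: bytes, instruction: str) -> dict[str, Any]:
        # A real adapter would send the screenshot to a native computer-use model
        # and translate its tool calls into the normalized shape.
        return {"reasoning": "clicking the submit button",
                "actions": [{"type": "click", "x": 412, "y": 305}]}

class LocalModelAdapter:
    def complete(self, screenshot_png: bytes, instruction: str) -> dict[str, Any]:
        # A real adapter would run a local VLM plus a grounding adapter and emit
        # the same normalized shape.
        return {"reasoning": "typing into the search field",
                "actions": [{"type": "type", "text": instruction}]}

def next_actions(model: VLMAdapter, screenshot: bytes, instruction: str) -> list[dict]:
    # Agent code only sees the normalized format, so models can be swapped
    # without touching the loop logic.
    return model.complete(screenshot, instruction)["actions"]

print(next_actions(NativeComputerUseAdapter(), b"<png bytes>", "open settings"))
```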
Provisions isolated execution environments across macOS (via Lume VMs), Linux (Docker), Windows (Windows Sandbox), and host OS, with unified provider abstraction. Handles VM/container lifecycle (creation, snapshot management, cleanup), resource allocation, and OS-specific action handlers (keyboard/mouse events, clipboard, file system access) through a pluggable provider architecture that abstracts platform differences.
Unique: Implements a pluggable provider architecture with unified Computer interface that abstracts OS-specific action handlers (macOS native events via Lume, Linux X11/Wayland via Docker, Windows input simulation via Windows Sandbox API), enabling single agent code to target multiple platforms. Includes Lume VM management with snapshot/restore capabilities for deterministic testing.
vs alternatives: More comprehensive OS coverage than single-platform solutions; Lume provider offers native macOS VM support with snapshot capabilities unavailable in Docker-only alternatives, while unified provider abstraction reduces code duplication vs. platform-specific agent implementations.
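A schematic of the pluggable-provider idea, with hypothetical class and method names rather than cua's real Computer interface:

```python
from abc import ABC, abstractmethod

class ComputerProvider(ABC):
    """One interface for every platform; agent code never sees OS specifics."""
    @abstractmethod
    def start(self) -> None: ...
    @abstractmethod
    def screenshot(self) -> bytes: ...
    @abstractmethod
    def click(self, x: int, y: int) -> None: ...
    @abstractmethod
    def type_text(self, text: str) -> None: ...
    @abstractmethod
    def stop(self) -> None: ...

class DockerLinuxProvider(ComputerProvider):
    """Would drive a Linux container with an X11/Wayland display."""
    def start(self) -> None: ...
    def screenshot(self) -> bytes: return b""
    def click(self, x: int, y: int) -> None: ...
    def type_text(self, text: str) -> None: ...
    def stop(self) -> None: ...

# A macOS (Lume VM) or Windows (Windows Sandbox) provider implements the same
# interface, so a single agent targets any platform by swapping the provider.
```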
cua scores higher overall at 53/100 vs RT-2's 42/100: the two are tied on adoption, while cua is stronger on quality and ecosystem.
Provides Lume provider for provisioning and managing macOS virtual machines with native support for snapshot creation, restoration, and cleanup. Handles VM lifecycle (boot, shutdown, resource allocation) with optimized startup times. Integrates with image registry for VM image management and caching. Supports both Apple Silicon and Intel Macs. Enables deterministic testing through snapshot-based environment reset between agent runs.
Unique: Implements Lume provider with native macOS VM management including snapshot/restore capabilities for deterministic testing, optimized startup times, and image registry integration. Supports both Apple Silicon and Intel Macs with unified provider interface.
vs alternatives: More efficient than Docker for macOS because Lume uses native virtualization (Virtualization Framework) vs. Docker's slower emulation; snapshot/restore enables faster environment reset vs. full VM recreation.
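A sketch of snapshot-based deterministic test resets; LumeVM, its methods, and the snapshot name are hypothetical stand-ins for whatever the Lume provider actually exposes:

```python
class LumeVM:
    """Hypothetical wrapper around a macOS VM with snapshot/restore support."""
    def __init__(self, image: str):
        self.image = image
    def snapshot(self, name: str) -> None: ...
    def restore(self, name: str) -> None: ...
    def run_agent(self, task: str) -> str:
        return f"result of {task!r}"

def run_suite(tasks: list[str]) -> list[str]:
    vm = LumeVM("macos-base-image")
    vm.snapshot("clean")                 # capture a known-good state once
    results = []
    for task in tasks:
        vm.restore("clean")              # every run starts from the same state
        results.append(vm.run_agent(task))
    return results
```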
Provides command-line interface (CLI) for quick-start agent execution, configuration, and testing without writing code. Includes Gradio-based web UI for interactive agent control, real-time monitoring, and trajectory visualization. CLI supports task specification, model selection, environment configuration, and result export. Web UI enables non-technical users to run agents and view execution traces with HUD visualization.
Unique: Implements both CLI and Gradio web UI for agent execution, with CLI supporting quick-start scenarios and web UI enabling interactive control and real-time monitoring with HUD visualization. Reduces barrier to entry for non-technical users.
vs alternatives: More accessible than SDK-only frameworks because CLI and web UI enable non-developers to run agents; Gradio integration provides quick UI prototyping vs. custom web development.
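As a rough illustration of the web-UI side, a Gradio interface can wrap an agent entry point in a few lines; run_agent below is a placeholder function, not cua's CLI or SDK call:

```python
import gradio as gr

def run_agent(task: str) -> str:
    # Placeholder: a real handler would kick off an agent run and return its trace.
    return f"(placeholder) executed task: {task}"

# gr.Interface maps a text box (task description) to a text output (result/trace),
# giving non-technical users a browser UI without writing code.
demo = gr.Interface(fn=run_agent, inputs="text", outputs="text",
                    title="Computer-use agent")

if __name__ == "__main__":
    demo.launch()
```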
Implements Docker provider for running agents in containerized Linux environments with full isolation. Handles container lifecycle (creation, cleanup), image management, and volume mounting for persistent storage. Supports custom Dockerfiles for environment customization. Provides X11/Wayland display server integration for GUI application interaction. Enables reproducible agent execution across different host systems.
Unique: Implements Docker provider with X11/Wayland display server integration for GUI application interaction, container lifecycle management, and custom Dockerfile support. Enables reproducible agent execution across different host systems with container isolation.
vs alternatives: More lightweight than VMs because Docker uses container isolation vs. full virtualization; X11 integration enables GUI application support vs. headless-only alternatives.
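For illustration only (not cua's implementation), the usual pattern for GUI-capable containers is to mount the host X11 socket and set DISPLAY; the image name and display number here are assumptions:

```python
import docker  # docker-py SDK

# Run a GUI-capable Linux container with the host X11 socket mounted so desktop
# applications inside the container can render to a display.
client = docker.from_env()
container = client.containers.run(
    "my-gui-image:latest",                     # assumed image with a GUI app installed
    detach=True,
    environment={"DISPLAY": ":0"},
    volumes={"/tmp/.X11-unix": {"bind": "/tmp/.X11-unix", "mode": "rw"}},
)
try:
    print(container.logs().decode())           # inspect startup output
finally:
    container.stop()
    container.remove()                         # lifecycle cleanup: stop and delete
```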
Implements Windows Sandbox provider for isolated agent execution on Windows 10/11 Pro/Enterprise, and host provider for direct OS execution. Windows Sandbox provider creates ephemeral sandboxed environments with automatic cleanup. Host provider enables direct agent execution on live Windows system without isolation. Both providers support native Windows input simulation (SendInput API) and clipboard operations. Handles Windows-specific action execution (window management, registry access).
Unique: Implements both Windows Sandbox provider (ephemeral isolated environments with automatic cleanup) and host provider (direct OS execution) with native Windows input simulation (SendInput API) and clipboard support. Handles Windows-specific action execution including window management.
vs alternatives: Windows Sandbox provides better isolation than host execution while avoiding VM overhead; native SendInput API enables more reliable input simulation than generic input methods.
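The following is the general Win32 SendInput pattern via ctypes (Windows-only), shown as an illustration of native input simulation rather than cua's actual handler code:

```python
import ctypes
from ctypes import wintypes

ULONG_PTR = ctypes.c_size_t  # pointer-sized unsigned integer (ULONG_PTR in Win32)

class MOUSEINPUT(ctypes.Structure):
    _fields_ = [("dx", wintypes.LONG), ("dy", wintypes.LONG),
                ("mouseData", wintypes.DWORD), ("dwFlags", wintypes.DWORD),
                ("time", wintypes.DWORD), ("dwExtraInfo", ULONG_PTR)]

class KEYBDINPUT(ctypes.Structure):
    _fields_ = [("wVk", wintypes.WORD), ("wScan", wintypes.WORD),
                ("dwFlags", wintypes.DWORD), ("time", wintypes.DWORD),
                ("dwExtraInfo", ULONG_PTR)]

class _INPUTUNION(ctypes.Union):
    _fields_ = [("mi", MOUSEINPUT), ("ki", KEYBDINPUT)]

class INPUT(ctypes.Structure):
    _fields_ = [("type", wintypes.DWORD), ("u", _INPUTUNION)]

INPUT_KEYBOARD = 1
KEYEVENTF_KEYUP = 0x0002

def press_key(vk: int) -> None:
    """Send a key-down followed by a key-up event for the given virtual-key code."""
    down = INPUT(type=INPUT_KEYBOARD)
    down.u.ki = KEYBDINPUT(wVk=vk)
    up = INPUT(type=INPUT_KEYBOARD)
    up.u.ki = KEYBDINPUT(wVk=vk, dwFlags=KEYEVENTF_KEYUP)
    events = (INPUT * 2)(down, up)
    ctypes.windll.user32.SendInput(2, events, ctypes.sizeof(INPUT))

press_key(0x41)  # virtual-key code for 'A': types a lowercase 'a' into the focused window
```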
Implements comprehensive telemetry and logging infrastructure capturing agent execution metrics (latency, token usage, action success rate), errors, and performance data. Supports structured logging with contextual information (task ID, agent ID, timestamp). Integrates with external monitoring systems (e.g., Datadog, CloudWatch) for centralized observability. Provides error categorization and automatic error recovery suggestions. Enables debugging through detailed execution logs with configurable verbosity levels.
Unique: Implements structured telemetry and logging system with contextual information (task ID, agent ID, timestamp), error categorization, and automatic error recovery suggestions. Integrates with external monitoring systems for centralized observability.
vs alternatives: More comprehensive than basic logging because it captures metrics and structured context; integration with external monitoring enables centralized observability vs. log file analysis.
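A minimal sketch of structured, context-carrying logs using Python's standard logging module; the field names and IDs are illustrative, not the framework's actual schema:

```python
import json
import logging
import time

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log record, including contextual fields."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": record.created,
            "level": record.levelname,
            "msg": record.getMessage(),
            "task_id": getattr(record, "task_id", None),
            "agent_id": getattr(record, "agent_id", None),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("agent")
log.addHandler(handler)
log.setLevel(logging.INFO)

start = time.monotonic()
# ... one agent step would run here ...
log.info("step complete", extra={"task_id": "t-42", "agent_id": "a-1"})
log.info("latency_ms=%d", int((time.monotonic() - start) * 1000),
         extra={"task_id": "t-42", "agent_id": "a-1"})
```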
Implements the core agent loop (screenshot → LLM reasoning → action execution → repeat) via the ComputerAgent class, with pluggable callback system and custom loop support. Developers can override loop behavior at multiple extension points: custom agent loops (modify reasoning/action selection), custom tools (add domain-specific actions), and callback hooks (inject monitoring/logging). Supports both synchronous and asynchronous execution patterns.
Unique: Provides a callback-based extension system with multiple hook points (pre/post action, loop iteration, error handling) and explicit support for custom agent loop subclassing, allowing developers to override core loop logic without forking the framework. Supports both native computer-use models and composed models with grounding adapters.
vs alternatives: More flexible than frameworks with fixed loop logic; callback system enables non-invasive monitoring/logging vs. requiring loop subclassing, while custom loop support accommodates novel agent architectures that standard loops cannot express.
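A schematic version of such a loop with a callback hook, using hypothetical names rather than the real ComputerAgent API:

```python
from typing import Callable, Optional

class AgentLoop:
    """Screenshot -> model reasoning -> action execution, repeated until done."""
    def __init__(self, computer, model,
                 on_step: Optional[Callable[[dict], None]] = None):
        self.computer = computer        # provides screenshot() and execute(action)
        self.model = model              # maps (screenshot, task) -> list of actions
        self.on_step = on_step          # monitoring/logging hook, injected non-invasively

    def run(self, task: str, max_steps: int = 20) -> None:
        for step in range(max_steps):
            shot = self.computer.screenshot()
            actions = self.model(shot, task)
            if self.on_step:
                self.on_step({"step": step, "actions": actions})
            if not actions:             # model signals completion by returning no actions
                break
            for action in actions:
                self.computer.execute(action)
```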
+7 more capabilities