distributed task execution with actor model and compiled dags
Ray Core executes Python functions as stateless tasks and Python classes as stateful actors across a cluster, with optional compiled DAG (compiled graph) acceleration. Tasks are submitted to Raylets (per-node schedulers) that manage local execution, while the Global Control Store (GCS) coordinates cluster-wide state. Compiled DAGs reduce per-invocation submission overhead by planning the execution graph once and reusing it, lowering latency for repeatedly executed workflows.
Unique: Combines stateless tasks and stateful actors with compiled DAG acceleration and per-node Raylet schedulers, supporting both long-lived stateful services and optimized batch execution in a single framework. The shared-memory object store supports zero-copy reads of Arrow and NumPy data between workers on the same node, reducing memory overhead compared with systems that copy data into every process.
vs alternatives: Faster than Dask for complex stateful workloads because actors persist state across calls; more flexible than Spark for arbitrary Python code without DataFrame constraints; lower latency than Kubernetes Job orchestration because tasks are scheduled inside an already-running cluster rather than by launching new pods.
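A minimal sketch of the task and actor primitives described above, assuming a local or already-running cluster; the square and Counter definitions are illustrative, not from Ray's docs, and the compiled DAG API is omitted:

```python
import ray

ray.init()  # connects to an existing cluster if one is running, else starts a local one

# A stateless task: the function runs on any node with a free CPU.
@ray.remote
def square(x):
    return x * x

# A stateful actor: the instance lives on one node and keeps state between calls.
@ray.remote
class Counter:
    def __init__(self):
        self.total = 0

    def add(self, value):
        self.total += value
        return self.total

# Task calls return object refs immediately; ray.get blocks for the results.
refs = [square.remote(i) for i in range(4)]
print(ray.get(refs))  # [0, 1, 4, 9]

counter = Counter.remote()
print(ray.get(counter.add.remote(10)))  # 10
```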
distributed data processing with streaming execution and resource-aware scheduling
Ray Data provides a distributed DataFrame-like API (Dataset) that executes transformations (map, filter, groupby, aggregate) in streaming fashion across cluster nodes. Unlike batch systems, Ray Data schedules tasks based on available resources and data locality, pulling data through the object store in chunks. Supports multiple data sources (Parquet, CSV, S3, Delta Lake) and sinks, with automatic partitioning and lazy evaluation until .materialize() or action calls trigger execution.
Unique: Uses streaming execution with resource-aware scheduling (respects CPU/GPU/memory constraints per task) rather than bulk batch processing. Integrates with Ray's object store for zero-copy data passing and supports loaders from the ML ecosystem (Hugging Face Datasets, LlamaIndex) for training corpus preparation.
vs alternatives: Faster than Spark for unstructured data and ML preprocessing due to streaming + resource awareness; more flexible than Pandas for distributed operations; tighter integration with Ray Train/Serve for end-to-end ML pipelines.
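A sketch of a streaming Ray Data pipeline along these lines; the bucket paths and column names are placeholders:

```python
import ray

# Lazy, distributed dataset; nothing executes until an action or materialize() call.
ds = ray.data.read_parquet("s3://my-bucket/events/")  # path is a placeholder

def add_flag(batch):
    # Batches arrive as dicts of columns (NumPy arrays by default).
    batch["is_large"] = batch["size_bytes"] > 1_000_000
    return batch

ds = ds.map_batches(add_flag)                 # streaming transform across blocks
ds = ds.filter(lambda row: row["is_large"])   # per-row filter

# Trigger execution: write results to a sink (or call materialize() to pin them).
ds.write_parquet("s3://my-bucket/large-events/")
```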
batch inference with ray data and model serving integration
Ray Data enables large-scale batch inference by applying a model to a distributed dataset. Users define a UDF (typically a callable class that loads the model once per worker) and apply it to batches of data with Ray Data's map_batches(), which parallelizes inference across blocks. Integrates with Ray Serve for serving the same model as an HTTP endpoint, enabling code reuse between batch and online inference. Supports automatic batching, per-task GPU allocation, and writing results to cloud storage.
Unique: Integrates Ray Data's distributed dataset API with Ray Serve's model serving, enabling the same model code to be used for batch inference (via map UDFs) and online serving (via HTTP endpoints). Automatic GPU allocation per task enables efficient inference on heterogeneous hardware.
vs alternatives: More flexible than Spark MLlib for custom inference logic; simpler than Kubernetes batch jobs for distributed inference; tighter integration with Ray Serve for online/batch model serving.
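A sketch of the batch-inference pattern with a trivial stand-in model so it stays self-contained; in practice __init__ would load real weights, the input would come from read_parquet or similar, and the actor-pool argument (concurrency below) has shifted names between Ray versions:

```python
import ray

class Predictor:
    def __init__(self):
        # In real use, load your model weights here (once per actor, not per batch).
        # A trivial stand-in model keeps this sketch runnable.
        self.model = lambda features: features * 2.0

    def __call__(self, batch: dict) -> dict:
        batch["prediction"] = self.model(batch["value"])
        return batch

# Illustrative input; in practice this would be read from cloud storage.
ds = ray.data.from_items([{"value": float(i)} for i in range(1000)])

predictions = ds.map_batches(
    Predictor,
    batch_size=128,
    concurrency=2,   # number of Predictor actors (older Ray uses ActorPoolStrategy instead)
    # num_gpus=1,    # uncomment to reserve one GPU per actor
)
predictions.show(3)
```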
job submission and scheduling with resource isolation and priority queues
Ray Jobs API allows submitting Python scripts or entrypoint commands as isolated jobs to a Ray cluster, where they are queued and scheduled against available cluster resources. Each job runs under its own driver in a separate namespace with isolated actor/task state, preventing interference between concurrent jobs. Jobs can be submitted via the CLI (ray job submit) or the Python SDK, with support for dependency specification (runtime environments) and status and log retrieval. Integrates with Ray's autoscaler so the cluster can grow when a job's resource requests exceed current capacity.
Unique: Jobs API provides logical isolation via namespaces, preventing actor/task name collisions between concurrent jobs. Integrates with Ray's autoscaler to automatically scale cluster based on job resource requirements, enabling efficient multi-tenant resource sharing.
vs alternatives: Simpler than Kubernetes Jobs for Ray workload submission; more flexible than Slurm for ML-specific job management; tighter integration with Ray's resource management than external job schedulers.
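A sketch of job submission via the Python SDK; the dashboard address, entrypoint script, and pip packages are placeholders:

```python
from ray.job_submission import JobSubmissionClient

# Address of the Ray dashboard / job server; adjust for your cluster.
client = JobSubmissionClient("http://127.0.0.1:8265")

job_id = client.submit_job(
    entrypoint="python train.py",  # placeholder script
    runtime_env={"pip": ["torch", "pandas"], "working_dir": "./"},
)

print(client.get_job_status(job_id))
print(client.get_job_logs(job_id))

# CLI equivalent (roughly):
#   ray job submit --address http://127.0.0.1:8265 --working-dir . -- python train.py
```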
global control store (gcs) for cluster state management and coordination
Ray's Global Control Store (GCS) is a metadata service running on the head node (optionally backed by Redis for fault tolerance) that maintains cluster state: node membership, task/actor metadata, object locations, and job status. All Ray components (head node processes, Raylets, workers) query the GCS for cluster topology and coordinate through it. This enables task scheduling (Raylets consult GCS-published node state), object location tracking (workers find objects via the GCS), and fault recovery (the GCS detects node failures and triggers re-submission of affected tasks).
Unique: GCS serves as a centralized metadata service for distributed coordination, enabling Raylets to make scheduling decisions based on global cluster state without direct communication. Integrates with Ray's fault detection to automatically re-submit tasks when nodes fail.
vs alternatives: More efficient than peer-to-peer coordination for large clusters; simpler than Zookeeper for Ray-specific coordination; tighter integration with Ray's task scheduler and object store.
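The GCS itself is internal, but the cluster state it maintains is what application-level calls like the following return; a small sketch assuming a running cluster:

```python
import ray

ray.init()

# Node membership and per-node resources, as tracked by the GCS.
for node in ray.nodes():
    print(node["NodeID"], node["Alive"], node["Resources"])

# Aggregate views of cluster resources, also served from GCS-maintained state.
print(ray.cluster_resources())
print(ray.available_resources())
```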
kubernetes integration via kuberay for native cluster deployment
KubeRay is a Kubernetes operator that manages Ray clusters as Kubernetes custom resources (RayCluster). It enables declarative Ray cluster definition via YAML, automatic scaling of worker groups via the Ray autoscaler integration, and integration with Kubernetes networking and storage. KubeRay handles the lifecycle of Ray head and worker pods, including health checks, rolling updates, and resource requests/limits. Supports the Ray Jobs API (and the RayJob custom resource) for submitting jobs to KubeRay-managed clusters.
Unique: KubeRay implements the Kubernetes operator pattern for Ray cluster management, enabling declarative cluster definition and native Kubernetes integration (networking, storage, RBAC). Scaling can be driven by Ray's native autoscaler, which adjusts worker group replicas based on task and actor resource demand, alongside standard Kubernetes replica management.
vs alternatives: More Kubernetes-native than Ray's cloud autoscaler; simpler than manual Kubernetes deployment manifests; tighter integration with Kubernetes ecosystem (Istio, Prometheus, etc.).
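A sketch of creating a RayCluster custom resource programmatically with the Kubernetes Python client; the field names follow the KubeRay CRD, but the image tags, names, and replica counts are placeholders and the exact schema should be checked against the KubeRay docs for your version:

```python
from kubernetes import client, config

config.load_kube_config()

# Minimal RayCluster spec expressed as a dict (equivalent to the usual YAML manifest).
ray_cluster = {
    "apiVersion": "ray.io/v1",
    "kind": "RayCluster",
    "metadata": {"name": "demo-cluster"},
    "spec": {
        "rayVersion": "2.9.0",
        "headGroupSpec": {
            "rayStartParams": {"dashboard-host": "0.0.0.0"},
            "template": {"spec": {"containers": [
                {"name": "ray-head", "image": "rayproject/ray:2.9.0"}
            ]}},
        },
        "workerGroupSpecs": [{
            "groupName": "workers",
            "replicas": 2,
            "minReplicas": 1,
            "maxReplicas": 5,
            "rayStartParams": {},
            "template": {"spec": {"containers": [
                {"name": "ray-worker", "image": "rayproject/ray:2.9.0"}
            ]}},
        }],
    },
}

# Apply the custom resource; the operator then creates and manages the pods.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="ray.io", version="v1", namespace="default",
    plural="rayclusters", body=ray_cluster,
)
```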
distributed model training with framework integration and fault tolerance
Ray Train (v2) abstracts distributed training across PyTorch, TensorFlow, and HuggingFace Transformers using a controller-worker architecture. The controller coordinates training state and checkpointing, while workers execute training loops with automatic distributed data loading. Supports multi-node distributed training (DDP, DeepSpeed), automatic fault recovery via checkpointing, and integration with Ray Tune for hyperparameter search. Handles dependency installation via runtime environments and GPU/CPU resource allocation.
Unique: Train v2 uses a controller-worker pattern where the controller manages run state and checkpointing separately from the worker training loops, so failed workers can be restarted from the latest checkpoint without tearing down the whole job. Integrates runtime environments for automatic dependency installation across nodes and supports mixed-precision training via framework-native APIs.
vs alternatives: Simpler than raw PyTorch DDP for multi-node setups (no manual rank/world_size management); more flexible than Hugging Face Accelerate for heterogeneous clusters; tighter integration with Ray Tune for AutoML workflows.
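A minimal Train sketch with TorchTrainer; the model, data, and hyperparameters are stand-ins, and a real loop would use prepared DataLoaders and checkpointing:

```python
import torch
import torch.nn as nn

import ray.train
import ray.train.torch
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop_per_worker(config):
    # Each worker runs this loop; Ray Train sets up the process group and device placement.
    model = ray.train.torch.prepare_model(nn.Linear(10, 1))
    optimizer = torch.optim.SGD(model.parameters(), lr=config["lr"])

    for epoch in range(config["epochs"]):
        x = torch.randn(32, 10)  # stand-in data; real loops use a prepared DataLoader
        loss = ((model(x) - 1.0) ** 2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Report metrics (and optionally checkpoints) back to the Train controller.
        ray.train.report({"epoch": epoch, "loss": loss.item()})

trainer = TorchTrainer(
    train_loop_per_worker,
    train_loop_config={"lr": 1e-2, "epochs": 3},
    scaling_config=ScalingConfig(num_workers=2, use_gpu=False),
)
result = trainer.fit()
print(result.metrics)
```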
hyperparameter tuning with search algorithms and trial scheduling
Ray Tune executes hyperparameter search by spawning multiple training trials (each a Ray actor) and scheduling them based on available resources. Supports multiple search algorithms (grid, random, Bayesian optimization via Optuna, population-based training) and early stopping schedulers (ASHA, median stopping rule). Each trial reports metrics back to Tune's trial manager, which decides whether to continue, pause, or terminate based on scheduler logic. Integrates with Ray Train for distributed training trials and Ray Serve for model evaluation.
Unique: Combines multiple search algorithms (grid, random, Bayesian, PBT) in a unified trial scheduling framework where the scheduler controls trial lifecycle (pause/resume/terminate) based on reported metrics. ASHA scheduler implements successive halving to eliminate poor trials exponentially, reducing wasted compute.
vs alternatives: More efficient than grid search due to early stopping and adaptive scheduling; more flexible than Optuna standalone for distributed trials; tighter integration with Ray Train for multi-node training trials.
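A Tune sketch with an ASHA scheduler over a synthetic objective; note that the metric-reporting call has moved between ray.train.report and tune.report across Ray versions:

```python
from ray import train, tune
from ray.tune.schedulers import ASHAScheduler

def trainable(config):
    # Synthetic objective: higher lr converges faster toward 1.0 in this toy model.
    score = 0.0
    for step in range(100):
        score += config["lr"] * (1.0 - score)
        # Each report lets the scheduler decide whether to continue or stop this trial.
        train.report({"score": score, "step": step})

tuner = tune.Tuner(
    trainable,
    param_space={"lr": tune.loguniform(1e-4, 1e-1)},
    tune_config=tune.TuneConfig(
        num_samples=20,
        metric="score",
        mode="max",
        scheduler=ASHAScheduler(),  # successive halving: stops underperforming trials early
    ),
)
results = tuner.fit()
print(results.get_best_result().config)
```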
+6 more capabilities