Capability: Gradient Accumulation With Distributed Synchronization
2 artifacts provide this capability.
Easy distributed training: abstracts PyTorch distributed (DDP), DeepSpeed, and FSDP behind a simple API.
Unique: Provides a unified gradient_accumulation_steps parameter that abstracts backend-specific synchronization (DDP's no_sync context, DeepSpeed's native accumulation, FSDP's deferred reduce-scatter) instead of requiring users to manage synchronization contexts by hand, which reduces misconfiguration risk. A sketch of the pattern follows.
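The listing does not name the artifact, so the sketch below assumes a Hugging Face Accelerate-style API, where Accelerator(gradient_accumulation_steps=...) and accelerator.accumulate(model) are the unified entry points; the toy model, optimizer, and data are illustrative only, and the specific library is an assumption.

```python
# Minimal sketch of the unified-accumulation pattern, written against
# Hugging Face Accelerate (assumed; the listing does not name the artifact).
# The single gradient_accumulation_steps argument replaces backend-specific
# handling: under DDP the library enters no_sync() on non-boundary steps,
# under DeepSpeed it defers to the engine's native accumulation, etc.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4)

model = nn.Linear(16, 1)                      # toy model for illustration
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
dataset = TensorDataset(torch.randn(256, 16), torch.randn(256, 1))
dataloader = DataLoader(dataset, batch_size=8)

# prepare() wraps the objects for whichever backend is configured
# (DDP, DeepSpeed, FSDP) without backend-specific code here.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

loss_fn = nn.MSELoss()
for inputs, targets in dataloader:
    # accumulate() skips gradient synchronization on non-boundary steps;
    # the prepared optimizer only applies an update on accumulation boundaries.
    with accelerator.accumulate(model):
        loss = loss_fn(model(inputs), targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```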
vs others: Simpler than manual no_sync context management and more efficient than naive accumulation (which synchronizes gradients on every micro-step); automatically selects the backend-optimal synchronization strategy. The manual approach it replaces is sketched below for contrast.
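For contrast, a minimal sketch of the manual DDP route the listing calls more error-prone: the user decides on every step whether to enter no_sync() or allow the all-reduce, scales the loss, and steps the optimizer only on boundaries. Process-group setup and device placement are omitted; train_epoch and accum_steps are illustrative names, and the model is assumed to already be wrapped in torch.nn.parallel.DistributedDataParallel.

```python
# Manual gradient accumulation under plain PyTorch DDP (sketch only;
# distributed setup is omitted and assumed done elsewhere).
import contextlib
import torch

def train_epoch(ddp_model, optimizer, dataloader, loss_fn, accum_steps=4):
    optimizer.zero_grad()
    for step, (inputs, targets) in enumerate(dataloader):
        is_boundary = (step + 1) % accum_steps == 0
        # Enter no_sync() on non-boundary steps to skip the gradient
        # all-reduce; omitting this is the naive version that synchronizes
        # on every micro-step.
        sync_ctx = contextlib.nullcontext() if is_boundary else ddp_model.no_sync()
        with sync_ctx:
            loss = loss_fn(ddp_model(inputs), targets) / accum_steps
            loss.backward()
        if is_boundary:
            optimizer.step()
            optimizer.zero_grad()
```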