Capability
Memory Efficient Training With Gradient Checkpointing
20 artifacts provide this capability.
Top Matches
via “memory-efficient inference with attention slicing and gradient checkpointing”
Text-to-image model. 1,528,067 downloads.
Unique: Provides optional attention slicing and gradient checkpointing as first-class pipeline features, enabling fine-grained memory-compute tradeoffs without changes to model code; slicing is applied transparently during inference.
vs others: More flexible than fixed memory budgets; attention slicing is simpler than custom kernels (e.g. xFormers) but less efficient; gradient checkpointing uses standard PyTorch machinery but must be enabled explicitly.
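To illustrate why slicing can be applied transparently, here is a minimal NumPy sketch of the idea (not this pipeline's actual implementation; the function names are illustrative). Queries are processed in chunks so peak memory holds only a `(slice_size, n_k)` score matrix instead of the full `(n_q, n_k)` one, while the result is numerically the same:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_full(q, k, v):
    # Full attention: materializes the entire (n_q, n_k) score matrix at once.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def attention_sliced(q, k, v, slice_size):
    # Sliced attention: process queries in chunks; each row's softmax is
    # independent, so the output matches full attention exactly.
    out = np.empty((q.shape[0], v.shape[1]))
    for start in range(0, q.shape[0], slice_size):
        chunk = q[start:start + slice_size]
        scores = chunk @ k.T / np.sqrt(q.shape[-1])
        out[start:start + slice_size] = softmax(scores) @ v
    return out

rng = np.random.default_rng(0)
q, k, v = rng.standard_normal((3, 64, 16))
assert np.allclose(attention_full(q, k, v), attention_sliced(q, k, v, slice_size=8))
```

Smaller slices lower peak memory but add loop overhead, which is the memory-compute tradeoff the listing describes.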
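Gradient checkpointing trades compute for memory by storing only some activations and recomputing the rest during the backward pass. A minimal NumPy sketch of the concept (illustrative only; real PyTorch code would use `torch.utils.checkpoint` rather than this hand-rolled backward):

```python
import numpy as np

def layer(x, w):
    return np.tanh(w * x)

def layer_grad(x, w, upstream):
    # d/dx tanh(w*x) = w * (1 - tanh(w*x)**2), chained with the upstream grad.
    y = np.tanh(w * x)
    return upstream * w * (1.0 - y * y)

def backward_full(x0, ws):
    # Baseline: store every activation during the forward pass.
    acts = [x0]
    for w in ws:
        acts.append(layer(acts[-1], w))
    grad = 1.0
    for i in reversed(range(len(ws))):
        grad = layer_grad(acts[i], ws[i], grad)
    return grad

def backward_checkpointed(x0, ws, every=4):
    # Checkpointing: store only every `every`-th activation on the way
    # forward, then recompute the intermediate ones during backward.
    ckpts = {0: x0}
    x = x0
    for i, w in enumerate(ws):
        x = layer(x, w)
        if (i + 1) % every == 0:
            ckpts[i + 1] = x
    grad = 1.0
    for i in reversed(range(len(ws))):
        start = (i // every) * every
        x = ckpts[start]
        for j in range(start, i):  # recompute activations inside the segment
            x = layer(x, ws[j])
        grad = layer_grad(x, ws[i], grad)
    return grad

ws = np.linspace(0.5, 1.5, 12)
assert np.isclose(backward_full(0.3, ws), backward_checkpointed(0.3, ws))
```

The checkpointed version stores O(n/every) activations instead of O(n), at the cost of one extra forward recomputation per segment, which is the standard checkpointing tradeoff.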