Capability
GPU Acceleration with CUDA and ROCm Support
2 artifacts provide this capability.
Single-file executable LLMs — bundle a model with its inference engine into one binary that runs on any OS with zero installation.
Unique: Detects CUDA or ROCm at runtime and routes tensor operations to the matching GPU kernels (the backend itself is selected at build time), so a single binary can use GPU acceleration without code changes.
vs others: 5-20x faster inference than CPU-only execution on modern GPUs, because matrix multiplications run on GPU cores rather than being bounded by CPU single-thread performance.
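The runtime detection described above can be sketched as a simple library probe. This is a minimal illustration, not the tool's actual mechanism: the library names `cudart` and `amdhip64` are assumptions that vary by platform and version.

```python
import ctypes.util

def detect_gpu_backend():
    """Return the first available GPU runtime, falling back to CPU.

    Probes for the CUDA and ROCm runtime shared libraries by name.
    Library names here are illustrative; real loaders also check
    driver versions and device capabilities before dispatching.
    """
    for backend, libname in (("cuda", "cudart"), ("rocm", "amdhip64")):
        if ctypes.util.find_library(libname):
            return backend
    return "cpu"

print(detect_gpu_backend())
```

On a machine with neither GPU runtime installed, the probe falls through to `"cpu"`, which mirrors how a single binary can degrade gracefully without code changes.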