Capability
Hardware Accelerator Delegation Via Execution Providers
2 artifacts provide this capability.
Top Matches
Cross-platform ONNX inference for mobile devices.
Unique: Implements transparent graph partitioning with automatic CPU fallback — if the selected accelerator does not support an operator, the runtime silently keeps that operator on CPU rather than failing, so models can run across device generations without modification. This is more robust than approaches that require developers to verify operator support per backend, such as TensorFlow Lite's delegate model.
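The partitioning idea above can be sketched in a few lines. This is an illustrative model of the technique, not ONNX Runtime's actual implementation; the names (`partition`, `ACCELERATOR_OPS`, the dict-based graph) are invented for the example.

```python
# Illustrative sketch: assign each graph node to the accelerator when its
# operator is supported, otherwise fall back to CPU transparently.
# The op set and graph format are hypothetical, not the ONNX Runtime API.

ACCELERATOR_OPS = {"Conv", "Relu", "MatMul"}  # ops the accelerator claims to support

def partition(graph):
    """Return a node-name -> target mapping with automatic CPU fallback."""
    assignment = {}
    for node in graph:
        supported = node["op"] in ACCELERATOR_OPS
        assignment[node["name"]] = "accelerator" if supported else "cpu"
    return assignment

graph = [
    {"name": "n0", "op": "Conv"},
    {"name": "n1", "op": "CustomOp"},  # unsupported: silently kept on CPU
    {"name": "n2", "op": "Relu"},
]
print(partition(graph))  # → {'n0': 'accelerator', 'n1': 'cpu', 'n2': 'accelerator'}
```

In ONNX Runtime itself, the equivalent effect comes from passing an ordered provider list to `InferenceSession` (e.g. an accelerator provider followed by `"CPUExecutionProvider"`); nodes the accelerator cannot handle are assigned to the next provider in the list.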
vs others: More flexible than native CoreML or NNAPI because it provides a unified API across iOS and Android with automatic fallback, whereas the native frameworks require platform-specific code and can fail outright when a model contains unsupported operators.