Automatic speech recognition with streaming audio input
Run frontier LLMs and VLMs with day-0 model support across GPU, NPU, and CPU, with comprehensive runtime coverage for PC (Python/C++), mobile (Android & iOS), and Linux/IoT (Arm64 & x86 Docker). Supports OpenAI GPT-OSS, IBM Granite-4, Qwen-3-VL, Gemma-3n, Ministral-3, and more.
Unique: A streaming ASR architecture with voice activity detection (VAD) processes audio incrementally and skips silent segments, cutting computation by 30-50% versus batch processing. GPU/NPU acceleration of acoustic-model inference enables real-time transcription on mobile devices.
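To illustrate the idea, here is a minimal sketch of a streaming front-end with an energy-based VAD gate. This is not the framework's actual VAD or acoustic model (neither is shown in this document); the frame size, RMS threshold, and the notion of "processing" a frame are all illustrative assumptions. The point is the structure: audio arrives in fixed-size frames, and only frames classified as voiced reach the (expensive) acoustic model.

```python
# Sketch of streaming ASR ingestion with an energy-based VAD gate.
# FRAME_SIZE, ENERGY_THRESHOLD, and stream_asr are illustrative
# assumptions, not the framework's real API.
import math
from typing import Iterable, List

FRAME_SIZE = 160          # 10 ms frames at 16 kHz (assumed)
ENERGY_THRESHOLD = 0.01   # assumed RMS threshold separating speech from silence

def frames(samples: List[float], size: int = FRAME_SIZE) -> Iterable[List[float]]:
    """Split the incoming stream into fixed-size frames."""
    for i in range(0, len(samples) - size + 1, size):
        yield samples[i:i + size]

def is_voiced(frame: List[float], threshold: float = ENERGY_THRESHOLD) -> bool:
    """Simple energy-based VAD: RMS energy above threshold counts as speech."""
    rms = math.sqrt(sum(s * s for s in frame) / len(frame))
    return rms >= threshold

def stream_asr(samples: List[float]) -> int:
    """Feed only voiced frames onward; skip silence.
    Returns the number of frames actually processed, for illustration."""
    processed = 0
    for frame in frames(samples):
        if is_voiced(frame):
            processed += 1  # a real system would run the acoustic model here
    return processed

# Synthetic stream: 0.5 s of silence followed by 0.5 s of a 440 Hz tone.
silence = [0.0] * 8000
tone = [0.5 * math.sin(2 * math.pi * 440 * t / 16000) for t in range(8000)]
total = len(silence + tone) // FRAME_SIZE
voiced = stream_asr(silence + tone)
print(voiced, total)  # prints "50 100": half the frames are silence and skipped
```

In this toy stream half the frames are silence, so the acoustic model runs on only half the input, which is the mechanism behind the compute savings claimed above (real savings depend on how much of the audio is actually silence).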
vs others: The only on-device ASR framework with streaming input and VAD. Ollama lacks ASR entirely, and cloud ASR APIs (Google, Amazon) incur network latency, making this the only option for real-time speech recognition on edge devices without an internet connection.