via “audio transcription with Whisper-compatible endpoints”
LocalAI is the open-source AI engine. Run any model - LLMs, vision, voice, image, video - on any hardware. No GPU required.
Unique: Implements an OpenAI-compatible /v1/audio/transcriptions endpoint with pluggable Whisper backends (whisper.cpp for speed, WhisperX for speaker diarization), supporting multiple audio formats and automatic language detection. Backend selection enables speed/accuracy trade-offs without changing client code.
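Because backend selection lives server-side, swapping whisper.cpp for a diarization-capable backend is a config change rather than a client change. A hypothetical sketch in the style of LocalAI's YAML model configs (the field names, backend identifiers, and model filename here are assumptions for illustration, not copied from LocalAI's documentation):

```yaml
# Hypothetical model config: clients request the model "whisper-1";
# the server routes those requests to the chosen Whisper backend.
name: whisper-1
backend: whisper                      # assumed id for whisper.cpp; swap for a
                                      # diarization backend (e.g. whisperx)
parameters:
  model: ggml-whisper-base.en.bin     # model filename is an assumption
```

Clients keep sending the same request either way; only the server-side mapping from model name to backend changes.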
vs others: Unlike the cloud Whisper API (latency, cost, data leaving your network) or single-backend solutions, LocalAI's pluggable architecture lets you choose between fast transcription (whisper.cpp) and feature-rich transcription with speaker diarization (WhisperX) based on use case.
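Since the endpoint mirrors OpenAI's, any OpenAI-style client works against it. A minimal stdlib-only sketch, assuming LocalAI on its default port 8080 and an OpenAI-shaped JSON response with a "text" field; the model name "whisper-1" and the audio content type are assumptions that depend on your server configuration:

```python
import json
import urllib.request
import uuid

# Assumed local endpoint; adjust host/port to your LocalAI deployment.
LOCALAI_URL = "http://localhost:8080/v1/audio/transcriptions"


def build_multipart(file_name: str, audio_bytes: bytes, model: str = "whisper-1"):
    """Build a multipart/form-data body with 'model' and 'file' fields,
    matching the OpenAI-style transcription request shape."""
    boundary = uuid.uuid4().hex
    parts = [
        (
            f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="model"\r\n\r\n{model}\r\n'
        ).encode(),
        (
            f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="file"; filename="{file_name}"\r\n'
            f"Content-Type: audio/wav\r\n\r\n"
        ).encode()
        + audio_bytes
        + b"\r\n",
        f"--{boundary}--\r\n".encode(),
    ]
    return b"".join(parts), f"multipart/form-data; boundary={boundary}"


def transcribe(path: str) -> str:
    """POST an audio file and return the transcript text."""
    with open(path, "rb") as f:
        body, content_type = build_multipart(path, f.read())
    req = urllib.request.Request(
        LOCALAI_URL, data=body, headers={"Content-Type": content_type}
    )
    with urllib.request.urlopen(req) as resp:
        # Assumes the OpenAI response shape: {"text": "..."}
        return json.load(resp)["text"]
```

Pointing an existing OpenAI SDK client at the same base URL works equally well; the sketch above only spells out the wire format the endpoint accepts.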