via “OpenAI-compatible REST API endpoint translation”
LocalAI is the open-source AI engine. Run any model - LLMs, vision, voice, image, video - on any hardware. No GPU required.
Unique: Implements the full OpenAI API surface (chat, completions, embeddings, images, audio, vision) as a stateless Go HTTP server that routes requests to pluggable gRPC backends, rather than wrapping a single inference engine. This polyglot backend architecture allows inference implementations (llama.cpp, Python diffusers, whisper) to be swapped without changing the API contract.
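To make the routing idea concrete, here is a minimal sketch of the pattern in Go, not LocalAI's actual code: a stateless handler for the OpenAI `/v1/chat/completions` route that dispatches by model name to a pluggable backend. The `Backend` interface, `llamaBackend` stub, and the `llama-3` model name are hypothetical stand-ins for LocalAI's real gRPC backend clients.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Backend stands in for a gRPC backend client (hypothetical interface;
// LocalAI's real backends speak gRPC, this just illustrates the shape).
type Backend interface {
	Predict(prompt string) (string, error)
}

// llamaBackend is a stub for a backend that would proxy to a llama.cpp process.
type llamaBackend struct{}

func (llamaBackend) Predict(prompt string) (string, error) {
	return "stub completion for: " + prompt, nil
}

// registry maps model names to backends; swapping an entry changes the
// inference implementation without touching the HTTP API contract.
var registry = map[string]Backend{
	"llama-3": llamaBackend{}, // placeholder model name
}

type chatRequest struct {
	Model    string `json:"model"`
	Messages []struct {
		Role    string `json:"role"`
		Content string `json:"content"`
	} `json:"messages"`
}

func chatCompletions(w http.ResponseWriter, r *http.Request) {
	var req chatRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	backend, ok := registry[req.Model]
	if !ok {
		http.Error(w, "unknown model: "+req.Model, http.StatusNotFound)
		return
	}
	prompt := ""
	if len(req.Messages) > 0 {
		prompt = req.Messages[len(req.Messages)-1].Content
	}
	out, err := backend.Predict(prompt)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	// Reply in the OpenAI chat-completions response shape.
	json.NewEncoder(w).Encode(map[string]any{
		"object": "chat.completion",
		"model":  req.Model,
		"choices": []map[string]any{
			{"index": 0, "message": map[string]string{"role": "assistant", "content": out}},
		},
	})
}

func main() {
	http.HandleFunc("/v1/chat/completions", chatCompletions)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Because the handler holds no per-model state, replacing a registry entry with a different backend (say, a diffusers process for image generation) changes the inference engine while the HTTP contract stays fixed.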
vs others: Unlike Ollama (built around a single inference engine, llama.cpp) or vLLM (GPU-centric), LocalAI's gRPC backend abstraction runs heterogeneous model types (LLM + vision + audio) on the same server, with resource management handled independently per backend, and it works on CPU-only hardware.
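A hedged sketch of what that heterogeneity looks like from the client side, assuming a LocalAI instance at `http://localhost:8080` with an LLM and an embedding model loaded (`llama-3` and `all-minilm` are placeholder names): the same OpenAI-shaped server answers both a chat request and an embeddings request.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

const base = "http://localhost:8080/v1" // assumed local server address

// post sends a JSON body to an OpenAI-style endpoint and returns the raw reply.
func post(path, body string) string {
	resp, err := http.Post(base+path, "application/json", bytes.NewBufferString(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	b, _ := io.ReadAll(resp.Body)
	return string(b)
}

func main() {
	// Two different model types served by the same process:
	fmt.Println(post("/chat/completions",
		`{"model":"llama-3","messages":[{"role":"user","content":"hi"}]}`))
	fmt.Println(post("/embeddings",
		`{"model":"all-minilm","input":"hello world"}`))
}
```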