Image generation with Stable Diffusion and latent diffusion models
Run frontier LLMs and VLMs with day-0 model support across GPU, NPU, and CPU, with comprehensive runtime coverage for PC (Python/C++), mobile (Android & iOS), and Linux/IoT (Arm64 & x86 Docker). Supported models include OpenAI GPT-OSS, IBM Granite-4, Qwen-3-VL, Gemma-3n, Ministral-3, and more.
Unique: The image generation plugin architecture separates text encoding (CLIP), latent diffusion, and VAE decoding into independent stages, enabling hardware-specific routing (e.g., text encoding on NPU, diffusion on GPU, VAE decoding on CPU) to optimize for heterogeneous devices.
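The staged routing described above can be sketched roughly as follows. This is a minimal illustrative example, not the framework's actual API: the stage names, device labels, and the `Stage`/`make_pipeline` helpers are all hypothetical, and the real implementation would dispatch each stage to an actual NPU/GPU/CPU backend rather than a Python callable.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Stage:
    """One pipeline stage bound to a target device (hypothetical sketch)."""
    name: str
    device: str  # e.g. "npu", "gpu", "cpu"
    run: Callable[[Any], Any]

def make_pipeline(stages: List[Stage]) -> Callable[[Any], Any]:
    """Chain independent stages; each could execute on different hardware."""
    def pipeline(x: Any) -> Any:
        for stage in stages:
            x = stage.run(x)  # real code would transfer tensors between devices
        return x
    return pipeline

# Illustrative routing mirroring the text: CLIP on NPU, diffusion on GPU, VAE on CPU.
stages = [
    Stage("text_encoder", "npu", lambda prompt: f"emb({prompt})"),
    Stage("diffusion",    "gpu", lambda emb: f"latents({emb})"),
    Stage("vae_decoder",  "cpu", lambda lat: f"image({lat})"),
]
generate = make_pipeline(stages)
print(generate("a cat"))  # image(latents(emb(a cat)))
```

Because each stage is an independent unit, swapping a stage's `device` (say, moving diffusion to NPU on supported silicon) changes the routing without touching the rest of the pipeline.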
vs others: The only on-device image generation framework with NPU acceleration for the text-encoding and diffusion stages. Ollama offers no image generation, and Stable Diffusion WebUI runs on GPU only, leaving this as the sole edge-ready option.