LiquidAI: LFM2.5-1.2B-Instruct (free)
LFM2.5-1.2B-Instruct is a compact, high-performance instruction-tuned model built for fast on-device AI. It delivers strong chat quality in a 1.2B-parameter footprint, with efficient edge inference and broad runtime support.
Unique: Instruction tuning optimized specifically for the 1.2B-parameter scale. Instruction-following is achieved through supervised fine-tuning rather than in-context learning, which makes the model more reliable for edge deployment, where the context window is limited.
vs others: More reliable instruction-following than base models, thanks to explicit fine-tuning, but less flexible than larger models (7B+) that can learn instructions from in-context examples. Better suited to fixed instruction sets than to dynamic prompt engineering.
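To make the system-prompting pattern concrete, here is a minimal sketch of how a system prompt is combined with a user turn for an instruction-tuned chat model. The role/content message shape matches common chat APIs; the ChatML-style tags are a generic illustration, not this model's documented template (the real template ships with the model's tokenizer).

```python
# Sketch: system prompt + user turn for an instruction-tuned chat model.
# The ChatML-style tags below are an assumed, generic illustration; the
# model's actual chat template is defined by its tokenizer.

def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Assemble the chat turns most chat templates expect."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

def render_chatml(messages: list[dict]) -> str:
    """Render messages with generic ChatML-style tags, leaving the
    assistant turn open so the model continues from there."""
    rendered = "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )
    return rendered + "<|im_start|>assistant\n"
```

In practice, with a runtime such as Hugging Face transformers, the same messages list would be passed to the tokenizer's `apply_chat_template` rather than rendered by hand, so the correct template is applied automatically.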