Capability
PEFT LoRA Fine-Tuning Integration for Quantized Models
20 artifacts provide this capability.
via “fine-tuning and parameter-efficient adaptation (LoRA/QLoRA)”
Text-generation model. 10,591,422 downloads.
Unique: Qwen2.5-1.5B's small size makes it ideal for LoRA fine-tuning on consumer hardware; the model's instruction-tuning baseline reduces the amount of task-specific data needed for effective adaptation. QLoRA support enables fine-tuning on 4GB GPUs, democratizing model customization.
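A minimal sketch of what a QLoRA setup for this model could look like with Hugging Face `transformers` and `peft`. The specific hyperparameters (rank, alpha, target modules) are illustrative assumptions, not recommendations from the model card:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization keeps the frozen base weights small enough
# for low-VRAM GPUs; compute happens in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-1.5B-Instruct",
    quantization_config=bnb_config,
)
model = prepare_model_for_kbit_training(model)

# Illustrative LoRA hyperparameters; tune rank and target modules per task.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Only the small LoRA adapter matrices are trained; the quantized base model stays frozen, which is what brings the memory footprint down to consumer-GPU range.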
vs others: LoRA fine-tuning is 10-100x faster and cheaper than full fine-tuning of larger models; QLoRA enables fine-tuning on consumer GPUs where 7B+ models would require enterprise hardware.
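The cost gap above follows from simple parameter arithmetic. This sketch, using a hypothetical 1536-wide weight matrix and rank 16 (illustrative numbers, not from the model card), shows why LoRA trains orders of magnitude fewer parameters than full fine-tuning:

```python
def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters added by one LoRA adapter pair (A: d_in x r, B: r x d_out)."""
    return d_in * rank + rank * d_out

def full_params(d_in: int, d_out: int) -> int:
    """Parameters updated when fully fine-tuning the same weight matrix."""
    return d_in * d_out

d = 1536   # hypothetical hidden size
r = 16     # hypothetical LoRA rank
lora = lora_params(d, d, r)
full = full_params(d, d)
print(lora, full, full // lora)  # 49152 2359296 48
```

With these assumed shapes, the adapter has 48x fewer trainable parameters per matrix, consistent with the 10-100x range quoted above; the exact factor depends on the layer shapes and chosen rank.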