Capability
Adapter-Based Parameter-Efficient Fine-Tuning for LLMs and Speech Models
20 artifacts provide this capability.
via “fine-tuning and parameter-efficient adaptation”
text-generation model by Meta AI. 7,029,937 downloads.
Unique: OPT's small size (125M parameters) makes full fine-tuning accessible on consumer hardware, and its permissive license allows commercial fine-tuning without restriction, unlike some proprietary models. PEFT integration provides LoRA and prefix-tuning out of the box.
vs others: Easier to fine-tune than GPT-3 (no API restrictions, full weight access), but adapted models are lower quality than those built on larger bases; better suited to cost-sensitive fine-tuning than to quality-critical applications.
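The LoRA technique mentioned above freezes the pretrained weights and learns only a low-rank correction, which is why adapter fine-tuning is so cheap on small models like OPT-125M. Below is a minimal NumPy sketch of the LoRA update itself, not the PEFT library's API; the dimensions (768 matches OPT-125M's hidden size), rank, and scaling factor are illustrative assumptions.

```python
import numpy as np

# LoRA: instead of updating a full weight matrix W (d_out x d_in),
# learn a low-rank update B @ A with rank r << min(d_out, d_in).
# Adapted forward pass: y = (W + (alpha / r) * B @ A) @ x.

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 768, 768, 8, 16  # illustrative; 768 = OPT-125M hidden size

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    # Base path plus scaled low-rank correction; W itself is never modified.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapter starts as an exact no-op.
assert np.allclose(lora_forward(x), W @ x)

full_params = W.size            # what full fine-tuning would train
lora_params = A.size + B.size   # what LoRA trains for this matrix
print(f"full fine-tuning: {full_params:,} params per matrix")
print(f"LoRA (r={r}): {lora_params:,} params ({100 * lora_params / full_params:.1f}%)")
```

For one 768x768 projection this trains roughly 2% of the weights, which is the source of the consumer-hardware claim above; in practice the PEFT library wraps this same construction around selected attention projections via `LoraConfig` and `get_peft_model`.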