PEFTFramework (44/100)
Parameter-efficient fine-tuning — LoRA, QLoRA, and adapter methods for LLMs on consumer GPUs.
Unique: Implements prompt and prefix tuning by freezing all model weights and training only small sets of learnable embedding vectors, either prepended to the input embeddings (prompt tuning) or injected into each layer's hidden states (prefix tuning). Because no weights are modified at all, trainable parameters drop to the thousands, versus millions for LoRA.
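A minimal sketch of the prompt-tuning mechanism in PyTorch. This is illustrative, not the framework's actual API: `SoftPrompt`, `attach_prompt_tuning`, and the 20-token / 4096-dim shapes are assumptions; the key idea is that only the virtual-token embeddings carry gradients while the base model stays frozen.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Learnable virtual-token embeddings prepended to the input sequence."""

    def __init__(self, num_virtual_tokens: int, hidden_size: int):
        super().__init__()
        # The only trainable parameters: num_virtual_tokens x hidden_size.
        self.prompt = nn.Parameter(torch.randn(num_virtual_tokens, hidden_size) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden) from the frozen embedding layer.
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # Virtual tokens are concatenated in front of the real tokens.
        return torch.cat([prompt, input_embeds], dim=1)

def attach_prompt_tuning(model: nn.Module, num_virtual_tokens: int = 20,
                         hidden_size: int = 4096) -> SoftPrompt:
    """Freeze every base-model weight; only the soft prompt will train."""
    for p in model.parameters():
        p.requires_grad = False
    return SoftPrompt(num_virtual_tokens, hidden_size)
```

Prefix tuning follows the same pattern, but instead of extending the input sequence it inserts the learned vectors into each attention layer's hidden states, so the prefixes influence every layer directly.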
vs others: Trains a 10-100x smaller parameter count than LoRA (thousands versus millions), typically at the cost of a 5-15% drop in task performance, making it the right choice when even LoRA's footprint is too large.
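The 10-100x gap can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes a 7B-class model (hidden size 4096, 32 layers) with LoRA rank 8 applied to the four attention projections; these are common defaults chosen for illustration, not numbers taken from the framework.

```python
# Rough parameter-count comparison behind the 10-100x claim.
# All shapes below are assumptions for a 7B-class model, not measured values.
hidden = 4096
layers = 32
rank = 8
targets = 4  # q/k/v/o projections per layer, a common LoRA target set

# LoRA: two low-rank factors (hidden x rank each) per targeted matrix.
lora_params = layers * targets * 2 * hidden * rank  # 8,388,608

# Prompt tuning: one embedding per virtual token, no per-layer weights.
virtual_tokens = 20
prompt_params = virtual_tokens * hidden  # 81,920

print(f"LoRA:   {lora_params:,}")
print(f"Prompt: {prompt_params:,}")
print(f"Ratio:  {lora_params / prompt_params:.0f}x")  # ~102x
```

Under these assumptions the ratio lands at roughly 102x, at the top of the quoted 10-100x range; fewer virtual tokens or more LoRA target modules shift it within that band.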