A collection of methods that have been implemented in the 🤗 PEFT library
- Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning
  Paper • 2303.10512
- Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
  Paper • 2205.05638
- LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
  Paper • 2303.16199
- FedPara: Low-Rank Hadamard Product for Communication-Efficient Federated Learning
  Paper • 2108.06098
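
Each of these methods is exposed in PEFT as a config class that you pass to `get_peft_model` to wrap a base Transformers model. As a minimal sketch, the snippet below applies AdaLoRA (the first paper above) to a causal language model; the base checkpoint, target modules, and hyperparameter values are placeholder assumptions, not tuned recommendations.

```python
from transformers import AutoModelForCausalLM
from peft import AdaLoraConfig, get_peft_model

# Placeholder base model; swap in the checkpoint you actually fine-tune.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# AdaLoRA adaptively reallocates the rank budget across weight matrices
# during training; init_r/target_r and the module list here are illustrative.
config = AdaLoraConfig(
    init_r=12,
    target_r=4,
    lora_alpha=32,
    lora_dropout=0.05,
    total_step=1000,  # assumed total number of training steps
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()
```

The other methods in this collection follow the same pattern with their own config classes (e.g. `IA3Config`, `AdaptionPromptConfig`, `LoHaConfig`), differing only in the adapter-specific arguments they accept.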