LoRA with code using PEFT (parameter-efficient fine-tuning)
LoRA (Low-Rank Adaptation)
LoRA adds small low-rank matrices to specific layers and trains only those, which sharply reduces the number of trainable parameters and makes fine-tuning efficient.
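To see why this helps, consider a single 768x768 attention weight matrix in BERT-base: full fine-tuning updates every entry, while a rank-8 LoRA update trains only two thin factor matrices. A rough back-of-the-envelope sketch (the numbers are illustrative, not taken from the code below):
d, r = 768, 8
full_params = d * d          # 589,824 parameters updated by full fine-tuning
lora_params = d * r + r * d  # 12,288 parameters updated by LoRA (B is d x r, A is r x d)
print(full_params, lora_params, full_params / lora_params)  # roughly a 48x reduction for this layer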
Code:
Please install these libraries first (PyTorch is also required):
pip install peft
pip install datasets
pip install transformers
pip install accelerate
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, TaskType, get_peft_model
from datasets import load_dataset
# Loading the pre-trained BERT model and its tokenizer
model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
# Configuring the LoRA parameters
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,         # sequence classification task
    r=8,                                # rank of the low-rank update matrices
    lora_alpha=16,                      # scaling factor for the LoRA updates
    lora_dropout=0.1,                   # dropout applied to the LoRA layers
    bias="none",                        # keep bias terms frozen
    target_modules=["query", "value"],  # BERT attention projections to adapt
)
# Applying LoRA to the model; only the LoRA matrices (and the new
# classification head) remain trainable
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Loading the SST-2 sentiment classification dataset and tokenizing it
dataset = load_dataset("glue", "sst2")
def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)
train_dataset = dataset["train"].map(tokenize, batched=True)
# Setting the training arguments
training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    logging_dir="./logs",
)
# Creating a Trainer instance for fine-tuning
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)
# Finally we can fine-tune the model
trainer.train()
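After training, only the small LoRA adapter needs to be saved rather than the full model. A minimal sketch, assuming we write it to a hypothetical ./lora-sst2 directory:
# Save only the LoRA adapter weights (a few MB), not the full BERT model
model.save_pretrained("./lora-sst2")  # hypothetical output directory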
Because LoRA fine-tunes only a small portion of the model, training overhead drops: far fewer parameters are updated and stored.
This gives efficient fine-tuning with minimal impact on accuracy, and it is especially useful for large models where full fine-tuning is impractical.
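For later inference, the saved adapter can be attached to a freshly loaded base model. A minimal sketch, assuming the adapter was saved to ./lora-sst2 as above:
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel
import torch
# Reload the base model and attach the saved LoRA adapter
base_model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model = PeftModel.from_pretrained(base_model, "./lora-sst2")  # hypothetical adapter path
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Classify one example sentence
inputs = tokenizer("a touching and well acted film", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # SST-2 labels: 0 = negative, 1 = positive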