Gemma 9B Pretrained Translation Models
This is a collection of five models fine-tuned for translation, based on the pretrained Gemma 2 9B model.
This repository contains the LoRA adapters used for fine-tuning. The adapters were trained with the following configuration:
from peft import LoraConfig

# LoRA configuration for the adapters in this repository: rank-128 adapters
# on every attention and MLP projection plus the embedding and output layers.
peft_config = LoraConfig(
    r=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj",
                    "embed_tokens", "lm_head"],
    lora_alpha=32,
    bias="none",
)
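As a rough sketch of how a configuration like this is typically attached to the base model for training (the surrounding training code is not part of this repository, so the calls below are assumptions based on the standard transformers and peft APIs):

from transformers import AutoModelForCausalLM
from peft import get_peft_model

# Load the frozen base model and wrap it with the LoRA config defined above.
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b")
model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()  # only the adapter weights are trainable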
Base model: google/gemma-2-9b
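To run one of the translation models from this collection, the adapters can be loaded on top of the base model with peft. A minimal sketch follows; the adapter repository id and the prompt format are placeholders, not values taken from this collection:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-2-9b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# "<adapter-repo-id>" is a placeholder for one of the items in this collection.
model = PeftModel.from_pretrained(base_model, "<adapter-repo-id>")

# Example prompt; the exact prompt format expected by these models is an assumption.
inputs = tokenizer("Translate to French: Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))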