LoRA Adapters

This repository contains the LoRA adapter weights produced by fine-tuning Bronsn/gemma-9b-luganda-pretrained for Luganda–English translation.

Details

  • Base model: Bronsn/gemma-9b-luganda-pretrained (itself derived from google/gemma-2-9b)
  • Contents: LoRA adapter weights only, not a merged full model
  • Loading: compatible with the Hugging Face PEFT library
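Since the repository ships adapter weights only, they need to be attached to the base model at load time. A minimal loading sketch, assuming the adapter repo id Bronsn/luganda-english-translation-lora and the base model named above; the prompt template is a hypothetical illustration, not the format the adapter was necessarily trained with:

```python
def format_prompt(text: str) -> str:
    # Hypothetical prompt template -- adjust to match whatever
    # format was actually used during fine-tuning.
    return f"Translate from Luganda to English:\n{text}\n"


def load_translation_model(adapter_id: str = "Bronsn/luganda-english-translation-lora"):
    # Imports are kept inside the function so the sketch can be read
    # and imported without transformers/peft installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    # The adapter's own base model, as listed under Details above.
    base_id = "Bronsn/gemma-9b-luganda-pretrained"
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
    # Attach the LoRA weights on top of the frozen base model.
    model = PeftModel.from_pretrained(base, adapter_id)
    return tokenizer, model
```

Note that loading a 9B-parameter base model requires substantial memory; quantized loading (e.g. via bitsandbytes) is a common workaround.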

Configuration

from peft import LoraConfig

peft_config = LoraConfig(
    r=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj",
                    "embed_tokens", "lm_head"],
    lora_alpha=32,
    bias="none",
)
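For training, the configuration above is applied to the base model with PEFT's get_peft_model. A short sketch, assuming peft and transformers are installed (the function name build_trainable_model is illustrative, not part of this repository):

```python
# Target modules copied from the configuration above.
TARGET_MODULES = [
    "q_proj", "k_proj", "v_proj", "o_proj",
    "gate_proj", "up_proj", "down_proj",
    "embed_tokens", "lm_head",
]


def build_trainable_model(base_id: str = "Bronsn/gemma-9b-luganda-pretrained"):
    # Lazy imports so the sketch stays importable without peft/transformers.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    peft_config = LoraConfig(
        r=128,
        target_modules=TARGET_MODULES,
        lora_alpha=32,
        bias="none",
    )
    base = AutoModelForCausalLM.from_pretrained(base_id)
    model = get_peft_model(base, peft_config)
    model.print_trainable_parameters()  # sanity check: only adapter params train
    return model
```

Targeting embed_tokens and lm_head in addition to the attention and MLP projections means the embedding and output layers also receive low-rank updates, which is a heavier but more expressive setup than attention-only LoRA.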