---
language:
- fi
library_name: peft
base_model: mpasila/gpt3-finnish-8B-gptq-4bit
license: apache-2.0
datasets:
- Finnish-NLP/Capybara-fi-deepl-translated-sft
- mpasila/Capybara-fi-deepl-translated-sft-alpaca
---

# Model Card for Capybara-Finnish-V1-8B-LoRA

This is a LoRA adapter trained on top of [mpasila/gpt3-finnish-8B-gptq-4bit](https://huggingface.co/mpasila/gpt3-finnish-8B-gptq-4bit/), a 4-bit GPTQ quantization of [TurkuNLP/gpt3-finnish-8B](https://huggingface.co/TurkuNLP/gpt3-finnish-8B/).

The training data is [Finnish-NLP/Capybara-fi-deepl-translated-sft](https://huggingface.co/datasets/Finnish-NLP/Capybara-fi-deepl-translated-sft/), modified to use Alpaca formatting ([modified dataset](https://huggingface.co/datasets/mpasila/Capybara-fi-deepl-translated-sft-alpaca/)).

The prompt template is the Alpaca format, but with the preamble translated into Finnish; the two preambles below are Finnish translations of the standard Alpaca instructions ("Below is an instruction that describes a task. Write a response that appropriately completes the request." and its with-input variant). An inference sketch using this template is given at the end of this card.

```
{
    "instruction,output": "Alla on ohje, jossa kuvataan tehtävä. Kirjoita vastaus, joka täyttää pyynnön asianmukaisesti.\n\n### Instruction:\n%instruction%\n\n### Response:\n%output%",
    "instruction,input,output": "Alla on ohje, jossa kuvataan tehtävä ja joka on yhdistetty kontekstia lisäävään syötteeseen. Kirjoita vastaus, joka täyttää pyynnön asianmukaisesti.\n\n### Instruction:\n%instruction%\n\n### Input:\n%input%\n\n### Response:\n%output%"
}
```

The LoRA was trained with the following settings (a rough PEFT `LoraConfig` equivalent is sketched at the end of this card):

```json
{
    "lora_name": "Capybara_Finnish_V1",
    "always_override": false,
    "q_proj_en": true,
    "v_proj_en": true,
    "k_proj_en": false,
    "o_proj_en": false,
    "gate_proj_en": false,
    "down_proj_en": false,
    "up_proj_en": false,
    "save_steps": 250.0,
    "micro_batch_size": 4,
    "batch_size": 128,
    "epochs": 3.0,
    "learning_rate": "3e-4",
    "lr_scheduler_type": "linear",
    "lora_rank": 32,
    "lora_alpha": 64,
    "lora_dropout": 0.05,
    "cutoff_len": 256,
    "dataset": "capybara_finnish_v1.1",
    "eval_dataset": "None",
    "format": "alpaca-format-finnish",
    "eval_steps": 100.0,
    "raw_text_file": "None",
    "overlap_len": 128,
    "newline_favor_len": 128,
    "higher_rank_limit": false,
    "warmup_steps": 100.0,
    "optimizer": "adamw_torch",
    "hard_cut_string": "\\n\\n\\n",
    "train_only_after": "",
    "stop_at_loss": 0,
    "add_eos_token": false,
    "min_chars": 0.0,
    "report_to": "None"
}
```

### Framework versions

- PEFT 0.8.2
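
### Inference example

The following is a minimal, untested sketch of how this adapter could be loaded and prompted with the Finnish Alpaca template shown above. It assumes the adapter repository is named `mpasila/Capybara-Finnish-V1-8B-LoRA` and that a GPTQ backend (e.g. `auto-gptq` together with `optimum`) is installed so the 4-bit base model can be loaded.

```python
# Minimal inference sketch (untested). Assumes the adapter repo name below
# and that a GPTQ backend (auto-gptq + optimum) is available for the 4-bit base.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mpasila/gpt3-finnish-8B-gptq-4bit"
adapter_id = "mpasila/Capybara-Finnish-V1-8B-LoRA"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

# Build a prompt in the Finnish Alpaca format (no-input variant).
instruction = "Kerro lyhyesti Suomen historiasta."  # "Briefly tell about the history of Finland."
prompt = (
    "Alla on ohje, jossa kuvataan tehtävä. "
    "Kirjoita vastaus, joka täyttää pyynnön asianmukaisesti.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Print only the newly generated continuation.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```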
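
### Equivalent PEFT configuration (sketch)

The adapter hyperparameters listed above map roughly onto a PEFT `LoraConfig` like the one below. This is only an illustrative sketch, not the exact configuration used to train this adapter; in particular, the `q_proj`/`v_proj` names are taken from the settings JSON above, while the BLOOM-based base model actually exposes a fused `query_key_value` projection, so the real target module names may differ.

```python
# Illustrative PEFT equivalent of the LoRA hyperparameters listed above
# (a sketch, not the exact training configuration for this adapter).
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    r=32,               # lora_rank
    lora_alpha=64,      # lora_alpha
    lora_dropout=0.05,  # lora_dropout
    bias="none",
    task_type=TaskType.CAUSAL_LM,
    # Enabled projections per the settings above; the BLOOM-based base model
    # uses a fused "query_key_value" module, so actual names may differ.
    target_modules=["q_proj", "v_proj"],
)
```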