---
library_name: peft
base_model: Open-Orca/Mistral-7B-SlimOrca
license: mit
datasets:
- noxneural/kashaloti
language:
- sq
---
# Model Card for Mistral-7B-SlimOrca Albanian qLoRA (v0.1)
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
Version 1 of 71: a qLoRA finetune of Mistral-7B-SlimOrca on 1/71 of the GPT-4-generated portion of the Orca dataset, approximately 14k records out of 1 million in total.
### Model Description
- **Developed by:** Marlind Maksuti @ StochastX
- **Model type:** Mistral-7B
- **Language(s) (NLP):** Albanian (Shqip)
- **Finetuned from model:** Mistral-7B-SlimOrca
### Model Sources
- **Repository:** https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca
## Uses
Text generation in Albanian.
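A minimal loading and generation sketch follows. The adapter repo ID is a placeholder (this card does not state it), and the dtype and generation settings are illustrative assumptions rather than recommended values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Open-Orca/Mistral-7B-SlimOrca"
adapter_id = "<this-adapter-repo-id>"  # placeholder: substitute the actual PEFT adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,  # assumption: fp16 inference on a single GPU
    device_map="auto",
)

# Attach the LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Albanian prompt: "Write a short summary about the city of Tirana."
prompt = "Shkruaj një përmbledhje të shkurtër për qytetin e Tiranës."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```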
## Bias, Risks, and Limitations
This model is an early release (v0.1); outputs are not yet optimal.
## Training procedure
### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
The following `bitsandbytes` quantization config was used during training (a code sketch of the equivalent setup follows the list):
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
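The listed values correspond to a `transformers.BitsAndBytesConfig` roughly like the sketch below. The 4-bit fields are recorded defaults (4-bit loading is disabled here), and the base-model loading call is an assumption about how the config was applied, not a statement from this card.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",          # recorded default; 4-bit loading is disabled
    bnb_4bit_use_double_quant=True,     # recorded default
    bnb_4bit_compute_dtype=torch.float16,
)

# Assumption: the base model was loaded in 8-bit with this config before attaching LoRA adapters.
model = AutoModelForCausalLM.from_pretrained(
    "Open-Orca/Mistral-7B-SlimOrca",
    quantization_config=bnb_config,
    device_map="auto",
)
```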
### Framework versions
- PEFT 0.6.0.dev0