---
base_model: unsloth/mistral-7b-v0.3
library_name: peft
license: apache-2.0
tags:
- unsloth
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.3_pct_default_r16
  results: []
---

# Mistral-7B-v0.3_pct_default_r16

This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0180

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9646 | 0.0206 | 8 | 2.0348 |
| 2.0531 | 0.0413 | 16 | 2.0345 |
| 2.1168 | 0.0619 | 24 | 2.0464 |
| 2.0712 | 0.0825 | 32 | 2.0462 |
| 2.0779 | 0.1032 | 40 | 2.0510 |
| 2.0905 | 0.1238 | 48 | 2.0476 |
| 2.0624 | 0.1445 | 56 | 2.0475 |
| 2.0793 | 0.1651 | 64 | 2.0428 |
| 2.0559 | 0.1857 | 72 | 2.0521 |
| 2.0597 | 0.2064 | 80 | 2.0680 |
| 2.1235 | 0.2270 | 88 | 2.0849 |
| 2.14 | 0.2476 | 96 | 2.0772 |
| 2.1586 | 0.2683 | 104 | 2.0880 |
| 2.0974 | 0.2889 | 112 | 2.0837 |
| 2.1577 | 0.3096 | 120 | 2.0838 |
| 2.0998 | 0.3302 | 128 | 2.0899 |
| 2.1069 | 0.3508 | 136 | 2.0882 |
| 2.1621 | 0.3715 | 144 | 2.0846 |
| 2.1441 | 0.3921 | 152 | 2.0949 |
| 2.1355 | 0.4127 | 160 | 2.0859 |
| 2.084 | 0.4334 | 168 | 2.0871 |
| 2.1649 | 0.4540 | 176 | 2.0845 |
| 2.0651 | 0.4746 | 184 | 2.0719 |
| 2.1708 | 0.4953 | 192 | 2.0722 |
| 2.1311 | 0.5159 | 200 | 2.0677 |
| 2.1038 | 0.5366 | 208 | 2.0627 |
| 2.0804 | 0.5572 | 216 | 2.0757 |
| 2.0695 | 0.5778 | 224 | 2.0649 |
| 2.0961 | 0.5985 | 232 | 2.0643 |
| 2.0808 | 0.6191 | 240 | 2.0567 |
| 2.1337 | 0.6397 | 248 | 2.0557 |
| 2.0565 | 0.6604 | 256 | 2.0555 |
| 2.1184 | 0.6810 | 264 | 2.0497 |
| 2.0604 | 0.7017 | 272 | 2.0412 |
| 2.1099 | 0.7223 | 280 | 2.0384 |
| 2.1048 | 0.7429 | 288 | 2.0415 |
| 2.0692 | 0.7636 | 296 | 2.0340 |
| 2.0489 | 0.7842 | 304 | 2.0331 |
| 2.057 | 0.8048 | 312 | 2.0275 |
| 2.0485 | 0.8255 | 320 | 2.0224 |
| 2.0364 | 0.8461 | 328 | 2.0202 |
| 2.014 | 0.8667 | 336 | 2.0240 |
| 2.0656 | 0.8874 | 344 | 2.0236 |
| 2.0473 | 0.9080 | 352 | 2.0197 |
| 2.0279 | 0.9287 | 360 | 2.0180 |
| 2.0415 | 0.9493 | 368 | 2.0178 |
| 2.0419 | 0.9699 | 376 | 2.0178 |
| 2.0597 | 0.9906 | 384 | 2.0180 |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
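### Reproducing the training configuration

The training script is not published with this card, so the following is a minimal, hypothetical sketch of how the hyperparameters listed above map onto `transformers.TrainingArguments`. The PEFT settings (the `_r16` suffix in the model name suggests a LoRA rank of 16, but that is an assumption) and the dataset pipeline are not reconstructed here.

```python
# A hypothetical sketch mapping the listed hyperparameters onto
# transformers.TrainingArguments; not the actual training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Mistral-7B-v0.3_pct_default_r16",
    learning_rate=1e-4,              # learning_rate: 0.0001
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=2,    # eval_batch_size: 2
    seed=42,
    gradient_accumulation_steps=32,  # 2 x 32 on one device = total batch 64
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,               # lr_scheduler_warmup_ratio: 0.02
    num_train_epochs=1,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the
    # TrainingArguments defaults, so no optimizer kwargs are needed.
)
```

Note that `total_train_batch_size: 64` is consistent with the per-device batch size of 2 times 32 accumulation steps on a single device.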
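## How to use

This repository holds a PEFT adapter rather than full model weights, so it must be loaded on top of the base model. Below is a minimal inference sketch; the adapter repo id is a placeholder for wherever these weights are actually hosted.

```python
# Minimal inference sketch. "your-namespace/..." is a placeholder
# repo id, not a confirmed location for this adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/mistral-7b-v0.3"
adapter_id = "your-namespace/Mistral-7B-v0.3_pct_default_r16"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```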