---
license: apache-2.0
library_name: peft
tags:
  - unsloth
  - generated_from_trainer
base_model: mistralai/Mistral-7B-v0.3
model-index:
  - name: mistral_7b_v_MetaMathQA_40K_reverse
    results: []
---

mistral_7b_v_MetaMathQA_40K_reverse

This model is a PEFT fine-tune of mistralai/Mistral-7B-v0.3. The auto-generated card does not record the training dataset (see "Training and evaluation data" below), although the model name points to a 40K-sample MetaMathQA subset. It achieves the following results on the evaluation set:

  • Loss: 0.4730

Model description

This repository holds a PEFT adapter for mistralai/Mistral-7B-v0.3, trained with Unsloth (see the tags above); the base model weights are not included. Further details from the author are still needed.
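As a quick orientation, the snippet below shows one way to load this adapter on top of the base model with peft and transformers. It is a minimal sketch, not an official usage guide: the repo id imdatta0/mistral_7b_v_MetaMathQA_40K_reverse and the example prompt are assumptions, not taken from the original card.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the adapter weights on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.3",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.3")
model = PeftModel.from_pretrained(
    base,
    "imdatta0/mistral_7b_v_MetaMathQA_40K_reverse",  # assumed repo id
)

# Hypothetical prompt; the card does not document a prompt format.
prompt = "What is 15% of 240?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```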

Intended uses & limitations

More information needed

Training and evaluation data

More information needed. The model name suggests a 40K-example split of MetaMathQA (with some "reverse" variant), but the dataset is not documented in the card itself.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 0.02 (a fractional value; presumably intended as a warmup ratio of 2%)
  • num_epochs: 1
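
For reference, the list above maps onto transformers `TrainingArguments` roughly as follows. This is a hedged reconstruction rather than the author's actual training script; in particular, the 0.02 warmup value is treated here as `warmup_ratio`, and `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Approximate reconstruction of the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="mistral_7b_v_MetaMathQA_40K_reverse",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # 8 x 8 = effective batch size of 64
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,  # the card lists 0.02 "warmup steps"; a ratio seems intended
    num_train_epochs=1,
    seed=42,
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults,
    # matching the optimizer settings reported above.
)
```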

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8354        | 0.0211 | 13   | 0.8810          |
| 1.4577        | 0.0421 | 26   | 1.4281          |
| 1.0366        | 0.0632 | 39   | 0.9662          |
| 0.9024        | 0.0842 | 52   | 0.7634          |
| 0.694         | 0.1053 | 65   | 0.7062          |
| 0.665         | 0.1264 | 78   | 0.6924          |
| 0.6381        | 0.1474 | 91   | 0.6665          |
| 0.6481        | 0.1685 | 104  | 0.6725          |
| 0.6394        | 0.1896 | 117  | 0.6697          |
| 0.6486        | 0.2106 | 130  | 0.6728          |
| 0.6381        | 0.2317 | 143  | 0.6631          |
| 0.619         | 0.2527 | 156  | 0.6470          |
| 0.6245        | 0.2738 | 169  | 0.6530          |
| 0.6233        | 0.2949 | 182  | 0.6445          |
| 0.6225        | 0.3159 | 195  | 0.6372          |
| 0.6105        | 0.3370 | 208  | 0.6283          |
| 0.5865        | 0.3580 | 221  | 0.6180          |
| 0.5913        | 0.3791 | 234  | 0.6104          |
| 0.5769        | 0.4002 | 247  | 0.6011          |
| 0.586         | 0.4212 | 260  | 0.6021          |
| 0.5945        | 0.4423 | 273  | 0.5921          |
| 0.57          | 0.4633 | 286  | 0.5869          |
| 0.5636        | 0.4844 | 299  | 0.5772          |
| 0.5563        | 0.5055 | 312  | 0.5713          |
| 0.5516        | 0.5265 | 325  | 0.5655          |
| 0.5505        | 0.5476 | 338  | 0.5615          |
| 0.5421        | 0.5687 | 351  | 0.5520          |
| 0.5225        | 0.5897 | 364  | 0.5431          |
| 0.5207        | 0.6108 | 377  | 0.5374          |
| 0.5163        | 0.6318 | 390  | 0.5351          |
| 0.5169        | 0.6529 | 403  | 0.5262          |
| 0.5023        | 0.6740 | 416  | 0.5203          |
| 0.483         | 0.6950 | 429  | 0.5153          |
| 0.4999        | 0.7161 | 442  | 0.5074          |
| 0.487         | 0.7371 | 455  | 0.5027          |
| 0.4971        | 0.7582 | 468  | 0.4985          |
| 0.4875        | 0.7793 | 481  | 0.4937          |
| 0.4881        | 0.8003 | 494  | 0.4904          |
| 0.4753        | 0.8214 | 507  | 0.4869          |
| 0.4609        | 0.8424 | 520  | 0.4825          |
| 0.4657        | 0.8635 | 533  | 0.4794          |
| 0.4563        | 0.8846 | 546  | 0.4776          |
| 0.4738        | 0.9056 | 559  | 0.4751          |
| 0.4685        | 0.9267 | 572  | 0.4743          |
| 0.4539        | 0.9478 | 585  | 0.4735          |
| 0.4606        | 0.9688 | 598  | 0.4731          |
| 0.457         | 0.9899 | 611  | 0.4730          |

Framework versions

  • PEFT 0.7.1
  • Transformers 4.40.2
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1
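
If you need a standalone checkpoint rather than a base-model-plus-adapter pair, peft can merge the adapter weights into the base model. This is again a sketch, under the assumptions that the adapter is a LoRA-style adapter (merge_and_unload only works for mergeable adapter types) and that it is hosted at imdatta0/mistral_7b_v_MetaMathQA_40K_reverse; the output directory name is a placeholder.

```python
from peft import AutoPeftModelForCausalLM

# Loads the base model and adapter together, then folds the adapter
# weights into the base weights, yielding a plain causal LM.
model = AutoPeftModelForCausalLM.from_pretrained(
    "imdatta0/mistral_7b_v_MetaMathQA_40K_reverse"  # assumed repo id
)
merged = model.merge_and_unload()
merged.save_pretrained("mistral-7b-metamathqa-merged")  # placeholder path
```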