# phi-1_5-finetuned-qlora-cluster-gsm8k-v2
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5), adapted with QLoRA on GSM8K (per the model name; the dataset field was not recorded in the card metadata). It achieves the following results on the evaluation set:
- Loss: 1.9003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 25
- mixed_precision_training: Native AMP
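A minimal sketch reconstructing the documented settings with the standard `transformers` `Trainer` API. Only the hyperparameters listed above are included; the QLoRA/LoRA adapter configuration (rank, alpha, target modules, quantization settings) is not recorded in this card and is therefore omitted, and `output_dir` is a placeholder.

```python
# Hedged reconstruction of the documented hyperparameters as TrainingArguments.
# LoRA/QLoRA adapter settings are not in the card and are deliberately omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="phi-1_5-finetuned-qlora-cluster-gsm8k-v2",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 8 x 4 = total train batch size 32
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=25,
    fp16=True,                      # "Native AMP" mixed precision
    evaluation_strategy="epoch",    # matches the per-epoch eval losses below
    adam_beta1=0.9,                 # card lists Adam with betas=(0.9, 0.999),
    adam_beta2=0.999,               # eps=1e-08; Trainer's default AdamW
    adam_epsilon=1e-8,              # matches these values
)
```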
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0468        | 1.0   | 233  | 1.1314          |
| 0.9635        | 2.0   | 467  | 1.1069          |
| 0.9293        | 3.0   | 701  | 1.1129          |
| 0.8905        | 4.0   | 935  | 1.1269          |
| 0.8478        | 5.0   | 1168 | 1.1509          |
| 0.7686        | 6.0   | 1402 | 1.1727          |
| 0.7125        | 7.0   | 1636 | 1.2254          |
| 0.6637        | 8.0   | 1870 | 1.2571          |
| 0.6155        | 9.0   | 2103 | 1.3230          |
| 0.574         | 10.0  | 2337 | 1.3985          |
| 0.5273        | 11.0  | 2571 | 1.4532          |
| 0.451         | 12.0  | 2805 | 1.5160          |
| 0.4102        | 13.0  | 3038 | 1.5888          |
| 0.3802        | 14.0  | 3272 | 1.6469          |
| 0.3586        | 15.0  | 3506 | 1.6916          |
| 0.3391        | 16.0  | 3740 | 1.7576          |
| 0.3194        | 17.0  | 3973 | 1.7898          |
| 0.293         | 18.0  | 4207 | 1.8284          |
| 0.2815        | 19.0  | 4441 | 1.8460          |
| 0.2739        | 20.0  | 4675 | 1.8681          |
| 0.2693        | 21.0  | 4908 | 1.8821          |
| 0.2646        | 22.0  | 5142 | 1.8908          |
| 0.2614        | 23.0  | 5376 | 1.8954          |
| 0.2577        | 24.0  | 5610 | 1.8993          |
| 0.2566        | 24.92 | 5825 | 1.9003          |
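Note that validation loss reaches its minimum at epoch 2 (1.1069) and rises monotonically thereafter while training loss continues to fall, a typical overfitting pattern; an earlier checkpoint may generalize better than the final adapter.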
### Framework versions
- PEFT 0.11.1
- Transformers 4.37.2
- PyTorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
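A minimal inference sketch, assuming these framework versions and standard PEFT adapter loading; the GSM8K-style prompt is illustrative, as the exact prompt format used in training is not documented in this card:

```python
# Hedged usage sketch: load the QLoRA adapter on top of microsoft/phi-1_5.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "nikhilajjarapu/phi-1_5-finetuned-qlora-cluster-gsm8k-v2"
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

# Illustrative GSM8K-style prompt (assumed, not taken from the card).
prompt = "Question: A baker makes 3 dozen cookies and sells 20. How many are left?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```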