# peft-dialogue-summary-training-1725334551
This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.3178
## Model description
More information needed
## Intended uses & limitations
More information needed
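The card does not yet document a usage recipe. As a hedged sketch, a PEFT adapter of this kind is typically loaded on top of the stated base model (`microsoft/phi-2`) with `peft.PeftModel.from_pretrained`; the adapter repo id below matches this card's title, and the prompt template is an assumption, since the training data and prompt format are not documented here:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer (microsoft/phi-2, per this card).
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# Attach the fine-tuned PEFT adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(
    base_model, "mahlawat/peft-dialogue-summary-training-1725334551"
)
model.eval()

# The prompt template below is an assumption; the actual training
# template is not documented in this card.
prompt = "Summarize the following conversation.\n\nA: ...\nB: ...\n\nSummary:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```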
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 1000
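The hyperparameters above map onto the Hugging Face `TrainingArguments` API roughly as follows. This is a sketch, not the training script actually used; `output_dir` is a placeholder, and the Adam betas/epsilon shown are the listed (default) values:

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="./peft-dialogue-summary-training",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size: 1 * 4 = 4
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1,
    max_steps=1000,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```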
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6628 | 0.0500 | 25 | 1.3937 |
| 1.1896 | 0.1001 | 50 | 1.3845 |
| 1.4489 | 0.1501 | 75 | 1.3542 |
| 1.2057 | 0.2001 | 100 | 1.3646 |
| 1.4379 | 0.2501 | 125 | 1.3454 |
| 1.1393 | 0.3002 | 150 | 1.3615 |
| 1.4022 | 0.3502 | 175 | 1.3418 |
| 1.1474 | 0.4002 | 200 | 1.3431 |
| 1.4453 | 0.4502 | 225 | 1.3346 |
| 1.2244 | 0.5003 | 250 | 1.3391 |
| 1.4558 | 0.5503 | 275 | 1.3339 |
| 1.165 | 0.6003 | 300 | 1.3358 |
| 1.4326 | 0.6503 | 325 | 1.3316 |
| 1.2013 | 0.7004 | 350 | 1.3313 |
| 1.3984 | 0.7504 | 375 | 1.3290 |
| 1.1834 | 0.8004 | 400 | 1.3331 |
| 1.4393 | 0.8504 | 425 | 1.3267 |
| 1.2 | 0.9005 | 450 | 1.3264 |
| 1.4473 | 0.9505 | 475 | 1.3248 |
| 1.1894 | 1.0005 | 500 | 1.3273 |
| 1.4079 | 1.0505 | 525 | 1.3240 |
| 1.1456 | 1.1006 | 550 | 1.3255 |
| 1.3468 | 1.1506 | 575 | 1.3226 |
| 1.1875 | 1.2006 | 600 | 1.3234 |
| 1.3708 | 1.2506 | 625 | 1.3217 |
| 1.1189 | 1.3007 | 650 | 1.3224 |
| 1.3754 | 1.3507 | 675 | 1.3209 |
| 1.1287 | 1.4007 | 700 | 1.3229 |
| 1.4204 | 1.4507 | 725 | 1.3207 |
| 1.1606 | 1.5008 | 750 | 1.3197 |
| 1.4357 | 1.5508 | 775 | 1.3192 |
| 1.1375 | 1.6008 | 800 | 1.3197 |
| 1.3994 | 1.6508 | 825 | 1.3191 |
| 1.13 | 1.7009 | 850 | 1.3199 |
| 1.4531 | 1.7509 | 875 | 1.3185 |
| 1.1453 | 1.8009 | 900 | 1.3182 |
| 1.3651 | 1.8509 | 925 | 1.3178 |
| 1.1463 | 1.9010 | 950 | 1.3180 |
| 1.3414 | 1.9510 | 975 | 1.3178 |
| 1.1132 | 2.0010 | 1000 | 1.3178 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- PyTorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1