# lyrics-main
This model is a fine-tuned version of `gpt2-medium` on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 3.0601
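
As a quick start, the checkpoint can be loaded with the `transformers` library for text generation. A minimal sketch follows; the hub path `"lyrics-main"` is an assumption based on the model name, so substitute the actual repository id or a local checkpoint directory.

```python
# Minimal generation sketch; "lyrics-main" is an assumed hub path,
# not confirmed by the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lyrics-main")
model = AutoModelForCausalLM.from_pretrained("lyrics-main")

inputs = tokenizer("City lights and", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,                       # sample for lyric-like variety
    top_p=0.95,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```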
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
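
The hyperparameters above map onto `transformers.TrainingArguments` roughly as sketched below. This is an illustrative reconstruction, not the authors' actual training script; `output_dir` is a placeholder.

```python
# Sketch of TrainingArguments reproducing the hyperparameters above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lyrics-main",       # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 8 x 2 = total train batch size 16
    lr_scheduler_type="cosine",
    warmup_steps=500,
    num_train_epochs=3,
    fp16=True,                      # "Native AMP" mixed precision
    optim="adamw_torch",            # AdamW; betas and epsilon are the defaults
)
```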
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.7651        | 0.1550 | 50   | 3.4887          |
| 3.6031        | 0.3101 | 100  | 3.3498          |
| 3.4463        | 0.4651 | 150  | 3.2707          |
| 3.4016        | 0.6202 | 200  | 3.2325          |
| 3.3733        | 0.7752 | 250  | 3.2004          |
| 3.3827        | 0.9302 | 300  | 3.1764          |
| 3.1805        | 1.0853 | 350  | 3.1566          |
| 3.1353        | 1.2403 | 400  | 3.1435          |
| 3.1613        | 1.3953 | 450  | 3.1296          |
| 3.2172        | 1.5504 | 500  | 3.1210          |
| 3.1419        | 1.7054 | 550  | 3.1074          |
| 3.0979        | 1.8605 | 600  | 3.0954          |
| 3.1357        | 2.0155 | 650  | 3.0860          |
| 2.9513        | 2.1705 | 700  | 3.0827          |
| 2.9206        | 2.3256 | 750  | 3.0738          |
| 2.9016        | 2.4806 | 800  | 3.0678          |
| 2.9043        | 2.6357 | 850  | 3.0620          |
| 3.0086        | 2.7907 | 900  | 3.0606          |
| 2.9851        | 2.9457 | 950  | 3.0601          |
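
For context, cross-entropy loss converts to perplexity via exp(loss), so the final validation loss of 3.0601 corresponds to a perplexity of roughly 21.3:

```python
import math

# Convert the final validation loss (cross-entropy, nats/token) to perplexity.
print(math.exp(3.0601))  # ~21.33
```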
### Framework versions

- Transformers 4.46.2
- PyTorch 2.5.1+cu121
- Tokenizers 0.20.3