# gpt2+morf_s0-30-x-2_cx-en_00000-00009_50k
This model is a fine-tuned version of GPT-2 on the English (`en`) split of the uonlp/CulturaX dataset. It achieves the following results on the evaluation set (a brief usage sketch follows the results):
- Loss: 2.8423
- Accuracy: 0.4330
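A minimal usage sketch, assuming the checkpoint and its tokenizer are published under a Hub repository id matching the model name (the owning namespace is not given on this card, so the id below is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id: replace with the actual "namespace/model" path on the Hub,
# or with a local directory containing the checkpoint and tokenizer files.
model_id = "gpt2+morf_s0-30-x-2_cx-en_00000-00009_50k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Greedy continuation of a short English prompt.
inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```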
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
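A sketch of how the values above map onto Hugging Face `TrainingArguments`; the output directory is a placeholder, and the actual training script is not part of this card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2+morf_s0-30-x-2_cx-en_00000-00009_50k",  # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    # Adam settings listed above: betas=(0.9, 0.999), epsilon=1e-08.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```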
### Training results
| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.6569        | 0.03  | 10000  | 3.5764          | 0.3502   |
| 3.4317        | 0.06  | 20000  | 3.3581          | 0.3727   |
| 3.3161        | 0.09  | 30000  | 3.2447          | 0.3848   |
| 3.2463        | 0.13  | 40000  | 3.1761          | 0.3924   |
| 3.1897        | 0.16  | 50000  | 3.1277          | 0.3977   |
| 3.152         | 0.19  | 60000  | 3.0910          | 0.4022   |
| 3.1341        | 0.22  | 70000  | 3.0575          | 0.4060   |
| 3.1006        | 0.25  | 80000  | 3.0363          | 0.4084   |
| 3.0806        | 0.28  | 90000  | 3.0118          | 0.4115   |
| 3.0555        | 0.31  | 100000 | 2.9919          | 0.4138   |
| 3.038         | 0.34  | 110000 | 2.9786          | 0.4156   |
| 3.0291        | 0.38  | 120000 | 2.9651          | 0.4171   |
| 3.0182        | 0.41  | 130000 | 2.9499          | 0.4191   |
| 3.0145        | 0.44  | 140000 | 2.9381          | 0.4205   |
| 2.9891        | 0.47  | 150000 | 2.9272          | 0.4219   |
| 2.9836        | 0.5   | 160000 | 2.9191          | 0.4230   |
| 2.9717        | 0.53  | 170000 | 2.9103          | 0.4241   |
| 2.9651        | 0.56  | 180000 | 2.9039          | 0.4250   |
| 2.9615        | 0.59  | 190000 | 2.8971          | 0.4258   |
| 2.9556        | 0.63  | 200000 | 2.8882          | 0.4269   |
| 2.9452        | 0.66  | 210000 | 2.8825          | 0.4277   |
| 2.9412        | 0.69  | 220000 | 2.8766          | 0.4284   |
| 2.9402        | 0.72  | 230000 | 2.8722          | 0.4290   |
| 2.9299        | 0.75  | 240000 | 2.8675          | 0.4296   |
| 2.9302        | 0.78  | 250000 | 2.8623          | 0.4304   |
| 2.9165        | 0.81  | 260000 | 2.8585          | 0.4308   |
| 2.915         | 0.84  | 270000 | 2.8537          | 0.4314   |
| 2.92          | 0.88  | 280000 | 2.8506          | 0.4319   |
| 2.9186        | 0.91  | 290000 | 2.8484          | 0.4321   |
| 2.9084        | 0.94  | 300000 | 2.8458          | 0.4325   |
| 2.9142        | 0.97  | 310000 | 2.8438          | 0.4327   |
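The validation losses above are cross-entropy values in nats per token, so they convert to perplexity via exp(loss). A minimal sketch using a few checkpoints from the table (values copied from above):

```python
import math

# Validation losses taken from the training-results table above.
checkpoints = {
    10000: 3.5764,   # first logged evaluation
    160000: 2.9191,  # roughly the halfway point
    310000: 2.8438,  # last logged evaluation
}
for step, loss in checkpoints.items():
    print(f"step {step}: val loss {loss:.4f} -> perplexity {math.exp(loss):.2f}")

# The final evaluation loss of 2.8423 corresponds to a perplexity of about 17.2.
```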
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
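A quick environment check against the versions listed above (a sketch; the import names are the standard ones for these packages):

```python
import datasets
import tokenizers
import torch
import transformers

# Expected from this card: Transformers 4.37.1, PyTorch 2.1.2+cu121,
# Datasets 2.16.1, Tokenizers 0.15.1.
for name, module in [
    ("transformers", transformers),
    ("torch", torch),
    ("datasets", datasets),
    ("tokenizers", tokenizers),
]:
    print(f"{name}: {module.__version__}")
```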