bert2bert-model6-last
This model is a bert2bert (encoder-decoder) model fine-tuned on the id_liputan6 dataset. It achieves the following results on the evaluation set, where R1, R2, and Rl denote ROUGE-1, ROUGE-2, and ROUGE-L (a sketch of how these metrics can be computed follows the list):
- Loss: 7.6361
- R1 Precision: 0.1719
- R1 Recall: 0.0279
- R1 Fmeasure: 0.0432
- R2 Precision: 0.0
- R2 Recall: 0.0
- R2 Fmeasure: 0.0
- Rl Precision: 0.1719
- Rl Recall: 0.0274
- Rl Fmeasure: 0.0428
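The precision, recall, and F-measure values above are standard ROUGE statistics. As a minimal sketch (not the exact evaluation script used for this card), they can be computed with the rouge_score package; the reference and prediction strings below are hypothetical examples:

```python
from rouge_score import rouge_scorer

# Hypothetical reference summary and model prediction (Indonesian news domain).
reference = "Pemerintah mengumumkan kebijakan baru untuk sektor pendidikan."
prediction = "Pemerintah umumkan kebijakan pendidikan baru."

# ROUGE-1, ROUGE-2, and ROUGE-L, matching the R1/R2/Rl columns in this card.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"])
scores = scorer.score(reference, prediction)

for name, s in scores.items():
    print(f"{name}: precision={s.precision:.4f} recall={s.recall:.4f} fmeasure={s.fmeasure:.4f}")
```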
Model description
More information needed
Intended uses & limitations
More information needed
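Although full usage documentation is still needed, the model follows the standard bert2bert encoder-decoder setup in Transformers and can presumably be used for Indonesian abstractive summarization along the lines below. The repository id and the input article are placeholder assumptions; substitute the actual model id from the Hub.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Placeholder repository id; replace with the actual model id on the Hub.
model_id = "bert2bert-model6-last"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Assumes decoder_start_token_id and pad_token_id were saved in the model config during fine-tuning.
model = EncoderDecoderModel.from_pretrained(model_id)

# Hypothetical Indonesian news article to summarize.
article = "Liputan6.com, Jakarta: ..."

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(inputs["input_ids"], max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```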
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
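As a minimal sketch, these hyperparameters map onto Transformers Seq2SeqTrainingArguments roughly as follows; the output directory and the settings marked as assumptions are not taken from the original training script:

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; "./bert2bert-model6-last" is an assumed output path.
training_args = Seq2SeqTrainingArguments(
    output_dir="./bert2bert-model6-last",
    learning_rate=5e-5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,  # Native AMP mixed-precision training
    evaluation_strategy="epoch",  # assumption: evaluation once per epoch, as in the results table
    predict_with_generate=True,   # assumption: generation needed to compute ROUGE during evaluation
)
```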
Training results
| Training Loss | Epoch | Step | Validation Loss | R1 Precision | R1 Recall | R1 Fmeasure | R2 Precision | R2 Recall | R2 Fmeasure | Rl Precision | Rl Recall | Rl Fmeasure |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10.0458 | 1.0 | 4 | 8.3786 | 0.0612 | 0.0567 | 0.0584 | 0.0 | 0.0 | 0.0 | 0.0512 | 0.0468 | 0.0485 |
| 7.6302 | 2.0 | 8 | 8.0384 | 0.087 | 0.1202 | 0.1005 | 0.0 | 0.0 | 0.0 | 0.0583 | 0.08 | 0.0669 |
| 7.2136 | 3.0 | 12 | 7.7980 | 0.0598 | 0.0775 | 0.0677 | 0.0057 | 0.0081 | 0.0067 | 0.0516 | 0.067 | 0.0583 |
| 6.8639 | 4.0 | 16 | 7.8075 | 0.0938 | 0.0107 | 0.0192 | 0.0 | 0.0 | 0.0 | 0.0938 | 0.0105 | 0.0188 |
| 6.3433 | 5.0 | 20 | 7.7948 | 0.0406 | 0.0107 | 0.0168 | 0.0 | 0.0 | 0.0 | 0.0406 | 0.0105 | 0.0166 |
| 6.0891 | 6.0 | 24 | 7.7148 | 0.0469 | 0.0107 | 0.0162 | 0.0 | 0.0 | 0.0 | 0.0469 | 0.0105 | 0.015 |
| 6.0284 | 7.0 | 28 | 7.6611 | 0.1406 | 0.0179 | 0.0256 | 0.0 | 0.0 | 0.0 | 0.1406 | 0.0139 | 0.0219 |
| 5.7972 | 8.0 | 32 | 7.6732 | 0.0646 | 0.025 | 0.0332 | 0.0 | 0.0 | 0.0 | 0.0608 | 0.021 | 0.0293 |
| 5.6802 | 9.0 | 36 | 7.6398 | 0.1823 | 0.0279 | 0.0443 | 0.0 | 0.0 | 0.0 | 0.1719 | 0.0241 | 0.0396 |
| 5.4635 | 10.0 | 40 | 7.6361 | 0.1719 | 0.0279 | 0.0432 | 0.0 | 0.0 | 0.0 | 0.1719 | 0.0274 | 0.0428 |
Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2