---
license: mit
base_model: facebook/mbart-large-50
tags:
- generated_from_trainer
metrics:
- rouge
- sacrebleu
model-index:
- name: mBART-TextSimp-LT-BatchSize8-lr5e-5
  results: []
---
# mBART-TextSimp-LT-BatchSize8-lr5e-5
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unspecified dataset. It achieves the following results on the evaluation set (a hedged inference sketch follows the metrics):
- Loss: 0.4296
- Rouge1: 0.0605
- Rouge2: 0.0078
- Rougel: 0.0593
- Sacrebleu: 0.044
- Gen Len: 34.5776
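
A minimal inference sketch, not part of the original card: the repo id, the example sentence, and the `lt_LT` language codes (inferred from the "LT" in the model name, which suggests Lithuanian text simplification) are all assumptions.

```python
# Hedged sketch: repo id and lt_LT language codes are assumptions, not
# confirmed by the model card.
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_id = "mBART-TextSimp-LT-BatchSize8-lr5e-5"  # hypothetical repo id
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "lt_LT"  # mBART-50 tags source text with a language code
inputs = tokenizer("Čia yra sudėtingas sakinys.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    # mBART-50 generation starts from the target language code token
    forced_bos_token_id=tokenizer.lang_code_to_id["lt_LT"],
    max_length=128,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```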
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hedged `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
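
Translated into code, these settings correspond roughly to the `Seq2SeqTrainingArguments` below. The `output_dir` and `predict_with_generate` values are assumptions; everything else mirrors the list above.

```python
# Sketch of training arguments matching the hyperparameter list; assumes
# the standard transformers Seq2SeqTrainer setup.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mBART-TextSimp-LT-BatchSize8-lr5e-5",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 16
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=8,
    predict_with_generate=True,  # assumed; needed for ROUGE/SacreBLEU eval
)
```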
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Sacrebleu | Gen Len  |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:--------:|
| 8.0008        | 1.0   | 104  | 7.0565          | 0.1958 | 0.1282 | 0.1868 | 7.9463    | 511.6945 |
| 0.3454        | 2.0   | 209  | 0.1874          | 0.6646 | 0.4862 | 0.6559 | 41.0808   | 34.5752  |
| 0.0728        | 3.0   | 313  | 0.0748          | 0.7063 | 0.5426 | 0.6984 | 48.033    | 34.5752  |
| 0.0491        | 4.0   | 418  | 0.0630          | 0.7346 | 0.5861 | 0.7248 | 51.6574   | 34.5752  |
| 0.755         | 5.0   | 522  | 0.7158          | 0.0008 | 0.0    | 0.0009 | 0.0       | 35.5752  |
| 0.4913        | 6.0   | 627  | 0.4653          | 0.0218 | 0.0008 | 0.0219 | 0.022     | 34.6134  |
| 0.4771        | 7.0   | 731  | 0.4525          | 0.0385 | 0.0034 | 0.0382 | 0.0308    | 34.926   |
| 0.4224        | 7.96  | 832  | 0.4296          | 0.0605 | 0.0078 | 0.0593 | 0.044     | 34.5776  |
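
For reference, a sketch of how ROUGE and SacreBLEU scores like those above are typically computed with the Hugging Face `evaluate` library; the placeholder predictions and references are illustrative only, not drawn from the actual evaluation set.

```python
import evaluate

rouge = evaluate.load("rouge")
sacrebleu = evaluate.load("sacrebleu")

# Placeholder strings standing in for decoded model outputs and gold
# simplifications; not taken from the training run.
predictions = ["Tai yra paprastas sakinys."]
references = ["Tai paprastas sakinys."]

print(rouge.compute(predictions=predictions, references=references))
# sacrebleu expects one list of reference strings per prediction; its
# `score` field is on a 0-100 scale.
print(sacrebleu.compute(predictions=predictions,
                        references=[[r] for r in references]))
```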
### Framework versions
- Transformers 4.33.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.4
- Tokenizers 0.13.3