# my_awesome_arith_model

This model is a fine-tuned version of facebook/bart-large on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.3147
- Rouge1: 0.9917
- Rouge2: 0.9123
- RougeL: 0.9594
- RougeLsum: 0.9593
- Gen Len: 8.9019
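
Since the exact input format of the training data is not documented here, the following is only a minimal sketch of how the checkpoint can be loaded for inference with the `transformers` text2text pipeline; the example prompt is an illustrative guess, not taken from the dataset.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
generator = pipeline("text2text-generation", model="Puspaksahu/my_awesome_arith_model")

# The prompt below is a hypothetical arithmetic-style input; the actual
# training inputs are not documented in this card.
result = generator("23 + 45", max_new_tokens=16)
print(result[0]["generated_text"])
```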
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
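
These values match the standard `Seq2SeqTrainer` workflow. Below is a hedged sketch of how they would be expressed as `Seq2SeqTrainingArguments`; the output directory, evaluation strategy, and `predict_with_generate` flag are assumptions rather than documented settings.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="my_awesome_arith_model",  # assumption: repo name used as output dir
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",   # assumption: metrics reported once per epoch
    predict_with_generate=True,    # assumption: needed for ROUGE / Gen Len metrics
)
```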
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Gen Len |
|---------------|-------|------|-----------------|--------|--------|--------|-----------|---------|
| No log        | 1.0   | 31   | 0.5797          | 0.977  | 0.7972 | 0.9038 | 0.9035    | 8.6517  |
| No log        | 2.0   | 62   | 0.6579          | 0.9792 | 0.8002 | 0.9055 | 0.9055    | 10.6333 |
| No log        | 3.0   | 93   | 0.4883          | 0.982  | 0.8346 | 0.9181 | 0.9182    | 8.7263  |
| No log        | 4.0   | 124  | 0.4298          | 0.9813 | 0.8255 | 0.9164 | 0.9163    | 8.8856  |
| No log        | 5.0   | 155  | 0.4968          | 0.9849 | 0.8408 | 0.9228 | 0.9225    | 10.5873 |
| No log        | 6.0   | 186  | 0.4186          | 0.9805 | 0.8441 | 0.9253 | 0.9253    | 10.1277 |
| No log        | 7.0   | 217  | 0.3675          | 0.9837 | 0.8633 | 0.9371 | 0.9373    | 9.2308  |
| No log        | 8.0   | 248  | 0.3372          | 0.9909 | 0.9004 | 0.955  | 0.9546    | 9.4566  |
| No log        | 9.0   | 279  | 0.3526          | 0.991  | 0.8999 | 0.953  | 0.9529    | 8.7896  |
| No log        | 10.0  | 310  | 0.3147          | 0.9917 | 0.9123 | 0.9594 | 0.9593    | 8.9019  |
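
The ROUGE and generation-length columns follow the pattern produced by the Transformers summarization examples. As a sketch only (the actual metric code is not included in this card), comparable scores can be computed with the `evaluate` library; the predictions and references below are placeholders.

```python
import evaluate

rouge = evaluate.load("rouge")

predictions = ["68"]  # hypothetical decoded model outputs
references = ["68"]   # hypothetical gold targets

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```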
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.0+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1