
bart-large-cnn-samsum-icsi-ami-v2

This model is a fine-tuned version of philschmid/bart-large-cnn-samsum on an unknown dataset (the model name suggests the ICSI and AMI meeting corpora, but this is not documented in the card). It achieves the following results on the evaluation set:

  • Loss: 5.8987
  • Rouge1: 39.0928
  • Rouge2: 10.8408
  • Rougel: 21.9138
  • Rougelsum: 35.5067
  • Gen Len: 138.7941
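
As a quick check, the checkpoint can be loaded with the Transformers summarization pipeline. This is a minimal sketch; the dialogue below is a made-up example, not taken from the training data.

```python
from transformers import pipeline

# Load this fine-tuned checkpoint as a summarization pipeline.
summarizer = pipeline(
    "summarization",
    model="vmarklynn/bart-large-cnn-samsum-icsi-ami-v2",
)

# Hypothetical meeting-style dialogue; replace with a real transcript.
dialogue = (
    "John: Let's go over the quarterly numbers. "
    "Mary: Revenue is up, but costs grew faster than we planned. "
    "John: Then let's revisit the vendor contracts before next quarter."
)

summary = summarizer(dialogue, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```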

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
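
For reference, these settings could be expressed with Seq2SeqTrainingArguments roughly as follows. This is a sketch only: output_dir is a placeholder, and evaluation_strategy / predict_with_generate are assumptions inferred from the per-epoch ROUGE results below, not stated in the card.

```python
from transformers import Seq2SeqTrainingArguments

# Hyperparameters as listed above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-samsum-icsi-ami-v2",
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",   # assumed: metrics are reported once per epoch
    predict_with_generate=True,    # assumed: needed for ROUGE and Gen Len metrics
)
```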

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len  |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log        | 1.0   | 135  | 3.2308          | 38.4274 | 13.6461 | 22.366  | 35.2353   | 185.7941 |
| No log        | 2.0   | 270  | 3.3026          | 40.1748 | 11.7944 | 23.2655 | 36.4718   | 146.8529 |
| No log        | 3.0   | 405  | 3.5199          | 39.8209 | 12.1621 | 22.7772 | 36.4967   | 141.7647 |
| 2.2131        | 4.0   | 540  | 4.0508          | 40.4325 | 11.6547 | 22.9958 | 36.8782   | 131.4412 |
| 2.2131        | 5.0   | 675  | 4.6988          | 38.4097 | 9.8309  | 20.3894 | 34.1967   | 145.9706 |
| 2.2131        | 6.0   | 810  | 4.9590          | 38.5758 | 9.6335  | 20.865  | 35.0321   | 169.2353 |
| 2.2131        | 7.0   | 945  | 5.4264          | 38.2813 | 9.5764  | 21.1406 | 34.5989   | 148.0294 |
| 0.401         | 8.0   | 1080 | 5.4887          | 38.3014 | 9.6881  | 21.2398 | 34.1584   | 139.3529 |
| 0.401         | 9.0   | 1215 | 5.8044          | 39.9603 | 10.4329 | 22.6895 | 36.2406   | 145.2353 |
| 0.401         | 10.0  | 1350 | 5.8987          | 39.0928 | 10.8408 | 21.9138 | 35.5067   | 138.7941 |
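
ROUGE scores like those above can be recomputed for new predictions with the evaluate library (not listed among the framework versions below, so its use here is an assumption; the strings are placeholders):

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder predictions and references; in practice these would come from
# model.generate() on the evaluation split.
predictions = ["the team agreed to revisit the vendor contracts"]
references = ["the team decided to renegotiate the vendor contracts next quarter"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```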

Framework versions

  • Transformers 4.26.1
  • Pytorch 1.13.1+cu117
  • Datasets 2.10.1
  • Tokenizers 0.13.2