
bart-large-cnn-prompt_generation

This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 2.6454
  • ROUGE-1: 40.6908
  • ROUGE-2: 16.1706
  • ROUGE-L: 25.6927
  • ROUGE-Lsum: 25.6588
  • Gen Len: 77.2 (average generated sequence length)

Model description

More information needed

Intended uses & limitations

More information needed
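
Although the card does not document the intended input format, the checkpoint loads as a standard BART sequence-to-sequence model. A minimal usage sketch follows (the model ID is taken from this card; the generation settings are illustrative assumptions, not values from the training run):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "satyanshu404/bart-large-cnn-prompt_generation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Some input passage to condition the prompt generation on."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)

# max_new_tokens and num_beams are assumptions; the eval-set Gen Len of
# ~77 tokens suggests outputs of roughly this length.
output_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```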

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-07
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
  • mixed_precision_training: Native AMP
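
These settings map onto transformers Seq2SeqTrainingArguments roughly as sketched below. The original training script is not included in the card, so this is a reconstruction rather than the exact configuration (output_dir and the evaluation/generation flags are assumptions):

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the reported hyperparameters; Adam with betas=(0.9, 0.999)
# and epsilon=1e-08 is the library's default optimizer configuration.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-prompt_generation",  # assumption
    learning_rate=3e-7,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # matches the once-per-epoch rows below
    predict_with_generate=True,   # needed for ROUGE/Gen Len during eval
)
```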

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|:-------:|
| No log        | 1.0   | 15   | 3.6562          | 25.0903 | 5.3158  | 16.4265 | 16.3853    | 67.42   |
| No log        | 2.0   | 30   | 3.5539          | 24.9011 | 4.9854  | 16.5812 | 16.5697    | 65.28   |
| No log        | 3.0   | 45   | 3.3930          | 24.9983 | 5.2373  | 17.0342 | 16.993     | 65.8    |
| No log        | 4.0   | 60   | 3.2928          | 24.8418 | 4.7159  | 16.929  | 16.907     | 66.0    |
| No log        | 5.0   | 75   | 3.1723          | 26.012  | 5.5696  | 17.4002 | 17.4621    | 66.84   |
| No log        | 6.0   | 90   | 3.0813          | 26.9443 | 5.8262  | 17.8297 | 17.8673    | 67.52   |
| No log        | 7.0   | 105  | 3.0169          | 27.7155 | 6.4297  | 18.4479 | 18.4913    | 66.78   |
| No log        | 8.0   | 120  | 2.9700          | 27.2858 | 6.5437  | 18.5185 | 18.4731    | 67.78   |
| No log        | 9.0   | 135  | 2.9340          | 28.0747 | 7.3049  | 18.7045 | 18.718     | 67.34   |
| No log        | 10.0  | 150  | 2.9044          | 28.4417 | 7.34    | 18.7805 | 18.8377    | 66.44   |
| No log        | 11.0  | 165  | 2.8795          | 28.8704 | 7.4119  | 18.7748 | 18.849     | 67.02   |
| No log        | 12.0  | 180  | 2.8558          | 28.5338 | 7.1929  | 18.7993 | 18.859     | 67.02   |
| No log        | 13.0  | 195  | 2.8351          | 30.3984 | 8.3546  | 19.8864 | 19.918     | 68.18   |
| No log        | 14.0  | 210  | 2.8170          | 30.934  | 8.8637  | 20.6051 | 20.6574    | 67.74   |
| No log        | 15.0  | 225  | 2.8016          | 33.611  | 10.3334 | 22.0692 | 22.11      | 67.94   |
| No log        | 16.0  | 240  | 2.7867          | 34.4518 | 11.2186 | 22.5517 | 22.5979    | 67.36   |
| No log        | 17.0  | 255  | 2.7737          | 33.8745 | 10.9904 | 22.0985 | 22.1333    | 68.98   |
| No log        | 18.0  | 270  | 2.7617          | 35.1795 | 11.6458 | 22.3628 | 22.3954    | 68.1    |
| No log        | 19.0  | 285  | 2.7502          | 35.3137 | 11.7688 | 22.7397 | 22.7986    | 67.24   |
| No log        | 20.0  | 300  | 2.7402          | 35.8673 | 12.3602 | 23.4671 | 23.481     | 67.32   |
| No log        | 21.0  | 315  | 2.7312          | 37.2112 | 13.6711 | 24.0348 | 24.0426    | 68.58   |
| No log        | 22.0  | 330  | 2.7228          | 37.521  | 14.1801 | 24.1826 | 24.2038    | 68.46   |
| No log        | 23.0  | 345  | 2.7148          | 37.4877 | 13.7803 | 24.2369 | 24.189     | 70.18   |
| No log        | 24.0  | 360  | 2.7074          | 38.2158 | 14.3195 | 24.4562 | 24.4262    | 69.56   |
| No log        | 25.0  | 375  | 2.7012          | 38.0379 | 14.2362 | 24.5273 | 24.4723    | 70.7    |
| No log        | 26.0  | 390  | 2.6955          | 37.4245 | 13.8152 | 24.4203 | 24.4188    | 69.52   |
| No log        | 27.0  | 405  | 2.6905          | 37.4296 | 13.4741 | 24.569  | 24.5797    | 70.7    |
| No log        | 28.0  | 420  | 2.6859          | 38.7617 | 14.3506 | 25.0565 | 25.0256    | 71.56   |
| No log        | 29.0  | 435  | 2.6815          | 39.3441 | 15.2271 | 25.4611 | 25.4251    | 73.38   |
| No log        | 30.0  | 450  | 2.6774          | 38.6753 | 14.4202 | 24.7802 | 24.8057    | 72.94   |
| No log        | 31.0  | 465  | 2.6732          | 39.7278 | 15.0554 | 25.4741 | 25.4578    | 74.02   |
| No log        | 32.0  | 480  | 2.6697          | 39.9498 | 15.0412 | 25.4949 | 25.5039    | 74.2    |
| No log        | 33.0  | 495  | 2.6668          | 40.0256 | 15.1986 | 25.4401 | 25.436     | 75.14   |
| 2.6871        | 34.0  | 510  | 2.6638          | 39.8616 | 15.249  | 25.4639 | 25.4979    | 75.54   |
| 2.6871        | 35.0  | 525  | 2.6613          | 39.9678 | 15.1735 | 25.7189 | 25.719     | 75.8    |
| 2.6871        | 36.0  | 540  | 2.6593          | 40.3261 | 15.4175 | 25.6158 | 25.6426    | 75.0    |
| 2.6871        | 37.0  | 555  | 2.6572          | 40.6307 | 15.3666 | 25.6045 | 25.6245    | 76.06   |
| 2.6871        | 38.0  | 570  | 2.6551          | 41.2257 | 15.55   | 26.0762 | 26.0547    | 75.74   |
| 2.6871        | 39.0  | 585  | 2.6535          | 41.2164 | 15.981  | 26.068  | 26.0566    | 76.16   |
| 2.6871        | 40.0  | 600  | 2.6520          | 41.3161 | 15.9648 | 26.0276 | 26.0199    | 76.14   |
| 2.6871        | 41.0  | 615  | 2.6508          | 41.1103 | 15.7775 | 25.2761 | 25.237     | 77.28   |
| 2.6871        | 42.0  | 630  | 2.6496          | 41.4765 | 16.2494 | 26.021  | 26.0026    | 76.68   |
| 2.6871        | 43.0  | 645  | 2.6488          | 41.725  | 16.3547 | 26.1039 | 26.067     | 75.88   |
| 2.6871        | 44.0  | 660  | 2.6478          | 41.3649 | 16.3576 | 26.0133 | 25.9943    | 76.08   |
| 2.6871        | 45.0  | 675  | 2.6472          | 41.1901 | 16.4955 | 26.0594 | 26.0468    | 76.34   |
| 2.6871        | 46.0  | 690  | 2.6466          | 41.0942 | 16.2436 | 25.8578 | 25.853     | 75.92   |
| 2.6871        | 47.0  | 705  | 2.6461          | 40.6232 | 16.1631 | 25.6709 | 25.6473    | 76.46   |
| 2.6871        | 48.0  | 720  | 2.6458          | 41.1453 | 16.3914 | 25.946  | 25.9199    | 76.1    |
| 2.6871        | 49.0  | 735  | 2.6455          | 41.0364 | 16.3432 | 25.8202 | 25.7964    | 76.18   |
| 2.6871        | 50.0  | 750  | 2.6454          | 40.6908 | 16.1706 | 25.6927 | 25.6588    | 77.2    |
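
The ROUGE values above follow the convention of the Hugging Face summarization example scripts, which report scores scaled by 100. A minimal sketch of recomputing such scores with the evaluate library (the predictions and references here are placeholders, not the actual evaluation data):

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder data; in practice these would be the model's generations
# and the evaluation-set targets.
predictions = ["a generated prompt"]
references = ["the reference prompt"]

scores = rouge.compute(predictions=predictions, references=references)
# Keys: rouge1, rouge2, rougeL, rougeLsum, each in [0, 1];
# multiply by 100 to match the table above.
print({k: round(v * 100, 4) for k, v in scores.items()})
```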

Framework versions

  • Transformers 4.35.2
  • PyTorch 2.1.0+cu118
  • Datasets 2.15.0
  • Tokenizers 0.15.0