
bart-large-cnn-finetuned-prompt_generation

This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 2.6474
  • Actual score: 0.8766
  • Prediction score: 0.3367
  • Score difference: 0.5399 (actual score minus prediction score)

Model description

More information needed

Intended uses & limitations

More information needed
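
In the absence of documented usage, the sketch below shows one plausible way to load the checkpoint with the transformers text-to-text generation pipeline; the task choice and the example input are assumptions, not documented intended use.

```python
# Minimal usage sketch (assumed): load the checkpoint with the Hugging Face
# transformers text2text-generation pipeline. The input string is
# illustrative only; the intended input format is undocumented.
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="satyanshu404/bart-large-cnn-finetuned-prompt_generation",
)

document = "Paste the source text for which a prompt should be generated."
result = generator(document, max_length=64, num_beams=4)
print(result[0]["generated_text"])
```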

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-07
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
  • mixed_precision_training: Native AMP
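
As a hedged sketch only (the author's training script is not published), the hyperparameters above map onto transformers' Seq2SeqTrainingArguments roughly as follows; output_dir is a placeholder, and the dataset and Trainer wiring are omitted because the training data is undocumented.

```python
# Hedged reproduction sketch. Assumptions: the output_dir name and the
# per-epoch evaluation strategy; all other values mirror the
# hyperparameters listed in this card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-finetuned-prompt_generation",  # placeholder
    learning_rate=3e-7,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,               # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,            # and epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,                    # mixed_precision_training: Native AMP
    evaluation_strategy="epoch",  # eval ran once per epoch (steps 15, 30, ...)
)
```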

Training results

| Training Loss | Epoch | Step | Validation Loss | Actual score | Prediction score | Score difference |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:----------------:|:----------------:|
| No log        | 1.0   | 15   | 3.6226          | 0.8766       | -0.4072          | 1.2838           |
| No log        | 2.0   | 30   | 3.5120          | 0.8766       | -0.2477          | 1.1243           |
| No log        | 3.0   | 45   | 3.3572          | 0.8766       | -0.3233          | 1.1999           |
| No log        | 4.0   | 60   | 3.2592          | 0.8766       | -0.0494          | 0.9260           |
| No log        | 5.0   | 75   | 3.1430          | 0.8766       | -0.3234          | 1.2000           |
| No log        | 6.0   | 90   | 3.0581          | 0.8766       | -0.4732          | 1.3498           |
| No log        | 7.0   | 105  | 2.9988          | 0.8766       | -0.5715          | 1.4481           |
| No log        | 8.0   | 120  | 2.9564          | 0.8766       | -0.6699          | 1.5465           |
| No log        | 9.0   | 135  | 2.9242          | 0.8766       | -0.5505          | 1.4271           |
| No log        | 10.0  | 150  | 2.8969          | 0.8766       | -0.4393          | 1.3159           |
| No log        | 11.0  | 165  | 2.8729          | 0.8766       | -0.4882          | 1.3648           |
| No log        | 12.0  | 180  | 2.8503          | 0.8766       | -0.6554          | 1.5320           |
| No log        | 13.0  | 195  | 2.8308          | 0.8766       | -0.7288          | 1.6054           |
| No log        | 14.0  | 210  | 2.8128          | 0.8766       | -0.7016          | 1.5783           |
| No log        | 15.0  | 225  | 2.7972          | 0.8766       | -0.7900          | 1.6666           |
| No log        | 16.0  | 240  | 2.7832          | 0.8766       | -0.6285          | 1.5052           |
| No log        | 17.0  | 255  | 2.7708          | 0.8766       | -0.5613          | 1.4379           |
| No log        | 18.0  | 270  | 2.7591          | 0.8766       | -0.6125          | 1.4891           |
| No log        | 19.0  | 285  | 2.7481          | 0.8766       | -0.5101          | 1.3868           |
| No log        | 20.0  | 300  | 2.7390          | 0.8766       | -0.4879          | 1.3646           |
| No log        | 21.0  | 315  | 2.7307          | 0.8766       | -0.4345          | 1.3112           |
| No log        | 22.0  | 330  | 2.7229          | 0.8766       | -0.3278          | 1.2044           |
| No log        | 23.0  | 345  | 2.7156          | 0.8766       | -0.3324          | 1.2090           |
| No log        | 24.0  | 360  | 2.7084          | 0.8766       | -0.2899          | 1.1665           |
| No log        | 25.0  | 375  | 2.7019          | 0.8766       | -0.1728          | 1.0494           |
| No log        | 26.0  | 390  | 2.6965          | 0.8766       | -0.2785          | 1.1552           |
| No log        | 27.0  | 405  | 2.6918          | 0.8766       | -0.1926          | 1.0692           |
| No log        | 28.0  | 420  | 2.6872          | 0.8766       | -0.1204          | 0.9970           |
| No log        | 29.0  | 435  | 2.6832          | 0.8766       | -0.0040          | 0.8806           |
| No log        | 30.0  | 450  | 2.6791          | 0.8766       | -0.0742          | 0.9508           |
| No log        | 31.0  | 465  | 2.6751          | 0.8766       | 0.0669           | 0.8097           |
| No log        | 32.0  | 480  | 2.6719          | 0.8766       | -0.0049          | 0.8815           |
| No log        | 33.0  | 495  | 2.6690          | 0.8766       | -0.0196          | 0.8962           |
| 2.6809        | 34.0  | 510  | 2.6663          | 0.8766       | 0.0692           | 0.8074           |
| 2.6809        | 35.0  | 525  | 2.6636          | 0.8766       | 0.0843           | 0.7923           |
| 2.6809        | 36.0  | 540  | 2.6615          | 0.8766       | -0.0330          | 0.9096           |
| 2.6809        | 37.0  | 555  | 2.6594          | 0.8766       | -0.0065          | 0.8831           |
| 2.6809        | 38.0  | 570  | 2.6575          | 0.8766       | 0.2102           | 0.6664           |
| 2.6809        | 39.0  | 585  | 2.6559          | 0.8766       | 0.3005           | 0.5761           |
| 2.6809        | 40.0  | 600  | 2.6541          | 0.8766       | 0.3360           | 0.5406           |
| 2.6809        | 41.0  | 615  | 2.6528          | 0.8766       | 0.2456           | 0.6310           |
| 2.6809        | 42.0  | 630  | 2.6517          | 0.8766       | 0.3399           | 0.5367           |
| 2.6809        | 43.0  | 645  | 2.6509          | 0.8766       | 0.4224           | 0.4542           |
| 2.6809        | 44.0  | 660  | 2.6499          | 0.8766       | 0.4277           | 0.4490           |
| 2.6809        | 45.0  | 675  | 2.6492          | 0.8766       | 0.2815           | 0.5951           |
| 2.6809        | 46.0  | 690  | 2.6485          | 0.8766       | 0.3053           | 0.5714           |
| 2.6809        | 47.0  | 705  | 2.6481          | 0.8766       | 0.2149           | 0.6618           |
| 2.6809        | 48.0  | 720  | 2.6478          | 0.8766       | 0.2285           | 0.6481           |
| 2.6809        | 49.0  | 735  | 2.6475          | 0.8766       | 0.2546           | 0.6220           |
| 2.6809        | 50.0  | 750  | 2.6474          | 0.8766       | 0.3367           | 0.5399           |

Framework versions

  • Transformers 4.35.0
  • Pytorch 2.1.0+cu118
  • Datasets 2.14.6
  • Tokenizers 0.14.1
