bart-large-cnn-samsum-icsi-ami-v3

This model is a fine-tuned version of philschmid/bart-large-cnn-samsum. The auto-generated card lists the training data as unknown, although the model name suggests the ICSI and AMI meeting corpora. It achieves the following results on the evaluation set:

  • Loss: 3.2328
  • Rouge1: 39.9389
  • Rouge2: 12.2256
  • RougeL: 23.4739
  • RougeLsum: 36.7757
  • Gen Len: 155.3529
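
A minimal inference sketch follows. The repository namespace below is a placeholder (the card does not state the owning account), and the example transcript is invented for illustration:

```python
from transformers import pipeline

# Hypothetical repo id: replace "your-username" with the actual namespace.
summarizer = pipeline(
    "summarization",
    model="your-username/bart-large-cnn-samsum-icsi-ami-v3",
)

transcript = (
    "Alice: Let's review the action items from last week. "
    "Bob: I finished the draft and sent it to the design team. "
    "Alice: Great, then we can schedule the review for Thursday."
)

# max_length is set generously because Gen Len averaged ~155 tokens on the eval set.
print(summarizer(transcript, max_length=160, min_length=20)[0]["summary_text"])
```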

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a training-arguments sketch equivalent to these settings follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
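
A hedged reconstruction of the training setup from the hyperparameters above; the dataset is not identified by this card, so data loading is left as a stub:

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_id = "philschmid/bart-large-cnn-samsum"  # base checkpoint per the card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-samsum-icsi-ami-v3",
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Trainer's default optimizer already uses betas=(0.9, 0.999) and epsilon=1e-8.
    evaluation_strategy="epoch",  # assumption: per-epoch eval matches the results table
    predict_with_generate=True,   # generate summaries during eval so ROUGE can be computed
)

# trainer = Seq2SeqTrainer(
#     model=model,
#     args=training_args,
#     train_dataset=...,  # the card does not identify the training data
#     eval_dataset=...,
#     tokenizer=tokenizer,
# )
# trainer.train()
```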

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len  |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log        | 1.0   | 135  | 3.2655          | 38.2358 | 11.9547 | 23.1235 | 35.1978   | 163.8824 |
| No log        | 2.0   | 270  | 3.2328          | 39.9389 | 12.2256 | 23.4739 | 36.7757   | 155.3529 |
| No log        | 3.0   | 405  | 3.4852          | 40.8728 | 11.6505 | 22.8143 | 37.3518   | 144.7647 |
| 2.1798        | 4.0   | 540  | 4.0977          | 40.3307 | 10.4437 | 22.6059 | 36.2414   | 135.1471 |
| 2.1798        | 5.0   | 675  | 4.6537          | 41.2432 | 10.7188 | 22.7868 | 37.3837   | 136.1471 |
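
A minimal sketch of how ROUGE scores like those above can be computed with the evaluate library; the prediction and reference strings are toy stand-ins, not data from this model:

```python
import evaluate

rouge = evaluate.load("rouge")
predictions = ["the team agreed to schedule the review for thursday"]
references = ["The team scheduled the design review for Thursday."]

# compute() returns rouge1/rouge2/rougeL/rougeLsum F-measures in [0, 1];
# the table above reports them scaled by 100.
scores = rouge.compute(predictions=predictions, references=references)
print({k: round(v * 100, 4) for k, v in scores.items()})
```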

Framework versions

  • Transformers 4.26.1
  • Pytorch 1.13.1+cu117
  • Datasets 2.10.1
  • Tokenizers 0.13.2