
test-dialogue-summarization

This model is a fine-tuned version of facebook/bart-large-cnn on an unspecified dataset. It achieves the following results on the evaluation set (a short inference sketch follows the metrics):

  • Loss: 0.9653
  • ROUGE-1: 61.2091
  • ROUGE-2: 36.8979
  • ROUGE-L: 46.3962
  • ROUGE-Lsum: 58.3082
  • Gen Len (average generated tokens): 135.6733
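
A minimal inference sketch, assuming the checkpoint is hosted on the Hugging Face Hub under the repo id Charankumarpc/test-dialogue-summarization (named later in this card) and that short SAMSum-style dialogues are representative inputs; the sample dialogue and length settings below are illustrative guesses, not part of the original card:

```python
from transformers import pipeline

# Repo id taken from this card; swap in a local checkpoint path if needed.
summarizer = pipeline(
    "summarization",
    model="Charankumarpc/test-dialogue-summarization",
)

# Made-up dialogue purely for illustration.
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)

# min/max lengths are guesses informed by the ~135-token Gen Len above.
result = summarizer(dialogue, min_length=10, max_length=142)
print(result[0]["summary_text"])
```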

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
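
Mapped onto the transformers Trainer API, these settings correspond roughly to the Seq2SeqTrainingArguments below. This is a sketch, not the exact training script: output_dir, evaluation_strategy, and predict_with_generate are assumptions (the per-epoch results table and the ROUGE/Gen Len metrics suggest them), and Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default, so no explicit optimizer argument is needed.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="test-dialogue-summarization",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,  # 4 x 2 = total train batch size of 8
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",    # assumed: the results table has one row per epoch
    predict_with_generate=True,     # assumed: needed to compute ROUGE and Gen Len
)
```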

Training results

("No log" means the training loss had not yet been logged at that evaluation step; Gen Len is the average length of the generated summaries in tokens.)

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len  |
|---------------|-------|------|-----------------|---------|---------|---------|------------|----------|
| No log        | 1.0   | 94   | 1.3755          | 53.9112 | 25.5975 | 36.8507 | 50.0306    | 132.7733 |
| No log        | 2.0   | 188  | 1.2081          | 55.5956 | 27.4849 | 37.7785 | 51.7906    | 137.1267 |
| No log        | 3.0   | 282  | 1.1149          | 55.714  | 28.3629 | 39.0763 | 52.439     | 137.62   |
| No log        | 4.0   | 376  | 1.0564          | 56.6202 | 29.789  | 39.9223 | 53.3054    | 135.1733 |
| No log        | 5.0   | 470  | 1.0107          | 57.8272 | 31.5716 | 41.9775 | 54.5114    | 135.1733 |
| 1.1609        | 6.0   | 564  | 0.9775          | 58.561  | 32.5462 | 42.9577 | 55.1653    | 133.5533 |
| 1.1609        | 7.0   | 658  | 0.9683          | 59.0592 | 33.8153 | 43.918  | 56.0493    | 135.3267 |
| 1.1609        | 8.0   | 752  | 0.9626          | 60.4587 | 35.8511 | 45.9511 | 57.3658    | 134.38   |
| 1.1609        | 9.0   | 846  | 0.9623          | 60.3938 | 35.8996 | 45.7161 | 57.2104    | 135.2333 |
| 1.1609        | 10.0  | 940  | 0.9653          | 61.2091 | 36.8979 | 46.3962 | 58.3082    | 135.6733 |
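
For reference, ROUGE scores in this style are commonly computed with the evaluate library and reported scaled by 100. A minimal sketch, with made-up predictions and references standing in for real model outputs and gold summaries:

```python
import evaluate

rouge = evaluate.load("rouge")

# Stand-in outputs and gold summaries, purely for illustration.
predictions = ["Amanda baked cookies and will bring Jerry some tomorrow."]
references = ["Amanda baked cookies and will bring some to Jerry tomorrow."]

scores = rouge.compute(predictions=predictions, references=references)
# Keys: rouge1, rouge2, rougeL, rougeLsum; values in [0, 1].
# The table above reports these values multiplied by 100.
print({k: round(v * 100, 4) for k, v in scores.items()})
```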

Framework versions

  • Transformers 4.31.0
  • PyTorch 2.0.1+cu118
  • Datasets 2.14.2
  • Tokenizers 0.13.3
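
To reproduce results closely, it can help to pin the environment to these versions. A small sketch that compares the installed versions against the card (it assumes all four packages are importable):

```python
import transformers, torch, datasets, tokenizers

# Versions reported on this card.
expected = {
    transformers: "4.31.0",
    torch: "2.0.1+cu118",
    datasets: "2.14.2",
    tokenizers: "0.13.3",
}

for module, version in expected.items():
    status = "OK" if module.__version__ == version else "MISMATCH"
    print(f"{module.__name__}: installed {module.__version__}, card {version} ({status})")
```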