
distilbart-podimo-data-eval-2

This model is a fine-tuned version of sshleifer/distilbart-cnn-12-6 on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the metrics):

  • Loss: 3.5823
  • Rouge1: 34.3971
  • Rouge2: 7.95
  • Rougel: 18.7271
  • Rougelsum: 30.9024
  • Gen Len: 131.919
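
Since the card contains no usage code, here is a minimal inference sketch. It assumes the checkpoint is published under the repo ID used in the title and is loadable via the standard transformers seq2seq API; the generation settings are illustrative, not values taken from this card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed repo ID; replace with the actual namespace/model path on the Hub.
model_id = "distilbart-podimo-data-eval-2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Podcast episode transcript or description to summarize goes here."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)

# The eval Gen Len of ~132 tokens suggests fairly long summaries; these
# generation bounds are assumptions, not settings from the card.
summary_ids = model.generate(**inputs, num_beams=4, min_length=56, max_length=142)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```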

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 64
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 8
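
These settings map onto Seq2SeqTrainingArguments roughly as sketched below. This is a reconstruction from the list above, not the original training script; output_dir and anything not listed (warmup, fp16, evaluation strategy) are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Hyperparameters copied from the list above; other arguments are assumed.
training_args = Seq2SeqTrainingArguments(
    output_dir="distilbart-podimo-data-eval-2",  # assumed
    learning_rate=5e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=64,  # total train batch size: 1 x 64 = 64
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=8,
)
```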

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum | Gen Len  |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:|
| 4.1512        | 0.98  | 44   | 3.7806          | 32.727  | 6.5788 | 17.5196 | 29.3777   | 137.2905 |
| 3.6342        | 1.98  | 88   | 3.6421          | 32.709  | 6.7877 | 17.8668 | 29.4636   | 134.6648 |
| 3.3512        | 2.98  | 132  | 3.5819          | 33.5128 | 7.519  | 18.6614 | 30.1142   | 132.2961 |
| 3.141         | 3.98  | 176  | 3.5552          | 33.4795 | 7.3242 | 18.396  | 30.0854   | 132.757  |
| 2.9787        | 4.98  | 220  | 3.5583          | 33.5862 | 7.391  | 18.3568 | 30.2461   | 132.4078 |
| 2.8555        | 5.98  | 264  | 3.5650          | 34.1111 | 7.8008 | 18.7159 | 30.6055   | 131.3603 |
| 2.7648        | 6.98  | 308  | 3.5729          | 34.0981 | 7.6556 | 18.6373 | 30.6269   | 131.2821 |
| 2.6645        | 7.98  | 352  | 3.5823          | 34.3971 | 7.95   | 18.7271 | 30.9024   | 131.919  |
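
The ROUGE columns above can be recomputed with the evaluate library; a minimal sketch, assuming predictions and references are plain lists of strings (evaluate returns scores in [0, 1], while this card reports them scaled by 100):

```python
import evaluate

# Toy inputs; in practice, predictions come from model.generate() on the
# evaluation split and references are the gold summaries.
predictions = ["a short generated summary of an episode"]
references = ["a short reference summary of an episode"]

rouge = evaluate.load("rouge")
scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```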

Framework versions

  • Transformers 4.25.1
  • PyTorch 1.11.0
  • Datasets 2.2.1
  • Tokenizers 0.12.1