
flan-t5-xl-summarization-epoch20

This model is a fine-tuned version of google/flan-t5-xl; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set (a usage sketch follows the list):

  • Loss: 0.5008
  • Rouge1: 48.3084
  • Rouge2: 27.2658
  • Rougel: 37.9769
  • Rougelsum: 41.5848
  • Gen Len: 52.1176
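
The card does not include usage code, so the following is a minimal inference sketch. It assumes the adapter is published under the repo id Shraddhabhoir/flan-t5-xl-summarization-epoch20, that the base tokenizer from google/flan-t5-xl is used, and that the "summarize:" prompt and generation settings are illustrative rather than taken from the training run.

```python
# Minimal inference sketch. Assumptions: the repo id below is the published PEFT adapter,
# the prompt format and generation settings are illustrative (not stated in this card).
from transformers import AutoTokenizer
from peft import AutoPeftModelForSeq2SeqLM

adapter_id = "Shraddhabhoir/flan-t5-xl-summarization-epoch20"

# AutoPeftModelForSeq2SeqLM reads the adapter config, loads the google/flan-t5-xl base
# weights, and applies the adapter on top of them.
model = AutoPeftModelForSeq2SeqLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")

text = "summarize: " + "Your long document goes here."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```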

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
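
The training script itself is not included in the card. The sketch below shows how the listed hyperparameters might map onto transformers' Seq2SeqTrainingArguments; output_dir, eval_strategy, and predict_with_generate are assumptions (the per-epoch validation rows in the results table suggest epoch-level evaluation), and the Adam betas/epsilon correspond to the library defaults.

```python
# Sketch only: maps the hyperparameters listed above onto Seq2SeqTrainingArguments.
# output_dir, eval_strategy, and predict_with_generate are assumptions, not stated in the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-xl-summarization-epoch20",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="epoch",       # implied by the per-epoch validation rows below
    predict_with_generate=True,  # required to compute ROUGE on generated summaries
)
```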

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 40   | 0.9558          | 33.7961 | 16.1287 | 27.1659 | 28.0049   | 27.5294 |
| No log        | 2.0   | 80   | 0.7329          | 41.1727 | 26.7202 | 35.6927 | 37.9856   | 62.6471 |
| No log        | 3.0   | 120  | 0.5996          | 39.7001 | 21.6984 | 29.3765 | 34.738    | 82.0588 |
| No log        | 4.0   | 160  | 0.5612          | 41.4021 | 23.9875 | 32.7841 | 36.4756   | 67.5294 |
| No log        | 5.0   | 200  | 0.5494          | 42.9379 | 24.0227 | 33.2609 | 37.7189   | 67.5882 |
| No log        | 6.0   | 240  | 0.5344          | 44.3145 | 24.7379 | 34.5022 | 38.7382   | 58.1176 |
| No log        | 7.0   | 280  | 0.5264          | 48.3821 | 28.1406 | 36.8146 | 40.9602   | 54.1765 |
| No log        | 8.0   | 320  | 0.5193          | 48.5669 | 28.7554 | 37.2762 | 41.4076   | 55.8235 |
| No log        | 9.0   | 360  | 0.5129          | 48.4222 | 25.9534 | 35.4387 | 40.3668   | 57.7059 |
| No log        | 10.0  | 400  | 0.5109          | 48.1639 | 27.399  | 37.7239 | 40.9771   | 51.0588 |
| No log        | 11.0  | 440  | 0.5093          | 50.4094 | 29.8618 | 39.3303 | 42.7215   | 53.0    |
| No log        | 12.0  | 480  | 0.5060          | 50.3864 | 27.8568 | 37.5365 | 42.3323   | 53.3529 |
| 0.8091        | 13.0  | 520  | 0.5073          | 48.0328 | 26.5537 | 36.7542 | 41.2961   | 55.1765 |
| 0.8091        | 14.0  | 560  | 0.5049          | 47.2298 | 26.6774 | 36.8165 | 40.5404   | 52.5294 |
| 0.8091        | 15.0  | 600  | 0.5008          | 48.3084 | 27.2658 | 37.9769 | 41.5848   | 52.1176 |
| 0.8091        | 16.0  | 640  | 0.5017          | 47.6969 | 27.0742 | 37.3415 | 41.0155   | 54.9412 |
| 0.8091        | 17.0  | 680  | 0.5022          | 48.3553 | 27.5197 | 38.2598 | 41.5044   | 54.0588 |
| 0.8091        | 18.0  | 720  | 0.5018          | 48.474  | 27.5343 | 37.7907 | 41.5528   | 56.0    |
| 0.8091        | 19.0  | 760  | 0.5010          | 48.474  | 27.5343 | 37.7907 | 41.5528   | 56.0    |
| 0.8091        | 20.0  | 800  | 0.5009          | 48.474  | 27.5343 | 37.7907 | 41.5528   | 56.0    |
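
The evaluation code behind these columns is not included in the card. Below is a minimal sketch of how ROUGE and generation length are typically computed with the evaluate library; the 0-100 scaling, stemming, and F-measure aggregation are assumptions based on the usual transformers summarization setup, and compute_metrics is a hypothetical helper name.

```python
# Sketch: computing ROUGE scores in the form reported in the table above.
# The actual metric code for this run is not published; names and settings here are assumptions.
import numpy as np
import evaluate

rouge = evaluate.load("rouge")  # requires the rouge_score package

def compute_metrics(predictions, references, pred_token_lengths):
    # evaluate's rouge returns fractions in [0, 1]; the card reports values scaled by 100.
    scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
    result = {k: round(v * 100, 4) for k, v in scores.items()}
    result["gen_len"] = float(np.mean(pred_token_lengths))  # "Gen Len" column: mean generated length
    return result
```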

Framework versions

  • PEFT 0.12.0
  • Transformers 4.44.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1
Base model

google/flan-t5-xl. This repository (Shraddhabhoir/flan-t5-xl-summarization-epoch20) is a PEFT adapter on top of it.