---
tags:
  - generated_from_trainer
metrics:
  - rouge
model-index:
  - name: t5-base-finetuned-feedback
    results: []
---

t5-base-finetuned-feedback

This model is a fine-tuned version of t5-base (as the model name suggests; the base checkpoint is not recorded in the card metadata) on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2738
  • Rouge-1: 55.9578
  • Rouge-2: 31.3401
  • Rouge-L: 52.9556
  • Rouge-Lsum: 53.1034
  • Gen Len: 10.2562
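
For context, the ROUGE scores above measure n-gram overlap between generated and reference texts, reported on a 0–100 scale. A minimal sketch of the ROUGE-1 F1 computation (unigram overlap only; the actual `rouge_score` library additionally applies tokenization rules and optional stemming) is:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between two whitespace-tokenized strings."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped overlap: each candidate token counts at most as often as it
    # appears in the reference.
    overlap = sum(min(count, ref[tok]) for tok, count in cand.items())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

Multiplying the result by 100 gives the scale used in this card (e.g. a Rouge-1 of 55.9578 corresponds to an F1 of about 0.56).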

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
  • mixed_precision_training: Native AMP
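
As a concrete illustration of the `linear` scheduler listed above: assuming zero warmup steps (the card does not list any), the learning rate decays linearly from 2e-05 at step 0 down to 0 at the final step, which per the results table is step 610 (61 steps/epoch × 10 epochs):

```python
def linear_lr(step: int, base_lr: float = 2e-05, total_steps: int = 610) -> float:
    """Linear LR decay with no warmup: base_lr at step 0, 0 at total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)
```

So halfway through training (step 305) the learning rate is 1e-05, and it reaches 0 exactly at step 610.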

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-L | Rouge-Lsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|:-------:|
| No log        | 1.0   | 61   | 1.4341          | 53.0065 | 28.3454 | 50.4489 | 50.5377    | 9.3388  |
| No log        | 2.0   | 122  | 1.3604          | 53.2275 | 28.6424 | 50.6585 | 50.7617    | 9.8182  |
| No log        | 3.0   | 183  | 1.3207          | 52.7581 | 28.6272 | 49.6977 | 49.7928    | 10.0661 |
| No log        | 4.0   | 244  | 1.3098          | 53.5227 | 28.6578 | 50.2637 | 50.2897    | 9.9752  |
| No log        | 5.0   | 305  | 1.2898          | 54.4587 | 29.8825 | 51.3522 | 51.4744    | 9.876   |
| No log        | 6.0   | 366  | 1.2781          | 54.046  | 29.7089 | 51.3241 | 51.4283    | 10.1818 |
| No log        | 7.0   | 427  | 1.2771          | 55.1788 | 30.8745 | 52.3598 | 52.4871    | 10.2149 |
| No log        | 8.0   | 488  | 1.2762          | 55.6258 | 30.9444 | 52.5715 | 52.6889    | 10.2397 |
| 1.2952        | 9.0   | 549  | 1.2746          | 55.759  | 30.918  | 52.8427 | 52.8878    | 10.1818 |
| 1.2952        | 10.0  | 610  | 1.2738          | 55.9578 | 31.3401 | 52.9556 | 53.1034    | 10.2562 |

Framework versions

  • Transformers 4.40.1
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.0
  • Tokenizers 0.19.1