
flan-t5-base-hai

This model is a fine-tuned version of google/flan-t5-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6881
  • Rouge1: 39.7841
  • Rouge2: 29.2031
  • RougeL: 36.6883
  • RougeLsum: 37.533
  • Gen Len: 17.5106
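
A minimal inference sketch follows. The hub id `flan-t5-base-hai` and the summarization-style prompt are assumptions (the card does not state the published namespace or the task, though the ROUGE metrics and short generation length suggest summarization):

```python
# Hypothetical usage sketch; replace model_id with the checkpoint's real hub path.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "flan-t5-base-hai"  # assumed id; actual namespace unknown
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Task prefix is an assumption based on the reported summarization metrics.
inputs = tokenizer("summarize: <your input text here>", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```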

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
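
A sketch of `Seq2SeqTrainingArguments` matching the hyperparameters above. The output directory, evaluation strategy, and `predict_with_generate` flag are assumptions inferred from the per-epoch results table; the dataset and preprocessing are not documented in this card:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-hai",      # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",        # inferred from the per-epoch table below
    predict_with_generate=True,         # required to compute ROUGE / Gen Len
)
# The listed optimizer (Adam, betas=(0.9, 0.999), eps=1e-8) matches the
# Trainer's default AdamW configuration, so no explicit optimizer is set here.
```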

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 118  | 0.9586          | 35.7723 | 22.9975 | 32.3447 | 33.2069   | 17.4894 |
| No log        | 2.0   | 236  | 0.8239          | 36.2962 | 24.2274 | 33.1222 | 33.8173   | 17.5447 |
| No log        | 3.0   | 354  | 0.7414          | 38.4245 | 27.3598 | 35.4793 | 36.3822   | 17.6596 |
| No log        | 4.0   | 472  | 0.6988          | 39.386  | 28.7308 | 36.4217 | 37.2752   | 17.5277 |
| 0.8817        | 5.0   | 590  | 0.6881          | 39.7841 | 29.2031 | 36.6883 | 37.533    | 17.5106 |
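
The card does not state the exact metric setup; a hedged sketch of how ROUGE scores in the 0-100 range above are typically computed, using the `evaluate` library:

```python
import evaluate

rouge = evaluate.load("rouge")
predictions = ["the model generated summary"]  # placeholder decoded outputs
references = ["the reference summary"]         # placeholder gold summaries
scores = rouge.compute(predictions=predictions, references=references,
                       use_stemmer=True)
# evaluate returns fractions in [0, 1]; scale to 0-100 as in the table.
print({k: round(v * 100, 4) for k, v in scores.items()})
```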

Framework versions

  • Transformers 4.30.2
  • Pytorch 2.0.1+cu118
  • Datasets 2.13.1
  • Tokenizers 0.13.3