# flan-t5-small-qa-9

This model is a fine-tuned version of [badokorach/flan-t5-small-qa](https://huggingface.co/badokorach/flan-t5-small-qa) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.0989
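
For quick experimentation, a minimal inference sketch is shown below. The `question: ... context: ...` prompt format is an assumption (a common convention for T5-style QA fine-tunes); the card does not document the input template actually used during training.

```python
# Minimal sketch of QA inference with this checkpoint.
# Assumption: inputs follow the "question: ... context: ..." convention.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "badokorach/flan-t5-small-qa-9"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

prompt = (
    "question: Where is the Eiffel Tower located? "
    "context: The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
)
inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```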

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `Seq2SeqTrainingArguments` reconstruction follows the list):

- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
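
The sketch below maps the listed hyperparameters onto the Hugging Face Trainer API. The optimizer settings (Adam, betas=(0.9, 0.999), epsilon=1e-08) match the Trainer defaults, so they need no explicit flags; `output_dir` and `evaluation_strategy` are assumptions not stated in the card, though the per-epoch validation losses below suggest epoch-level evaluation.

```python
# Hedged reconstruction of the training configuration above.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-small-qa-9",  # assumption: not stated in the card
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="epoch",      # inferred from the per-epoch results table
)
# These arguments would be passed to a Seq2SeqTrainer together with the
# (unspecified) train and eval datasets and the base model
# badokorach/flan-t5-small-qa.
```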

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 305  | 0.0747          |
| 0.061         | 2.0   | 610  | 0.0791          |
| 0.061         | 3.0   | 915  | 0.0798          |
| 0.052         | 4.0   | 1220 | 0.0845          |
| 0.0481        | 5.0   | 1525 | 0.0807          |
| 0.0481        | 6.0   | 1830 | 0.0837          |
| 0.0443        | 7.0   | 2135 | 0.0888          |
| 0.0443        | 8.0   | 2440 | 0.0890          |
| 0.0413        | 9.0   | 2745 | 0.0869          |
| 0.0381        | 10.0  | 3050 | 0.0905          |
| 0.0381        | 11.0  | 3355 | 0.0903          |
| 0.0356        | 12.0  | 3660 | 0.0900          |
| 0.0356        | 13.0  | 3965 | 0.0915          |
| 0.0341        | 14.0  | 4270 | 0.0937          |
| 0.0325        | 15.0  | 4575 | 0.0949          |
| 0.0325        | 16.0  | 4880 | 0.0943          |
| 0.0306        | 17.0  | 5185 | 0.0953          |
| 0.0306        | 18.0  | 5490 | 0.0948          |
| 0.0301        | 19.0  | 5795 | 0.0966          |
| 0.0288        | 20.0  | 6100 | 0.0969          |
| 0.0288        | 21.0  | 6405 | 0.0976          |
| 0.0279        | 22.0  | 6710 | 0.0987          |
| 0.0275        | 23.0  | 7015 | 0.0984          |
| 0.0275        | 24.0  | 7320 | 0.0975          |
| 0.027         | 25.0  | 7625 | 0.0979          |
| 0.027         | 26.0  | 7930 | 0.0984          |
| 0.0261        | 27.0  | 8235 | 0.0991          |
| 0.026         | 28.0  | 8540 | 0.0992          |
| 0.026         | 29.0  | 8845 | 0.0990          |
| 0.0259        | 30.0  | 9150 | 0.0989          |

### Framework versions

- Transformers 4.33.3
- PyTorch 2.0.1+cu118
- Tokenizers 0.13.3
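
If you need to match the reported results, pinning the environment to these versions may matter; a quick sanity check:

```python
# Compare the local environment against the versions listed above.
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # 4.33.3 on the card
print("PyTorch:", torch.__version__)              # 2.0.1+cu118 on the card
print("Tokenizers:", tokenizers.__version__)      # 0.13.3 on the card
```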