flan-t5-small-20-epochs-fine-tune

This model is a fine-tuned version of google/flan-t5-small on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1136
  • Accuracy: 1
  • F1 Micro: 1
  • F1 Macro: 1
  • F1 Weighted: 1
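
The checkpoint can be loaded with the standard Transformers seq2seq classes. Since this card does not document the task, prompt format, or label set, the snippet below is only a minimal loading-and-generation sketch; the input text is purely illustrative.

```python
# Minimal usage sketch; the task and prompt format are not documented in this card,
# so the input string below is an illustrative placeholder, not the intended prompt.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("xshubhamx/flan-t5-small-20-epochs-fine-tune")
model = AutoModelForSeq2SeqLM.from_pretrained("xshubhamx/flan-t5-small-20-epochs-fine-tune")

inputs = tokenizer("example input text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```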

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a configuration sketch in code follows the list:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
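
For reference, these settings roughly correspond to the Seq2SeqTrainingArguments shown below. This is a reconstruction sketch, not the original training script: the dataset, preprocessing, and metric computation are not documented in the card, so `train_ds` and `eval_ds` are hypothetical placeholders.

```python
# Reconstruction sketch of the training configuration listed above.
# Dataset names and the per-epoch evaluation strategy are assumptions;
# they are not specified in the model card.
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
    set_seed,
)

set_seed(42)

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-small-20-epochs-fine-tune",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    evaluation_strategy="epoch",  # assumption: per-epoch evaluation, matching the results table
)

# trainer = Seq2SeqTrainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_ds,  # hypothetical tokenized training split
#     eval_dataset=eval_ds,    # hypothetical tokenized evaluation split
#     tokenizer=tokenizer,
# )
# trainer.train()
```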

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1 Micro | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:--------:|:-----------:|
| 2.0193        | 1.0   | 643   | 1.3910          | 1        | 1        | 1        | 1           |
| 1.3609        | 2.0   | 1286  | 0.9353          | 1        | 1        | 1        | 1           |
| 1.0198        | 3.0   | 1929  | 0.8408          | 1        | 1        | 1        | 1           |
| 0.7264        | 4.0   | 2572  | 0.7945          | 1        | 1        | 1        | 1           |
| 0.6595        | 5.0   | 3215  | 0.7847          | 1        | 1        | 1        | 1           |
| 0.5689        | 6.0   | 3858  | 0.7898          | 1        | 1        | 1        | 1           |
| 0.4745        | 7.0   | 4501  | 0.7969          | 1        | 1        | 1        | 1           |
| 0.4204        | 8.0   | 5144  | 0.8213          | 1        | 1        | 1        | 1           |
| 0.403         | 9.0   | 5787  | 0.8726          | 1        | 1        | 1        | 1           |
| 0.3578        | 10.0  | 6430  | 0.8777          | 1        | 1        | 1        | 1           |
| 0.3238        | 11.0  | 7073  | 0.9143          | 1        | 1        | 1        | 1           |
| 0.287         | 12.0  | 7716  | 0.9656          | 1        | 1        | 1        | 1           |
| 0.2903        | 13.0  | 8359  | 0.9580          | 1        | 1        | 1        | 1           |
| 0.2463        | 14.0  | 9002  | 1.0306          | 1        | 1        | 1        | 1           |
| 0.2318        | 15.0  | 9645  | 1.0428          | 1        | 1        | 1        | 1           |
| 0.2265        | 16.0  | 10288 | 1.0483          | 1        | 1        | 1        | 1           |
| 0.1954        | 17.0  | 10931 | 1.0825          | 1        | 1        | 1        | 1           |
| 0.191         | 18.0  | 11574 | 1.0972          | 1        | 1        | 1        | 1           |
| 0.1774        | 19.0  | 12217 | 1.1163          | 1        | 1        | 1        | 1           |
| 0.1847        | 20.0  | 12860 | 1.1136          | 1        | 1        | 1        | 1           |

Framework versions

  • Transformers 4.38.2
  • Pytorch 2.1.2
  • Datasets 2.1.0
  • Tokenizers 0.15.2