
results

This model is a fine-tuned version of google/flan-t5-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 2.2762
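
Assuming the reported loss is the mean token-level cross-entropy that the Hugging Face Trainer logs by default (an assumption; the card does not specify), this corresponds to a perplexity of roughly 9.7:

```python
import math

# Perplexity from cross-entropy loss. Assumption: the eval loss above is
# mean token-level cross-entropy, as logged by the Hugging Face Trainer.
eval_loss = 2.2762
print(math.exp(eval_loss))  # ~9.74
```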

Model description

More information needed

Intended uses & limitations

More information needed
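
Pending documentation from the author, here is a minimal inference sketch. Assumptions: the checkpoint is published on the Hub as priyankrathore/results, and it is queried as generic text-to-text generation, since the actual fine-tuning task and prompt format are not documented:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical usage: the model id and the summarization-style prompt are
# assumptions; the card does not document the intended task.
model_id = "priyankrathore/results"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer(
    "Summarize: The quick brown fox jumps over the lazy dog.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```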

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-06
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: linear
  • num_epochs: 6
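
As a rough reconstruction, these settings map onto the Trainer API as sketched below; the output directory is a placeholder, and any argument not listed in this card is assumed to be at its default:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: reproduces the hyperparameters listed above. The actual
# training script is not published; output_dir is a placeholder.
args = Seq2SeqTrainingArguments(
    output_dir="results",          # placeholder
    learning_rate=1e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=6,
)
```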

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4645        | 0.3559 | 100  | 2.3210          |
| 2.4822        | 0.7117 | 200  | 2.3128          |
| 2.4995        | 1.0676 | 300  | 2.3071          |
| 2.6056        | 1.4235 | 400  | 2.3002          |
| 2.4679        | 1.7794 | 500  | 2.2958          |
| 2.5161        | 2.1352 | 600  | 2.2915          |
| 2.4266        | 2.4911 | 700  | 2.2890          |
| 2.5262        | 2.8470 | 800  | 2.2863          |
| 2.503         | 3.2028 | 900  | 2.2839          |
| 2.4655        | 3.5587 | 1000 | 2.2821          |
| 2.4776        | 3.9146 | 1100 | 2.2803          |
| 2.5122        | 4.2705 | 1200 | 2.2788          |
| 2.4529        | 4.6263 | 1300 | 2.2777          |
| 2.4557        | 4.9822 | 1400 | 2.2768          |
| 2.4035        | 5.3381 | 1500 | 2.2763          |
| 2.3816        | 5.6940 | 1600 | 2.2762          |

Framework versions

  • Transformers 4.46.3
  • PyTorch 2.5.1+cu121
  • Datasets 2.16.0
  • Tokenizers 0.20.3
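
To check a local environment against these versions (a sketch; all four packages expose `__version__`):

```python
import datasets, tokenizers, torch, transformers

# Expected versions per this card.
print(transformers.__version__)  # 4.46.3
print(torch.__version__)         # 2.5.1+cu121
print(datasets.__version__)      # 2.16.0
print(tokenizers.__version__)    # 0.20.3
```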
Model size

248M parameters, stored as F32 safetensors.
