
fine-tuned-DatasetQAS-Squad-ID-with-indobert-base-uncased-with-ITTL-without-freeze-LR-1e-05

This model is a fine-tuned version of indolem/indobert-base-uncased on the Squad-ID dataset (as indicated by the model name). It achieves the following results on the evaluation set:

  • Loss: 1.5061
  • Exact Match: 48.9695
  • F1: 65.3139
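Since this is an extractive QA checkpoint, it can be loaded with the standard Transformers `question-answering` pipeline. The sketch below is a minimal usage example; the Indonesian question/context strings are illustrative and not from the card, and loading the model triggers a download from the Hub.

```python
from transformers import pipeline

# Model ID as published on the Hugging Face Hub (from this card).
model_id = (
    "muhammadravi251001/"
    "fine-tuned-DatasetQAS-Squad-ID-with-indobert-base-uncased-"
    "with-ITTL-without-freeze-LR-1e-05"
)

# Build an extractive question-answering pipeline; this downloads the weights.
qa = pipeline("question-answering", model=model_id, tokenizer=model_id)

# Example query (Indonesian, matching the Squad-ID training data).
result = qa(
    question="Di mana ibu kota Indonesia?",
    context="Jakarta adalah ibu kota Indonesia dan kota terbesar di negara itu.",
)
print(result["answer"], result["score"])  # an answer span from the context
```

The pipeline returns a dict with `answer`, `score`, `start`, and `end` keys, where `start`/`end` are character offsets into the context.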

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 64
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10

Training results

| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1      |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 2.0131        | 0.5   | 463  | 1.8398          | 39.7325     | 55.0682 |
| 1.8343        | 1.0   | 926  | 1.6799          | 42.9545     | 58.8261 |
| 1.6407        | 1.5   | 1389 | 1.6097          | 45.1502     | 61.1235 |
| 1.6021        | 2.0   | 1852 | 1.5634          | 46.0167     | 62.5172 |
| 1.4841        | 2.5   | 2315 | 1.5438          | 46.5971     | 63.4037 |
| 1.5117        | 3.0   | 2778 | 1.5080          | 47.2028     | 64.1405 |
| 1.3752        | 3.5   | 3241 | 1.5149          | 47.6487     | 64.3195 |
| 1.3491        | 4.0   | 3704 | 1.4956          | 47.8927     | 64.4993 |
| 1.3068        | 4.5   | 4167 | 1.4951          | 48.0861     | 64.7876 |
| 1.2916        | 5.0   | 4630 | 1.4895          | 48.3722     | 64.9676 |
| 1.2593        | 5.5   | 5093 | 1.4954          | 48.5909     | 65.1206 |
| 1.2032        | 6.0   | 5556 | 1.4891          | 48.5236     | 65.0831 |
| 1.1826        | 6.5   | 6019 | 1.4944          | 48.6077     | 65.0162 |
| 1.2159        | 7.0   | 6482 | 1.4870          | 48.9526     | 65.1941 |
| 1.1503        | 7.5   | 6945 | 1.5074          | 48.8180     | 65.3672 |
| 1.1683        | 8.0   | 7408 | 1.4928          | 48.7760     | 65.2063 |
| 1.0898        | 8.5   | 7871 | 1.5141          | 48.7844     | 65.0996 |
| 1.1217        | 9.0   | 8334 | 1.5061          | 48.9695     | 65.3139 |

Framework versions

  • Transformers 4.26.1
  • Pytorch 1.13.1+cu117
  • Datasets 2.2.0
  • Tokenizers 0.13.2