
fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-base-uncased-with-ITTL-without-freeze-LR-1e-05

This model is a fine-tuned version of indolem/indobert-base-uncased on the Indonesian TyDi QA (TyDi-QA-ID) question-answering dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2784
  • Exact Match: 53.4392
  • F1: 68.7244
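
The checkpoint can be loaded for extractive question answering with the Transformers pipeline API. The snippet below is a minimal sketch, assuming the model exposes a standard span-prediction QA head; the question/context pair is purely illustrative.

```python
# Minimal usage sketch (not part of the original card): load the checkpoint
# with the standard question-answering pipeline. The question and context
# below are illustrative examples only.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="muhammadravi251001/fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-base-uncased-with-ITTL-without-freeze-LR-1e-05",
)

result = qa(
    question="Di mana ibu kota Indonesia?",  # "Where is the capital of Indonesia?"
    context="Jakarta adalah ibu kota Indonesia dan kota terbesar di negara itu.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```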

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 32
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
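
For reference, the hyperparameters above correspond roughly to the following TrainingArguments; this is a hedged sketch only, since the actual training script, data preprocessing, and Trainer setup are not part of this card, and output_dir is a placeholder.

```python
# Sketch of TrainingArguments mirroring the hyperparameters listed above.
# Dataset loading, tokenization, and the Trainer/model setup are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",            # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=32,    # effective total train batch size: 128
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```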

Training results

| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1      |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 6.1764        | 0.5   | 19   | 3.7674          | 10.4056     | 23.6332 |
| 6.1764        | 1.0   | 38   | 2.7985          | 19.5767     | 32.6228 |
| 3.8085        | 1.49  | 57   | 2.4169          | 22.0459     | 35.4084 |
| 3.8085        | 1.99  | 76   | 2.2811          | 25.9259     | 38.3963 |
| 3.8085        | 2.49  | 95   | 2.1607          | 28.0423     | 40.3901 |
| 2.3932        | 2.99  | 114  | 2.0488          | 31.0406     | 43.7059 |
| 2.3932        | 3.49  | 133  | 1.9787          | 34.3915     | 46.3655 |
| 2.0772        | 3.98  | 152  | 1.8661          | 37.2134     | 49.1483 |
| 2.0772        | 4.48  | 171  | 1.7893          | 40.2116     | 52.4989 |
| 2.0772        | 4.98  | 190  | 1.7014          | 41.9753     | 54.9197 |
| 1.7645        | 5.48  | 209  | 1.5940          | 44.2681     | 58.2134 |
| 1.7645        | 5.98  | 228  | 1.4972          | 46.2081     | 60.4997 |
| 1.7645        | 6.47  | 247  | 1.4214          | 48.8536     | 63.4371 |
| 1.5035        | 6.97  | 266  | 1.3676          | 50.6173     | 65.4663 |
| 1.5035        | 7.47  | 285  | 1.3357          | 52.2046     | 67.1759 |
| 1.3206        | 7.97  | 304  | 1.3149          | 53.0864     | 68.0698 |
| 1.3206        | 8.47  | 323  | 1.2988          | 53.4392     | 68.3971 |
| 1.3206        | 8.96  | 342  | 1.2894          | 53.6155     | 68.8897 |
| 1.2472        | 9.46  | 361  | 1.2820          | 53.4392     | 68.5835 |
| 1.2472        | 9.96  | 380  | 1.2784          | 53.4392     | 68.7244 |
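
The card does not state how Exact Match and F1 were computed; one common choice for extractive QA is the SQuAD-style metric from the evaluate library, sketched below with illustrative inputs.

```python
# Hedged sketch: computing SQuAD-style Exact Match and F1 with the `evaluate`
# library. This may differ from the exact implementation used for the numbers
# above; the IDs and answers here are illustrative only.
import evaluate

squad_metric = evaluate.load("squad")

predictions = [{"id": "q1", "prediction_text": "Jakarta"}]
references = [{"id": "q1", "answers": {"text": ["Jakarta"], "answer_start": [0]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```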

Framework versions

  • Transformers 4.27.4
  • Pytorch 1.13.1+cu117
  • Datasets 2.2.0
  • Tokenizers 0.13.2