---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv2-base-uncased
tags:
  - generated_from_trainer
model-index:
  - name: layoutlmv2-base-uncased_finetuned_docvqa
    results: []
---

layoutlmv2-base-uncased_finetuned_docvqa

This model is a fine-tuned version of microsoft/layoutlmv2-base-uncased on an unknown dataset (the model name suggests a DocVQA-style document question answering task). It achieves the following results on the evaluation set:

  • Loss: 3.6446
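
For reference, below is a minimal inference sketch, assuming the checkpoint is hosted under a repository id like cfa213/layoutlmv2-base-uncased_finetuned_docvqa and that the usual LayoutLMv2 dependencies (detectron2, pytesseract) are installed; the repository id, image path, and question are placeholders, not details taken from this card.

```python
import torch
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForQuestionAnswering

# Hypothetical repository id; substitute the actual Hub path of this checkpoint.
model_id = "cfa213/layoutlmv2-base-uncased_finetuned_docvqa"

processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForQuestionAnswering.from_pretrained(model_id)

image = Image.open("document.png").convert("RGB")  # any scanned document page
question = "What is the invoice number?"

# The processor runs OCR (pytesseract) to extract words and bounding boxes,
# then encodes them together with the question and the resized image.
encoding = processor(image, question, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**encoding)

# Extractive QA: take the most likely start/end token positions and decode the span.
start_idx = outputs.start_logits.argmax(-1).item()
end_idx = outputs.end_logits.argmax(-1).item()
answer = processor.tokenizer.decode(encoding["input_ids"][0][start_idx : end_idx + 1])
print(answer)
```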

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a TrainingArguments sketch mirroring them appears after the list:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
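
As a rough guide, these hyperparameters map onto Hugging Face TrainingArguments as sketched below. The output directory and the steps-based evaluation/logging cadence are assumptions inferred from the 50-step intervals in the training log, not values stated on this card.

```python
from transformers import TrainingArguments

# Sketch only: reproduces the listed hyperparameters; everything else is assumed.
training_args = TrainingArguments(
    output_dir="layoutlmv2-base-uncased_finetuned_docvqa",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 match the Trainer defaults,
    # so no optimizer overrides are needed here.
    eval_strategy="steps",  # assumed from the 50-step evaluation log below
    eval_steps=50,
    logging_steps=50,
)
```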

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.1899 | 0.2212 | 50 | 4.5198 |
| 4.3841 | 0.4425 | 100 | 3.9910 |
| 4.0167 | 0.6637 | 150 | 3.9030 |
| 3.7906 | 0.8850 | 200 | 3.4974 |
| 3.426 | 1.1062 | 250 | 3.7834 |
| 3.0774 | 1.3274 | 300 | 3.2013 |
| 2.9541 | 1.5487 | 350 | 3.0711 |
| 2.7072 | 1.7699 | 400 | 2.7067 |
| 2.3127 | 1.9912 | 450 | 2.6800 |
| 2.0786 | 2.2124 | 500 | 2.6677 |
| 1.9055 | 2.4336 | 550 | 2.5482 |
| 2.0272 | 2.6549 | 600 | 2.2344 |
| 1.6753 | 2.8761 | 650 | 2.3265 |
| 1.4848 | 3.0973 | 700 | 2.5170 |
| 1.4359 | 3.3186 | 750 | 2.4527 |
| 1.2884 | 3.5398 | 800 | 2.4033 |
| 1.3217 | 3.7611 | 850 | 2.0981 |
| 1.3359 | 3.9823 | 900 | 2.2481 |
| 0.9068 | 4.2035 | 950 | 2.4053 |
| 1.1537 | 4.4248 | 1000 | 2.5739 |
| 0.8742 | 4.6460 | 1050 | 2.5003 |
| 0.9135 | 4.8673 | 1100 | 2.5511 |
| 0.9073 | 5.0885 | 1150 | 2.6724 |
| 0.6596 | 5.3097 | 1200 | 2.6174 |
| 0.7797 | 5.5310 | 1250 | 2.8350 |
| 0.6031 | 5.7522 | 1300 | 3.1642 |
| 0.7287 | 5.9735 | 1350 | 3.0317 |
| 0.4586 | 6.1947 | 1400 | 3.1821 |
| 0.5755 | 6.4159 | 1450 | 3.1222 |
| 0.4174 | 6.6372 | 1500 | 3.6997 |
| 0.2839 | 6.8584 | 1550 | 3.7044 |
| 0.5378 | 7.0796 | 1600 | 3.4223 |
| 0.3549 | 7.3009 | 1650 | 3.3740 |
| 0.2239 | 7.5221 | 1700 | 3.6018 |
| 0.3209 | 7.7434 | 1750 | 3.3689 |
| 0.2594 | 7.9646 | 1800 | 3.6625 |
| 0.1606 | 8.1858 | 1850 | 3.6916 |
| 0.1525 | 8.4071 | 1900 | 3.6299 |
| 0.1104 | 8.6283 | 1950 | 3.7133 |
| 0.3046 | 8.8496 | 2000 | 3.7701 |
| 0.3161 | 9.0708 | 2050 | 3.6224 |
| 0.1331 | 9.2920 | 2100 | 3.6198 |
| 0.2595 | 9.5133 | 2150 | 3.6251 |
| 0.1928 | 9.7345 | 2200 | 3.6359 |
| 0.1465 | 9.9558 | 2250 | 3.6446 |

Framework versions

  • Transformers 4.42.4
  • PyTorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1