
layoutlmv2-base-uncased_finetuned_docvqa

This model is a fine-tuned version of microsoft/layoutlmv2-base-uncased on an unspecified dataset; the model name suggests a DocVQA-style document question answering task. It achieves the following results on the evaluation set:

  • Loss: 4.8270

Model description

More information needed

Intended uses & limitations

More information needed
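
The card does not document usage, but the checkpoint follows the standard LayoutLMv2 extractive question-answering setup. Below is a minimal inference sketch (not from the card): the image path and question are placeholders, and LayoutLMv2Processor additionally needs detectron2, pytesseract, and a Tesseract install at runtime.

```python
import torch
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForQuestionAnswering

# The processor runs OCR internally (requires pytesseract + Tesseract).
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForQuestionAnswering.from_pretrained(
    "zibajoon/20231101_layoutlm2_1.2k_3ep_Doc_A"
)

image = Image.open("document.png").convert("RGB")  # placeholder document image
question = "What is the invoice number?"           # placeholder question

encoding = processor(image, question, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)

# Extractive QA: decode the highest-scoring start/end token span.
start = outputs.start_logits.argmax(-1).item()
end = outputs.end_logits.argmax(-1).item()
answer = processor.tokenizer.decode(encoding.input_ids.squeeze(0)[start : end + 1])
print(answer)
```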

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
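
As a rough reconstruction (not the author's actual script), these settings map onto Hugging Face TrainingArguments as follows. The output directory and evaluation cadence are assumptions, and the Adam betas/epsilon listed above are the library defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlmv2-base-uncased_finetuned_docvqa",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 are the defaults:
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",  # assumed; the card logs eval loss every 50 steps
    eval_steps=50,
)
```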

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.2536 | 0.22 | 50 | 4.4585 |
| 4.2935 | 0.44 | 100 | 4.1809 |
| 4.0821 | 0.66 | 150 | 3.7822 |
| 3.8393 | 0.88 | 200 | 3.4892 |
| 3.4613 | 1.11 | 250 | 3.4728 |
| 2.9792 | 1.33 | 300 | 3.0083 |
| 2.9501 | 1.55 | 350 | 2.9516 |
| 2.8889 | 1.77 | 400 | 2.7119 |
| 2.4728 | 1.99 | 450 | 2.6027 |
| 1.9523 | 2.21 | 500 | 2.5411 |
| 1.9312 | 2.43 | 550 | 2.5406 |
| 1.8471 | 2.65 | 600 | 2.4484 |
| 1.7456 | 2.88 | 650 | 1.9633 |
| 1.4404 | 3.1 | 700 | 2.5037 |
| 1.3567 | 3.32 | 750 | 2.4763 |
| 1.3247 | 3.54 | 800 | 2.3191 |
| 1.2739 | 3.76 | 850 | 2.4416 |
| 1.2548 | 3.98 | 900 | 2.3970 |
| 0.8073 | 4.2 | 950 | 2.3073 |
| 0.8992 | 4.42 | 1000 | 2.6353 |
| 0.8439 | 4.65 | 1050 | 2.7353 |
| 0.9051 | 4.87 | 1100 | 2.4587 |
| 0.7201 | 5.09 | 1150 | 3.1759 |
| 0.6394 | 5.31 | 1200 | 3.2409 |
| 0.6142 | 5.53 | 1250 | 3.0543 |
| 0.6385 | 5.75 | 1300 | 2.5017 |
| 0.5985 | 5.97 | 1350 | 3.1824 |
| 0.2824 | 6.19 | 1400 | 3.3628 |
| 0.4695 | 6.42 | 1450 | 2.9493 |
| 0.6322 | 6.64 | 1500 | 2.8337 |
| 0.5114 | 6.86 | 1550 | 3.2387 |
| 0.4555 | 7.08 | 1600 | 3.3994 |
| 0.3308 | 7.3 | 1650 | 3.3562 |
| 0.2209 | 7.52 | 1700 | 3.5050 |
| 0.4407 | 7.74 | 1750 | 3.8571 |
| 0.442 | 7.96 | 1800 | 3.5946 |
| 0.2722 | 8.19 | 1850 | 3.6526 |
| 0.423 | 8.41 | 1900 | 3.1283 |
| 0.316 | 8.63 | 1950 | 3.4300 |
| 0.3435 | 8.85 | 2000 | 3.6418 |
| 0.2815 | 9.07 | 2050 | 3.4021 |
| 0.2051 | 9.29 | 2100 | 3.7000 |
| 0.2442 | 9.51 | 2150 | 3.4389 |
| 0.1833 | 9.73 | 2200 | 3.7243 |
| 0.2408 | 9.96 | 2250 | 3.6520 |
| 0.1971 | 10.18 | 2300 | 3.5589 |
| 0.196 | 10.4 | 2350 | 3.7747 |
| 0.2511 | 10.62 | 2400 | 3.5574 |
| 0.1473 | 10.84 | 2450 | 3.7469 |
| 0.1141 | 11.06 | 2500 | 3.7303 |
| 0.1058 | 11.28 | 2550 | 4.0495 |
| 0.1845 | 11.5 | 2600 | 3.8454 |
| 0.1548 | 11.73 | 2650 | 4.2611 |
| 0.1009 | 11.95 | 2700 | 4.1706 |
| 0.1412 | 12.17 | 2750 | 4.2072 |
| 0.0809 | 12.39 | 2800 | 4.2862 |
| 0.1126 | 12.61 | 2850 | 4.0895 |
| 0.0736 | 12.83 | 2900 | 4.2500 |
| 0.0525 | 13.05 | 2950 | 4.3110 |
| 0.0621 | 13.27 | 3000 | 4.1005 |
| 0.1199 | 13.5 | 3050 | 4.1956 |
| 0.0837 | 13.72 | 3100 | 4.6910 |
| 0.1912 | 13.94 | 3150 | 4.2988 |
| 0.0891 | 14.16 | 3200 | 4.3609 |
| 0.0478 | 14.38 | 3250 | 4.3829 |
| 0.0727 | 14.6 | 3300 | 4.2299 |
| 0.1063 | 14.82 | 3350 | 3.9861 |
| 0.0392 | 15.04 | 3400 | 4.1840 |
| 0.1334 | 15.27 | 3450 | 4.3275 |
| 0.0178 | 15.49 | 3500 | 4.3959 |
| 0.0917 | 15.71 | 3550 | 4.4633 |
| 0.0477 | 15.93 | 3600 | 4.5239 |
| 0.0058 | 16.15 | 3650 | 4.5556 |
| 0.0719 | 16.37 | 3700 | 4.5381 |
| 0.0412 | 16.59 | 3750 | 4.5468 |
| 0.0051 | 16.81 | 3800 | 4.6192 |
| 0.0437 | 17.04 | 3850 | 4.6033 |
| 0.0062 | 17.26 | 3900 | 4.7104 |
| 0.0374 | 17.48 | 3950 | 4.5678 |
| 0.0113 | 17.7 | 4000 | 4.6266 |
| 0.0616 | 17.92 | 4050 | 4.6180 |
| 0.0082 | 18.14 | 4100 | 4.7478 |
| 0.0034 | 18.36 | 4150 | 4.7665 |
| 0.0395 | 18.58 | 4200 | 4.7568 |
| 0.0108 | 18.81 | 4250 | 4.7385 |
| 0.015 | 19.03 | 4300 | 4.8266 |
| 0.1061 | 19.25 | 4350 | 4.8090 |
| 0.0181 | 19.47 | 4400 | 4.8226 |
| 0.0055 | 19.69 | 4450 | 4.8228 |
| 0.0134 | 19.91 | 4500 | 4.8270 |
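
Validation loss reaches its minimum of 1.9633 at step 650 (epoch 2.88) and trends upward for the remaining epochs while training loss falls toward zero, a classic overfitting signature. A hedged sketch of how best-checkpoint selection and early stopping could be wired into the same Trainer setup (the patience value and output directory are illustrative, not from the card):

```python
from transformers import EarlyStoppingCallback, TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlmv2-base-uncased_finetuned_docvqa",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    evaluation_strategy="steps",
    eval_steps=50,
    save_strategy="steps",           # must match evaluation_strategy for
    save_steps=50,                   # load_best_model_at_end to work
    load_best_model_at_end=True,     # restores the best (step-650-style) checkpoint
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)

# Stop once eval loss fails to improve for 5 consecutive evaluations;
# pass callbacks=[early_stop] to the Trainer alongside model and datasets.
early_stop = EarlyStoppingCallback(early_stopping_patience=5)
```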

Framework versions

  • Transformers 4.34.1
  • Pytorch 2.0.1+cu118
  • Datasets 2.10.1
  • Tokenizers 0.14.1
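
For reproducibility, these versions can be pinned in a requirements file (a suggested pin set, not from the card; the card's PyTorch build is the CUDA 11.8 wheel):

```
# requirements.txt matching the framework versions listed above
transformers==4.34.1
torch==2.0.1        # card used the 2.0.1+cu118 (CUDA 11.8) build
datasets==2.10.1
tokenizers==0.14.1
```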