# layoutlm-funsd
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the FUNSD dataset. It achieves the following results on the evaluation set:
- Loss: 0.6604
- Answer: precision 0.6802, recall 0.7886, F1 0.7304 (support: 809)
- Header: precision 0.2674, recall 0.1933, F1 0.2244 (support: 119)
- Question: precision 0.7274, recall 0.7991, F1 0.7615 (support: 1065)
- Overall Precision: 0.6892
- Overall Recall: 0.7587
- Overall F1: 0.7222
- Overall Accuracy: 0.8010
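Each per-entity F1 above is the harmonic mean of that entity's precision and recall. As a quick sanity check, a minimal sketch using the Answer numbers reported above:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (0.0 when both are zero)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Answer entity, final evaluation (values reported above)
answer_f1 = f1_score(0.6801705756929638, 0.788627935723115)
print(round(answer_f1, 4))  # 0.7304
```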
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
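With `lr_scheduler_type: linear` and no warmup (an assumption; the card lists no warmup steps), the learning rate decays linearly from 3e-05 to 0 over the 75 total optimization steps (5 steps per epoch × 15 epochs, per the results table). A minimal sketch of that schedule:

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 3e-5) -> float:
    """Linear decay from base_lr to 0 with no warmup (mirrors transformers'
    get_linear_schedule_with_warmup with num_warmup_steps=0)."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

TOTAL_STEPS = 75  # 5 optimization steps per epoch x 15 epochs

print(linear_lr(0, TOTAL_STEPS))   # base learning rate at the start
print(linear_lr(75, TOTAL_STEPS))  # decayed to zero at the end
```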
### Training results
Per-entity cells give precision / recall / F1 (rounded to four decimals); entity supports are 809 (Answer), 119 (Header), and 1065 (Question) throughout.

| Training Loss | Epoch | Step | Validation Loss | Answer (P / R / F1) | Header (P / R / F1) | Question (P / R / F1) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 1.8511 | 1.0 | 5 | 1.7020 | 0.0163 / 0.0198 / 0.0179 | 0.0000 / 0.0000 / 0.0000 | 0.1517 / 0.1249 / 0.1370 | 0.0801 | 0.0748 | 0.0773 | 0.3370 |
| 1.6307 | 2.0 | 10 | 1.5168 | 0.0252 / 0.0272 / 0.0262 | 0.0000 / 0.0000 / 0.0000 | 0.3611 / 0.3296 / 0.3446 | 0.2023 | 0.1872 | 0.1944 | 0.4120 |
| 1.4559 | 3.0 | 15 | 1.3180 | 0.1372 / 0.1619 / 0.1485 | 0.0000 / 0.0000 / 0.0000 | 0.4322 / 0.4967 / 0.4622 | 0.3029 | 0.3312 | 0.3164 | 0.5472 |
| 1.2506 | 4.0 | 20 | 1.1192 | 0.3774 / 0.5043 / 0.4317 | 0.0000 / 0.0000 / 0.0000 | 0.4888 / 0.5925 / 0.5357 | 0.4380 | 0.5213 | 0.4761 | 0.6396 |
| 1.0483 | 5.0 | 25 | 0.9527 | 0.4738 / 0.6032 / 0.5307 | 0.0000 / 0.0000 / 0.0000 | 0.5772 / 0.6883 / 0.6278 | 0.5299 | 0.6126 | 0.5683 | 0.7009 |
| 0.9059 | 6.0 | 30 | 0.8414 | 0.5865 / 0.6873 / 0.6329 | 0.0000 / 0.0000 / 0.0000 | 0.6246 / 0.7390 / 0.6770 | 0.5988 | 0.6739 | 0.6341 | 0.7454 |
| 0.7949 | 7.0 | 35 | 0.7834 | 0.6092 / 0.7689 / 0.6798 | 0.0238 / 0.0084 / 0.0124 | 0.6787 / 0.7437 / 0.7097 | 0.6345 | 0.7100 | 0.6701 | 0.7580 |
| 0.7124 | 8.0 | 40 | 0.7377 | 0.6302 / 0.7478 / 0.6840 | 0.0645 / 0.0336 / 0.0442 | 0.6704 / 0.7850 / 0.7232 | 0.6368 | 0.7250 | 0.6781 | 0.7760 |
| 0.6462 | 9.0 | 45 | 0.7088 | 0.6339 / 0.7726 / 0.6964 | 0.1781 / 0.1092 / 0.1354 | 0.7064 / 0.7793 / 0.7411 | 0.6571 | 0.7366 | 0.6946 | 0.7811 |
| 0.6024 | 10.0 | 50 | 0.6971 | 0.6543 / 0.7651 / 0.7054 | 0.1875 / 0.1261 / 0.1508 | 0.7120 / 0.7915 / 0.7497 | 0.6683 | 0.7411 | 0.7028 | 0.7907 |
| 0.573 | 11.0 | 55 | 0.6758 | 0.6629 / 0.7899 / 0.7208 | 0.2436 / 0.1597 / 0.1929 | 0.7289 / 0.7878 / 0.7572 | 0.6826 | 0.7511 | 0.7152 | 0.7949 |
| 0.5332 | 12.0 | 60 | 0.6668 | 0.6681 / 0.7936 / 0.7254 | 0.2875 / 0.1933 / 0.2312 | 0.7268 / 0.7944 / 0.7591 | 0.6853 | 0.7582 | 0.7199 | 0.7970 |
| 0.5127 | 13.0 | 65 | 0.6634 | 0.6741 / 0.7874 / 0.7263 | 0.2674 / 0.1933 / 0.2244 | 0.7270 / 0.7953 / 0.7596 | 0.6862 | 0.7561 | 0.7195 | 0.7994 |
| 0.4919 | 14.0 | 70 | 0.6614 | 0.6752 / 0.7862 / 0.7264 | 0.2706 / 0.1933 / 0.2255 | 0.7243 / 0.7991 / 0.7598 | 0.6857 | 0.7577 | 0.7199 | 0.8017 |
| 0.4832 | 15.0 | 75 | 0.6604 | 0.6802 / 0.7886 / 0.7304 | 0.2674 / 0.1933 / 0.2244 | 0.7274 / 0.7991 / 0.7615 | 0.6892 | 0.7587 | 0.7222 | 0.8010 |
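The Overall columns are micro-averages across the three entity types. As an illustrative check (assuming the integer true-positive and prediction counts can be recovered by rounding from the reported per-entity precision, recall, and support):

```python
# (precision, recall, support) per entity type, final epoch, from the table above
final = {
    "Answer":   (0.6801705756929638, 0.788627935723115, 809),
    "Header":   (0.26744186046511625, 0.19327731092436976, 119),
    "Question": (0.7273504273504273, 0.7990610328638498, 1065),
}

tp = pred = gold = 0
for p, r, n in final.values():
    entity_tp = round(r * n)      # true positives = recall x support
    tp += entity_tp
    pred += round(entity_tp / p)  # predicted spans = true positives / precision
    gold += n

micro_p = tp / pred
micro_r = tp / gold
micro_f1 = 2 * micro_p * micro_r / (micro_p + micro_r)
print(round(micro_p, 4), round(micro_r, 4), round(micro_f1, 4))  # 0.6892 0.7587 0.7222
```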
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0