layoutlm-FUNSDxSynthetic
This model is a fine-tuned version of microsoft/layoutlm-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set (a brief inference sketch follows the metrics):
- Loss: 0.6138
- Header: precision 0.4098, recall 0.3012, F1 0.3472 (support: 83)
- Answer: precision 0.4553, recall 0.5707, F1 0.5065 (support: 205)
- Question: precision 0.3793, recall 0.4286, F1 0.4024 (support: 231)
- Overall Precision: 0.4162
- Overall Recall: 0.4644
- Overall F1: 0.4390
- Overall Accuracy: 0.7750
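A minimal inference sketch, assuming the checkpoint loads with the standard LayoutLM classes from transformers; the words and 0-1000-normalized bounding boxes below are invented stand-ins for real OCR output:

```python
# A minimal sketch, not part of the original card. The example words/boxes
# are hypothetical; in practice they come from an OCR engine.
import torch
from transformers import LayoutLMForTokenClassification, LayoutLMTokenizer

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained("pabloma09/layoutlm-FUNSDxSynthetic")

words = ["Invoice", "Number:", "12345"]                               # hypothetical OCR words
boxes = [[72, 60, 180, 80], [190, 60, 280, 80], [290, 60, 360, 80]]  # per-word boxes, 0-1000 scale

encoding = tokenizer(" ".join(words), return_tensors="pt")

# LayoutLM needs one box per word piece, plus dummy boxes for [CLS] and [SEP].
token_boxes = [[0, 0, 0, 0]]
for word, box in zip(words, boxes):
    token_boxes.extend([box] * len(tokenizer.tokenize(word)))
token_boxes.append([1000, 1000, 1000, 1000])
encoding["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**encoding).logits
predicted = [model.config.id2label[i] for i in logits.argmax(-1).squeeze().tolist()]
tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"].squeeze().tolist())
print(list(zip(tokens, predicted)))
```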
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a TrainingArguments sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 9
- mixed_precision_training: Native AMP
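A minimal sketch, assuming the hyperparameters above map directly onto transformers.TrainingArguments; output_dir is a placeholder, and the dataset and Trainer wiring are omitted:

```python
# Sketch only: maps the listed hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlm-FUNSDxSynthetic",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=9,
    fp16=True,                              # Native AMP mixed precision
)
```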
Training results
| Training Loss | Epoch | Step | Validation Loss | Header (P / R / F1) | Answer (P / R / F1) | Question (P / R / F1) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.198 | 1.0 | 12 | 1.0274 | 0.0000 / 0.0000 / 0.0000 | 0.1179 / 0.3220 / 0.1725 | 0.1173 / 0.2814 / 0.1656 | 0.1176 | 0.2524 | 0.1604 | 0.6381 |
| 0.9302 | 2.0 | 24 | 0.7826 | 0.1667 / 0.0120 / 0.0225 | 0.2184 / 0.4390 / 0.2917 | 0.2109 / 0.3506 / 0.2634 | 0.2145 | 0.3314 | 0.2604 | 0.7183 |
| 0.7111 | 3.0 | 36 | 0.6407 | 0.1795 / 0.0843 / 0.1148 | 0.3432 / 0.5073 / 0.4094 | 0.3039 / 0.3723 / 0.3346 | 0.3152 | 0.3796 | 0.3444 | 0.7782 |
| 0.5314 | 4.0 | 48 | 0.6422 | 0.2167 / 0.1566 / 0.1818 | 0.3985 / 0.5268 / 0.4538 | 0.3730 / 0.4069 / 0.3892 | 0.3688 | 0.4143 | 0.3902 | 0.7626 |
| 0.4782 | 5.0 | 60 | 0.5865 | 0.3115 / 0.2289 / 0.2639 | 0.4036 / 0.5415 / 0.4625 | 0.3370 / 0.3983 / 0.3651 | 0.3645 | 0.4277 | 0.3936 | 0.7784 |
| 0.3789 | 6.0 | 72 | 0.6069 | 0.3220 / 0.2289 / 0.2676 | 0.4368 / 0.5561 / 0.4893 | 0.3740 / 0.4113 / 0.3918 | 0.3972 | 0.4393 | 0.4172 | 0.7696 |
| 0.3423 | 7.0 | 84 | 0.6048 | 0.3750 / 0.2530 / 0.3022 | 0.4291 / 0.5463 / 0.4807 | 0.3922 / 0.4329 / 0.4115 | 0.4073 | 0.4489 | 0.4271 | 0.7782 |
| 0.2995 | 8.0 | 96 | 0.6146 | 0.3710 / 0.2771 / 0.3172 | 0.4335 / 0.5561 / 0.4872 | 0.3788 / 0.4329 / 0.4040 | 0.4024 | 0.4566 | 0.4278 | 0.7758 |
| 0.2774 | 9.0 | 108 | 0.6138 | 0.4098 / 0.3012 / 0.3472 | 0.4553 / 0.5707 / 0.5065 | 0.3793 / 0.4286 / 0.4024 | 0.4162 | 0.4644 | 0.4390 | 0.7750 |

Per-entity support is constant across epochs: Header 83, Answer 205, Question 231.
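The card does not state how these entity-level scores were computed; the conventional choice for FUNSD-style token classification is the seqeval library, as in this illustrative sketch (the tag sequences are invented, not taken from the evaluation set):

```python
# Illustrative only: seqeval derives entity-level precision/recall/F1/support
# from BIO-tagged sequences; these toy sequences are made up.
from seqeval.metrics import classification_report, f1_score

y_true = [["B-QUESTION", "I-QUESTION", "B-ANSWER", "O", "B-HEADER"]]
y_pred = [["B-QUESTION", "I-QUESTION", "B-ANSWER", "O", "O"]]

print(classification_report(y_true, y_pred))  # per-entity P/R/F1/support
print("overall F1:", f1_score(y_true, y_pred))
```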
Framework versions
- Transformers 4.49.0
- PyTorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0