# layoutlm-funsd

This model is a fine-tuned version of microsoft/layoutlm-base-uncased on the FUNSD dataset. It achieves the following results on the evaluation set:
- Loss: 0.6862
- Overall Accuracy: 0.8087

| Label | Precision | Recall | F1 | Support |
|---|---|---|---|---|
| Answer | 0.7194 | 0.8146 | 0.7641 | 809 |
| Header | 0.3258 | 0.3613 | 0.3426 | 119 |
| Question | 0.7848 | 0.8357 | 0.8095 | 1065 |
| Overall | 0.7296 | 0.7988 | 0.7626 | 1993 |
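The overall precision, recall, and F1 are the micro-average of the per-label scores. As a sanity check, the entity counts can be recovered from each label's reported precision, recall, and support and then pooled (a quick sketch using only the numbers above):

```python
# Verify that the overall precision/recall/F1 are the micro-average of the
# per-label scores: recover true-positive and predicted-entity counts from
# each label's precision, recall, and support, then pool them.
per_label = {
    "Answer":   {"precision": 0.7194323144104804,  "recall": 0.8145859085290482,  "number": 809},
    "Header":   {"precision": 0.32575757575757575, "recall": 0.36134453781512604, "number": 119},
    "Question": {"precision": 0.7848324514991182,  "recall": 0.8356807511737089,  "number": 1065},
}

# recall * support = true positives; true positives / precision = predictions
tp = sum(round(m["recall"] * m["number"]) for m in per_label.values())
predicted = sum(round(m["recall"] * m["number"] / m["precision"]) for m in per_label.values())
support = sum(m["number"] for m in per_label.values())

precision = tp / predicted
recall = tp / support
f1 = 2 * precision * recall / (precision + recall)
print(round(precision, 4), round(recall, 4), round(f1, 4))  # 0.7296 0.7988 0.7626
```

The pooled counts reproduce the reported overall scores exactly, which confirms they come from micro-averaging (as done by seqeval) rather than from a macro-average over the three labels.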
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
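These settings map onto the 🤗 Transformers `TrainingArguments` roughly as follows. This is a hedged sketch, not the training script used for this card; in particular `output_dir` is an assumption:

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="layoutlm-funsd",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,  # "Native AMP" mixed precision; requires a CUDA device
)
```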
### Training results
Per-label cells list precision / recall / F1, rounded to four decimal places; the supports are constant across epochs (Answer: 809, Header: 119, Question: 1065).

| Training Loss | Epoch | Step | Validation Loss | Answer (P / R / F1) | Header (P / R / F1) | Question (P / R / F1) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.8086 | 1.0 | 10 | 1.6430 | 0.0227 / 0.0173 / 0.0196 | 0.0000 / 0.0000 / 0.0000 | 0.1643 / 0.0967 / 0.1217 | 0.0941 | 0.0587 | 0.0723 | 0.3405 |
| 1.4748 | 2.0 | 20 | 1.2786 | 0.1055 / 0.1199 / 0.1123 | 0.0000 / 0.0000 / 0.0000 | 0.4441 / 0.5746 / 0.5010 | 0.3084 | 0.3557 | 0.3304 | 0.5588 |
| 1.1284 | 3.0 | 30 | 0.9553 | 0.3946 / 0.4536 / 0.4221 | 0.0889 / 0.0336 / 0.0488 | 0.5840 / 0.7052 / 0.6389 | 0.4962 | 0.5630 | 0.5275 | 0.6832 |
| 0.8672 | 4.0 | 40 | 0.7918 | 0.5624 / 0.7244 / 0.6332 | 0.1389 / 0.0840 / 0.1047 | 0.6632 / 0.7230 / 0.6918 | 0.6004 | 0.6854 | 0.6401 | 0.7442 |
| 0.6765 | 5.0 | 50 | 0.7113 | 0.6315 / 0.7540 / 0.6873 | 0.2471 / 0.1765 / 0.2059 | 0.6751 / 0.8038 / 0.7338 | 0.6412 | 0.7461 | 0.6897 | 0.7830 |
| 0.5707 | 6.0 | 60 | 0.6747 | 0.6663 / 0.7874 / 0.7218 | 0.2614 / 0.1933 / 0.2222 | 0.7429 / 0.8113 / 0.7756 | 0.6905 | 0.7647 | 0.7257 | 0.7920 |
| 0.4871 | 7.0 | 70 | 0.6502 | 0.6859 / 0.8072 / 0.7416 | 0.2523 / 0.2269 / 0.2389 | 0.7590 / 0.8160 / 0.7864 | 0.7028 | 0.7772 | 0.7381 | 0.8028 |
| 0.4357 | 8.0 | 80 | 0.6594 | 0.6677 / 0.8146 / 0.7339 | 0.2358 / 0.2437 / 0.2397 | 0.7509 / 0.8178 / 0.7829 | 0.6868 | 0.7822 | 0.7314 | 0.8064 |
| 0.3867 | 9.0 | 90 | 0.6566 | 0.7129 / 0.8010 / 0.7544 | 0.2913 / 0.3109 / 0.3008 | 0.7798 / 0.8282 / 0.8033 | 0.7231 | 0.7863 | 0.7534 | 0.8070 |
| 0.3779 | 10.0 | 100 | 0.6605 | 0.6875 / 0.8158 / 0.7462 | 0.2720 / 0.2857 / 0.2787 | 0.7766 / 0.8357 / 0.8051 | 0.7100 | 0.7948 | 0.7500 | 0.8088 |
| 0.3227 | 11.0 | 110 | 0.6706 | 0.7104 / 0.8096 / 0.7568 | 0.2910 / 0.3277 / 0.3083 | 0.7726 / 0.8357 / 0.8029 | 0.7174 | 0.7948 | 0.7541 | 0.8059 |
| 0.3081 | 12.0 | 120 | 0.6774 | 0.7123 / 0.8171 / 0.7611 | 0.3150 / 0.3361 / 0.3252 | 0.7835 / 0.8291 / 0.8057 | 0.7259 | 0.7948 | 0.7588 | 0.8102 |
| 0.2897 | 13.0 | 130 | 0.6820 | 0.7114 / 0.8133 / 0.7589 | 0.3200 / 0.3361 / 0.3279 | 0.7851 / 0.8338 / 0.8087 | 0.7272 | 0.7958 | 0.7599 | 0.8084 |
| 0.2651 | 14.0 | 140 | 0.6845 | 0.7215 / 0.8133 / 0.7647 | 0.3333 / 0.3613 / 0.3468 | 0.7856 / 0.8394 / 0.8116 | 0.7320 | 0.8003 | 0.7646 | 0.8082 |
| 0.2678 | 15.0 | 150 | 0.6862 | 0.7194 / 0.8146 / 0.7641 | 0.3258 / 0.3613 / 0.3426 | 0.7848 / 0.8357 / 0.8095 | 0.7296 | 0.7988 | 0.7626 | 0.8087 |
### Framework versions

- Transformers 4.48.2
- PyTorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
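## How to use

A minimal token-classification sketch using the 🤗 Transformers API. The example words and bounding boxes below are illustrative; in practice they would come from an OCR engine, with boxes normalized to the 0-1000 coordinate space LayoutLM expects:

```python
import torch
from transformers import AutoTokenizer, LayoutLMForTokenClassification

model_id = "apriliantono/layoutlm-funsd"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LayoutLMForTokenClassification.from_pretrained(model_id)

# Illustrative OCR output: words and their 0-1000-normalized boxes.
words = ["Date:", "January", "5"]
boxes = [[57, 34, 112, 46], [120, 34, 190, 46], [196, 34, 210, 46]]

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Expand word-level boxes to token level; special tokens get [0, 0, 0, 0].
token_boxes = [
    boxes[idx] if idx is not None else [0, 0, 0, 0]
    for idx in enc.word_ids()
]
enc["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**enc).logits
predictions = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```

Each word's box is repeated for every subword token it produces, which is the usual convention for LayoutLM token classification.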
## Model tree for apriliantono/layoutlm-funsd

- Base model: microsoft/layoutlm-base-uncased