---
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_trainer
datasets:
- funsd
model-index:
- name: layoutlm-funsd2
results: []
---
# layoutlm-funsd2
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the FUNSD dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6614
- Answer: precision 0.6684, recall 0.8047, F1 0.7302 (support: 809)
- Header: precision 0.3130, recall 0.3025, F1 0.3077 (support: 119)
- Question: precision 0.7668, recall 0.8366, F1 0.8002 (support: 1065)
- Overall Precision: 0.7010
- Overall Recall: 0.7918
- Overall F1: 0.7436
- Overall Accuracy: 0.8029
## Model description
LayoutLM extends a BERT-style encoder with 2-D position (bounding-box) embeddings so it can reason over both the text and the layout of scanned documents. This checkpoint fine-tunes the base model for token classification on FUNSD, labelling each token of a form as part of a header, question, or answer entity using BIO tags.
## Intended uses & limitations
The model is intended for form understanding on scanned documents: given OCR'd words and their bounding boxes, it tags header, question, and answer spans, which can then be paired into key-value fields. It was trained only on the small FUNSD corpus of English forms, so it may not transfer well to other document types, layouts, or languages; header entities in particular are detected with low F1 (≈0.31).
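
A minimal usage sketch is shown below. The repository id is a placeholder, and the example words and boxes are invented for illustration; LayoutLM expects boxes normalized to a 0–1000 grid.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Hypothetical repository id; replace with wherever this checkpoint is actually hosted.
model_id = "layoutlm-funsd2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Words from an OCR step plus their boxes, normalized to LayoutLM's 0-1000 grid.
words = ["Date:", "01/02/2024"]
boxes = [[70, 80, 140, 100], [150, 80, 260, 100]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Repeat each word-level box for its sub-tokens; [CLS]/[SEP] get a dummy box.
token_boxes = [boxes[i] if i is not None else [0, 0, 0, 0] for i in encoding.word_ids()]
encoding["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**encoding).logits

tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
labels = [model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()]
print(list(zip(tokens, labels)))
```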
## Training and evaluation data
Training and evaluation use the FUNSD dataset (Form Understanding in Noisy Scanned Documents), a collection of 199 annotated scanned forms with word-level bounding boxes and header/question/answer labels, split into 149 training and 50 test forms; the test split serves as the evaluation set above. A sketch of how the data can be loaded is shown below.
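
The exact FUNSD loader used for this run is not recorded in the card; the sketch below assumes the `nielsr/funsd` copy on the Hub that is used in the LayoutLM tutorials, whose column names are shown in the comments.

```python
from datasets import load_dataset

# Assumed Hub copy of FUNSD; swap in whichever FUNSD loader was actually used.
funsd = load_dataset("nielsr/funsd", trust_remote_code=True)

print(funsd)                        # train (149 forms) / test (50 forms) splits
example = funsd["train"][0]
print(example["words"][:5])         # OCR words
print(example["bboxes"][:5])        # one 0-1000 normalized box per word
print(example["ner_tags"][:5])      # integer BIO labels over HEADER/QUESTION/ANSWER
```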
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
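
A sketch of how these settings map onto `TrainingArguments` is given below; the `output_dir` and the per-epoch evaluation/logging strategy are assumptions, and the AdamW betas/epsilon and linear schedule listed above are the defaults.

```python
from transformers import TrainingArguments

# Sketch matching the hyperparameter list above, not the exact training script.
# AdamW betas=(0.9, 0.999), epsilon=1e-8 and the linear LR schedule are defaults.
training_args = TrainingArguments(
    output_dir="layoutlm-funsd2",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=12,
    fp16=True,                 # "Native AMP" mixed-precision training
    eval_strategy="epoch",     # assumed: the results table reports metrics once per epoch
    logging_strategy="epoch",
)
```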
### Training results
Per-entity cells report precision / recall / F1 (supports: Answer 809, Header 119, Question 1065).

| Training Loss | Epoch | Step | Validation Loss | Answer (P / R / F1) | Header (P / R / F1) | Question (P / R / F1) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------------:|:------------------------:|:------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.8071 | 1.0 | 10 | 1.5850 | 0.0119 / 0.0124 / 0.0121 | 0.0 / 0.0 / 0.0 | 0.1711 / 0.1023 / 0.1281 | 0.0806 | 0.0597 | 0.0686 | 0.3795 |
| 1.4934 | 2.0 | 20 | 1.2707 | 0.0992 / 0.0816 / 0.0896 | 0.0 / 0.0 / 0.0 | 0.4547 / 0.5183 / 0.4844 | 0.3289 | 0.3101 | 0.3192 | 0.5753 |
| 1.1823 | 3.0 | 30 | 0.9970 | 0.4033 / 0.4203 / 0.4116 | 0.0 / 0.0 / 0.0 | 0.5920 / 0.6770 / 0.6316 | 0.5106 | 0.5324 | 0.5212 | 0.6915 |
| 0.9185 | 4.0 | 40 | 0.8213 | 0.6075 / 0.7194 / 0.6587 | 0.0513 / 0.0168 / 0.0253 | 0.6559 / 0.7249 / 0.6887 | 0.6237 | 0.6804 | 0.6508 | 0.7467 |
| 0.7233 | 5.0 | 50 | 0.7353 | 0.6390 / 0.7701 / 0.6984 | 0.2209 / 0.1597 / 0.1854 | 0.6810 / 0.7897 / 0.7313 | 0.6459 | 0.7441 | 0.6915 | 0.7794 |
| 0.6262 | 6.0 | 60 | 0.7036 | 0.6325 / 0.7936 / 0.7039 | 0.2432 / 0.1513 / 0.1865 | 0.7235 / 0.7765 / 0.7491 | 0.6662 | 0.7461 | 0.7039 | 0.7818 |
| 0.5552 | 7.0 | 70 | 0.6694 | 0.6639 / 0.7936 / 0.7230 | 0.2477 / 0.2269 / 0.2368 | 0.7310 / 0.8038 / 0.7657 | 0.6787 | 0.7652 | 0.7193 | 0.7913 |
| 0.5016 | 8.0 | 80 | 0.6598 | 0.6593 / 0.8059 / 0.7253 | 0.2432 / 0.2269 / 0.2348 | 0.7483 / 0.8178 / 0.7815 | 0.6846 | 0.7777 | 0.7282 | 0.7931 |
| 0.4496 | 9.0 | 90 | 0.6561 | 0.6663 / 0.8072 / 0.7300 | 0.2743 / 0.2605 / 0.2672 | 0.7585 / 0.8197 / 0.7879 | 0.6939 | 0.7812 | 0.7350 | 0.7982 |
| 0.4481 | 10.0 | 100 | 0.6633 | 0.6711 / 0.8047 / 0.7319 | 0.2936 / 0.2689 / 0.2807 | 0.7640 / 0.8178 / 0.7900 | 0.7003 | 0.7797 | 0.7379 | 0.7987 |
| 0.4012 | 11.0 | 110 | 0.6624 | 0.6626 / 0.8010 / 0.7252 | 0.3333 / 0.3025 / 0.3172 | 0.7697 / 0.8347 / 0.8009 | 0.7019 | 0.7893 | 0.7430 | 0.8074 |
| 0.4065 | 12.0 | 120 | 0.6614 | 0.6684 / 0.8047 / 0.7302 | 0.3130 / 0.3025 / 0.3077 | 0.7668 / 0.8366 / 0.8002 | 0.7010 | 0.7918 | 0.7436 | 0.8029 |
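
The per-entity and overall scores in the table are the standard seqeval metrics produced by the Trainer's `compute_metrics` callback. The exact training script is not included in the card; the sketch below follows the usual Transformers token-classification recipe and assumes the standard FUNSD label list.

```python
import numpy as np
import evaluate

# Assumed FUNSD label list; adjust if the checkpoint's config uses a different order.
label_list = ["O", "B-HEADER", "I-HEADER", "B-QUESTION", "I-QUESTION", "B-ANSWER", "I-ANSWER"]
seqeval = evaluate.load("seqeval")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Keep only real tokens; the Trainer marks padding/special positions with -100.
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    # `results` also contains the per-entity dicts reported above (ANSWER, HEADER, QUESTION).
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```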
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1