mp-02 committed commit 8ceaebc (verified) · 1 parent: 21980f1

End of training

Files changed (1):
  1. README.md +42 -34
README.md CHANGED
@@ -1,9 +1,9 @@
  ---
- library_name: transformers
- license: cc-by-nc-sa-4.0
- base_model: microsoft/layoutlmv3-base
  tags:
  - generated_from_trainer
  metrics:
  - precision
  - recall
@@ -11,7 +11,26 @@ metrics:
  - accuracy
  model-index:
  - name: layoutlmv3-finetuned-cord
-   results: []
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -19,13 +38,13 @@ should probably proofread and complete it, then remove this comment. -->

  # layoutlmv3-finetuned-cord

- This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.1814
- - Precision: 0.9647
- - Recall: 0.9767
- - F1: 0.9707
- - Accuracy: 0.9690

  ## Model description

@@ -44,42 +63,31 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 3e-05
  - train_batch_size: 10
  - eval_batch_size: 10
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - training_steps: 1500

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
- |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
- | No log | 1.0 | 80 | 0.9211 | 0.7047 | 0.7911 | 0.7454 | 0.7907 |
- | No log | 2.0 | 160 | 0.4572 | 0.8407 | 0.8975 | 0.8682 | 0.8901 |
- | No log | 3.0 | 240 | 0.3138 | 0.9162 | 0.9503 | 0.9329 | 0.9372 |
- | No log | 4.0 | 320 | 0.2546 | 0.9264 | 0.9581 | 0.9420 | 0.9474 |
- | No log | 5.0 | 400 | 0.2299 | 0.9329 | 0.9612 | 0.9468 | 0.9508 |
- | No log | 6.0 | 480 | 0.2277 | 0.9476 | 0.9682 | 0.9578 | 0.9597 |
- | 0.5639 | 7.0 | 560 | 0.1962 | 0.9542 | 0.9713 | 0.9627 | 0.9656 |
- | 0.5639 | 8.0 | 640 | 0.2130 | 0.9538 | 0.9612 | 0.9575 | 0.9584 |
- | 0.5639 | 9.0 | 720 | 0.2286 | 0.9451 | 0.9627 | 0.9538 | 0.9563 |
- | 0.5639 | 10.0 | 800 | 0.1797 | 0.9587 | 0.9736 | 0.9661 | 0.9665 |
- | 0.5639 | 11.0 | 880 | 0.1962 | 0.9580 | 0.9736 | 0.9657 | 0.9665 |
- | 0.5639 | 12.0 | 960 | 0.2051 | 0.9579 | 0.9720 | 0.9649 | 0.9652 |
- | 0.0563 | 13.0 | 1040 | 0.1768 | 0.9633 | 0.9775 | 0.9703 | 0.9694 |
- | 0.0563 | 14.0 | 1120 | 0.1745 | 0.9617 | 0.9759 | 0.9688 | 0.9699 |
- | 0.0563 | 15.0 | 1200 | 0.1795 | 0.9632 | 0.9752 | 0.9691 | 0.9682 |
- | 0.0563 | 16.0 | 1280 | 0.1805 | 0.9640 | 0.9767 | 0.9703 | 0.9690 |
- | 0.0563 | 17.0 | 1360 | 0.1819 | 0.9610 | 0.9759 | 0.9684 | 0.9690 |
- | 0.0563 | 18.0 | 1440 | 0.1802 | 0.9617 | 0.9759 | 0.9688 | 0.9686 |
- | 0.0184 | 18.75 | 1500 | 0.1814 | 0.9647 | 0.9767 | 0.9707 | 0.9690 |


  ### Framework versions

- - Transformers 4.44.2
  - Pytorch 2.4.0+cu118
  - Datasets 2.21.0
  - Tokenizers 0.19.1
 
  ---
+ base_model: layoutlmv3
  tags:
  - generated_from_trainer
+ datasets:
+ - mp-02/cord
  metrics:
  - precision
  - recall
  - accuracy
  model-index:
  - name: layoutlmv3-finetuned-cord
+   results:
+   - task:
+       name: Token Classification
+       type: token-classification
+     dataset:
+       name: mp-02/cord
+       type: mp-02/cord
+     metrics:
+     - name: Precision
+       type: precision
+       value: 0.9572519083969465
+     - name: Recall
+       type: recall
+       value: 0.9736024844720497
+     - name: F1
+       type: f1
+       value: 0.9653579676674365
+     - name: Accuracy
+       type: accuracy
+       value: 0.9673174872665535
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You

  # layoutlmv3-finetuned-cord

+ This model is a fine-tuned version of [layoutlmv3](https://huggingface.co/layoutlmv3) on the mp-02/cord dataset.
  It achieves the following results on the evaluation set:
+ - Loss: 0.1831
+ - Precision: 0.9573
+ - Recall: 0.9736
+ - F1: 0.9654
+ - Accuracy: 0.9673
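As a quick sanity check, the F1 above is the harmonic mean of the precision and recall; a minimal sketch in plain Python, using the full-precision values from this card's model-index metadata:

```python
# Sanity check: F1 is the harmonic mean of precision and recall.
# Values are copied from this card's model-index metadata block.
precision = 0.9572519083969465
recall = 0.9736024844720497

f1 = 2 * precision * recall / (precision + recall)

print(round(f1, 4))  # rounds to the 0.9654 reported above
```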

  ## Model description

  ### Training hyperparameters

  The following hyperparameters were used during training:
+ - learning_rate: 1e-05
  - train_batch_size: 10
  - eval_batch_size: 10
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
+ - training_steps: 2000

  ### Training results

+ | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
+ |:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+ | No log | 3.125 | 250 | 0.7551 | 0.7974 | 0.8587 | 0.8269 | 0.8544 |
+ | 1.1001 | 6.25 | 500 | 0.3822 | 0.8846 | 0.9286 | 0.9061 | 0.9215 |
+ | 1.1001 | 9.375 | 750 | 0.2750 | 0.9334 | 0.9581 | 0.9456 | 0.9444 |
+ | 0.2309 | 12.5 | 1000 | 0.2072 | 0.9439 | 0.9674 | 0.9555 | 0.9605 |
+ | 0.2309 | 15.625 | 1250 | 0.1934 | 0.9500 | 0.9728 | 0.9613 | 0.9652 |
+ | 0.1003 | 18.75 | 1500 | 0.1898 | 0.9602 | 0.9736 | 0.9668 | 0.9665 |
+ | 0.1003 | 21.875 | 1750 | 0.2032 | 0.9542 | 0.9705 | 0.9623 | 0.9631 |
+ | 0.0637 | 25.0 | 2000 | 0.1831 | 0.9573 | 0.9736 | 0.9654 | 0.9673 |
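The fractional epoch values in the table above follow from the step count and batch size: epoch 3.125 at step 250 with a train_batch_size of 10 implies 800 training samples, consistent with the 800-example train split of CORD. A small sketch of that arithmetic, assuming one optimizer step per batch (no gradient accumulation, single device):

```python
# Reconstruct the Epoch column of the table above from steps and batch size.
# Assumes one optimizer step per batch of 10 (no gradient accumulation).
train_batch_size = 10
total_steps = 2000

# Epoch 3.125 at step 250 implies the training-set size:
train_samples = 250 * train_batch_size / 3.125    # -> 800.0
steps_per_epoch = train_samples / train_batch_size  # -> 80.0

# Epoch at each logged checkpoint (matches the table: 3.125, 6.25, ..., 25.0):
for step in (250, 500, 750, 1000, 1250, 1500, 1750, 2000):
    print(step, step / steps_per_epoch)
```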


  ### Framework versions

+ - Transformers 4.42.4
  - Pytorch 2.4.0+cu118
  - Datasets 2.21.0
  - Tokenizers 0.19.1