JoshuaAAX committed
Commit 53a7edb
1 Parent(s): 03e19e7

Training complete

Files changed (1)
  1. README.md +44 -15
README.md CHANGED
@@ -3,6 +3,8 @@ license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
+ datasets:
+ - conll2002
metrics:
- precision
- recall
@@ -10,7 +12,29 @@ metrics:
- accuracy
model-index:
- name: bert-finetuned-ner
-   results: []
+   results:
+   - task:
+       name: Token Classification
+       type: token-classification
+     dataset:
+       name: conll2002
+       type: conll2002
+       config: es
+       split: validation
+       args: es
+     metrics:
+     - name: Precision
+       type: precision
+       value: 0.7640546993705232
+     - name: Recall
+       type: recall
+       value: 0.8088235294117647
+     - name: F1
+       type: f1
+       value: 0.7858019868288871
+     - name: Accuracy
+       type: accuracy
+       value: 0.9676902769959431
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -18,13 +42,13 @@ should probably proofread and complete it, then remove this comment. -->

# bert-finetuned-ner

- This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
+ This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2002 dataset.
It achieves the following results on the evaluation set:
- - Loss: 0.1471
- - Precision: 0.7369
- - Recall: 0.7943
- - F1: 0.7646
- - Accuracy: 0.9666
+ - Loss: 0.1912
+ - Precision: 0.7641
+ - Recall: 0.8088
+ - F1: 0.7858
+ - Accuracy: 0.9677

## Model description

@@ -49,22 +73,27 @@ The following hyperparameters were used during training:
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- - num_epochs: 5
+ - num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
- | 0.1691 | 1.0 | 521 | 0.1438 | 0.6830 | 0.7371 | 0.7090 | 0.9587 |
- | 0.076 | 2.0 | 1042 | 0.1402 | 0.7075 | 0.7670 | 0.7361 | 0.9622 |
- | 0.05 | 3.0 | 1563 | 0.1332 | 0.7536 | 0.7971 | 0.7748 | 0.9672 |
- | 0.0359 | 4.0 | 2084 | 0.1442 | 0.7420 | 0.7845 | 0.7626 | 0.9663 |
- | 0.0265 | 5.0 | 2605 | 0.1471 | 0.7369 | 0.7943 | 0.7646 | 0.9666 |
+ | 0.1713 | 1.0 | 521 | 0.1404 | 0.6859 | 0.7387 | 0.7114 | 0.9599 |
+ | 0.0761 | 2.0 | 1042 | 0.1404 | 0.6822 | 0.7693 | 0.7231 | 0.9623 |
+ | 0.05 | 3.0 | 1563 | 0.1304 | 0.7488 | 0.7937 | 0.7706 | 0.9672 |
+ | 0.0355 | 4.0 | 2084 | 0.1454 | 0.7585 | 0.7960 | 0.7768 | 0.9664 |
+ | 0.0253 | 5.0 | 2605 | 0.1501 | 0.7549 | 0.8095 | 0.7812 | 0.9677 |
+ | 0.0184 | 6.0 | 3126 | 0.1726 | 0.7581 | 0.7992 | 0.7781 | 0.9662 |
+ | 0.0138 | 7.0 | 3647 | 0.1743 | 0.7524 | 0.8042 | 0.7774 | 0.9676 |
+ | 0.0112 | 8.0 | 4168 | 0.1853 | 0.7576 | 0.8022 | 0.7792 | 0.9674 |
+ | 0.0082 | 9.0 | 4689 | 0.1914 | 0.7595 | 0.8061 | 0.7821 | 0.9667 |
+ | 0.0073 | 10.0 | 5210 | 0.1912 | 0.7641 | 0.8088 | 0.7858 | 0.9677 |


### Framework versions

- - Transformers 4.40.2
- - Pytorch 2.2.1+cu121
+ - Transformers 4.41.0
+ - Pytorch 2.3.0+cu121
- Datasets 2.19.1
  - Tokenizers 0.19.1
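
For context on what the updated card describes: the checkpoint is a `bert-base-cased` token-classification model fine-tuned on the Spanish (`es`) config of conll2002. Below is a minimal inference sketch using the Transformers `pipeline` API; the repository id `JoshuaAAX/bert-finetuned-ner` is an assumption inferred from the commit author and model name, and the example sentence is illustrative only.

```python
from transformers import pipeline

# Assumed Hub id (commit author + model name); replace with the actual
# checkpoint path or repository id if it differs.
ner = pipeline(
    "token-classification",
    model="JoshuaAAX/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word-piece predictions into entity spans
)

# conll2002 with config "es" is Spanish NER, so a Spanish sentence is used here.
print(ner("Gabriel García Márquez nació en Aracataca, Colombia."))
```

The `aggregation_strategy` option groups sub-word predictions back into word-level entity spans rather than returning one prediction per word piece, which matches how the card's precision, recall, and F1 are typically reported for NER.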