Tommert25 committed
Commit 44a2d26 · Parent: 3516c2d

update model card README.md

Files changed (1): README.md (+15 -14)
README.md CHANGED

@@ -1,5 +1,6 @@
 ---
 license: apache-2.0
+base_model: bert-base-multilingual-uncased
 tags:
 - generated_from_trainer
 metrics:
@@ -19,11 +20,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5260
-- Precision: 0.5933
-- Recall: 0.4839
-- F1: 0.5330
-- Accuracy: 0.8502
+- Loss: 0.4896
+- Precision: 0.6282
+- Recall: 0.5688
+- F1: 0.5970
+- Accuracy: 0.8756
 
 ## Model description
 
@@ -43,8 +44,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 16
-- eval_batch_size: 16
+- train_batch_size: 32
+- eval_batch_size: 32
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -54,16 +55,16 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| No log        | 1.0   | 145  | 0.5260          | 0.5933    | 0.4839 | 0.5330 | 0.8502   |
-| No log        | 2.0   | 290  | 0.5357          | 0.6099    | 0.5415 | 0.5736 | 0.8604   |
-| No log        | 3.0   | 435  | 0.5476          | 0.6279    | 0.5795 | 0.6027 | 0.8715   |
-| 0.365         | 4.0   | 580  | 0.5861          | 0.6454    | 0.6107 | 0.6276 | 0.8827   |
-| 0.365         | 5.0   | 725  | 0.6235          | 0.6543    | 0.6185 | 0.6359 | 0.8804   |
+| No log        | 1.0   | 73   | 0.5149          | 0.5414    | 0.4527 | 0.4931 | 0.8510   |
+| No log        | 2.0   | 146  | 0.6092          | 0.57      | 0.5005 | 0.5330 | 0.8464   |
+| No log        | 3.0   | 219  | 0.4896          | 0.6282    | 0.5688 | 0.5970 | 0.8756   |
+| No log        | 4.0   | 292  | 0.5196          | 0.6420    | 0.6176 | 0.6295 | 0.8764   |
+| No log        | 5.0   | 365  | 0.5270          | 0.6479    | 0.6176 | 0.6324 | 0.8786   |
 
 
 ### Framework versions
 
-- Transformers 4.30.2
+- Transformers 4.31.0
 - Pytorch 2.0.1+cu118
-- Datasets 2.13.1
+- Datasets 2.14.0
 - Tokenizers 0.13.3
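As a quick sanity check on the updated card: the reported F1 scores follow from the listed precision and recall (F1 is their harmonic mean), and the halved per-epoch step count (145 → 73) is consistent with doubling the batch size from 16 to 32. A minimal stdlib-Python sketch — the dataset size below is a hypothetical value consistent with both step counts, not something stated in the card:

```python
import math

def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Best epoch (epoch 3) in the new revision: P=0.6282, R=0.5688
print(f"{f1(0.6282, 0.5688):.4f}")  # → 0.5970, matching the reported F1

# Doubling the batch size (16 -> 32) halves the steps per epoch.
# 145 steps at batch 16 and 73 steps at batch 32 both fit a dataset of
# roughly 2305-2320 examples; 2320 is one hypothetical size in that range.
examples = 2320
print(math.ceil(examples / 16), math.ceil(examples / 32))  # → 145 73
```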