arubenruben committed on
Commit 3a58b7c
1 Parent(s): 0fc742b

End of training

Files changed (2)
  1. README.md +21 -15
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,24 +1,30 @@
  ---
  license: mit
- base_model: PORTULAN/albertina-100m-portuguese-ptpt-encoder
+ base_model: neuralmind/bert-base-portuguese-cased
  tags:
  - generated_from_trainer
  metrics:
  - accuracy
+ - f1
+ - precision
+ - recall
  model-index:
- - name: LVI-dsl_tl-albertina-100m-portuguese-ptpt-encoder
+ - name: LVI_bert-base-portuguese-cased
  results: []
  ---
  
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
  
- # LVI-dsl_tl-albertina-100m-portuguese-ptpt-encoder
+ # LVI_bert-base-portuguese-cased
  
- This model is a fine-tuned version of [PORTULAN/albertina-100m-portuguese-ptpt-encoder](https://huggingface.co/PORTULAN/albertina-100m-portuguese-ptpt-encoder) on an unknown dataset.
+ This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.0750
- - Accuracy: 0.6075
+ - Loss: 0.2393
+ - Accuracy: 0.9428
+ - F1: 0.9445
+ - Precision: 0.9182
+ - Recall: 0.9723
  
  ## Model description
  
@@ -37,23 +43,23 @@ More information needed
  ### Training hyperparameters
  
  The following hyperparameters were used during training:
- - learning_rate: 3e-05
+ - learning_rate: 5e-05
  - train_batch_size: 16
  - eval_batch_size: 16
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 300
+ - num_epochs: 10
  
  ### Training results
  
- | Training Loss | Epoch | Step | Validation Loss | Accuracy |
- |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | No log | 1.0 | 217 | 0.8088 | 0.6539 |
- | No log | 2.0 | 434 | 0.7666 | 0.6660 |
- | 0.7597 | 3.0 | 651 | 0.9131 | 0.6509 |
- | 0.7597 | 4.0 | 868 | 0.9328 | 0.6206 |
- | 0.5046 | 5.0 | 1085 | 1.0750 | 0.6075 |
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
+ |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
+ | 0.1736 | 1.0 | 3217 | 0.1532 | 0.9615 | 0.9618 | 0.955 | 0.9686 |
+ | 0.1105 | 2.0 | 6434 | 0.1464 | 0.9629 | 0.9630 | 0.9582 | 0.9679 |
+ | 0.0984 | 3.0 | 9651 | 0.2067 | 0.9525 | 0.9511 | 0.9786 | 0.9251 |
+ | 0.0996 | 4.0 | 12868 | 0.1873 | 0.9608 | 0.9610 | 0.9569 | 0.9651 |
+ | 0.17 | 5.0 | 16085 | 0.2393 | 0.9428 | 0.9445 | 0.9182 | 0.9723 |
  
  
  ### Framework versions
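The hyperparameters listed in the new card map directly onto the 🤗 Transformers `TrainingArguments` API. Below is a minimal sketch (not part of this commit) of that configuration: the numeric values come from the card above, while the output directory, `num_labels`, evaluation strategy, and datasets are placeholders and assumptions.

```python
# Hedged sketch: reconstructs the training configuration listed in the card.
# Only the numeric hyperparameters are taken from the card; anything marked
# "assumption" is a placeholder, not taken from the commit.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "neuralmind/bert-base-portuguese-cased"  # base model named in the card
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(
    base_model,
    num_labels=2,  # assumption: binary classification (accuracy/F1/precision/recall reported)
)

training_args = TrainingArguments(
    output_dir="LVI_bert-base-portuguese-cased",  # assumption: local output path
    learning_rate=5e-5,              # from the card
    per_device_train_batch_size=16,  # from the card
    per_device_eval_batch_size=16,   # from the card
    seed=42,                         # from the card
    num_train_epochs=10,             # from the card
    lr_scheduler_type="linear",      # from the card
    adam_beta1=0.9,                  # from the card
    adam_beta2=0.999,                # from the card
    adam_epsilon=1e-8,               # from the card
    evaluation_strategy="epoch",     # assumption: the results table reports one row per epoch
)

# The train/eval datasets are not part of this commit, so the Trainer call is
# left commented out:
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```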
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:879bc2d61a8a685b80191fb0708ceb8d390e0393d35d33edf643aa3b2e346f71
+ oid sha256:583e6912dcc85b3f4abef369a123bfc1ad06e12e7f142203744c0c6452a740e5
  size 435722224
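The `model.safetensors` entry above is a Git LFS pointer rather than the weights themselves: it records only the SHA-256 digest and byte size of the real file. A minimal sketch, assuming the weights have already been downloaded to a local `model.safetensors`, of checking a copy against the new pointer:

```python
# Hedged sketch: verify a downloaded model.safetensors against the LFS pointer
# in this commit. The local path is a placeholder; the digest and size are the
# values shown in the pointer above.
import hashlib
import os

path = "model.safetensors"  # assumption: local path to the downloaded weights
expected_oid = "583e6912dcc85b3f4abef369a123bfc1ad06e12e7f142203744c0c6452a740e5"
expected_size = 435722224

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        digest.update(chunk)

assert os.path.getsize(path) == expected_size, "size does not match the pointer"
assert digest.hexdigest() == expected_oid, "sha256 does not match the pointer"
print("local model.safetensors matches the LFS pointer in commit 3a58b7c")
```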