AVSilva committed
Commit 3659caf
1 Parent(s): f79c2a6

Upload README.md

Files changed (1)
README.md: +8 -11
README.md CHANGED
@@ -7,9 +7,14 @@ model-index:
  results: []
---

+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # result
+
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- - Loss: 0.8031
+ - Loss: 0.7570

## Model description

@@ -29,7 +34,7 @@ More information needed

The following hyperparameters were used during training:
- learning_rate: 5e-05
- - train_batch_size: 2
+ - train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
@@ -38,19 +43,11 @@ The following hyperparameters were used during training:

### Training results

- ### Eval metrics

- epoch = 3.0
- eval_loss = 0.8031
- eval_runtime = 0:02:17.77
- eval_samples = 134
- eval_samples_per_second = 0.973
- eval_steps_per_second = 0.123
- perplexity = 2.2325

### Framework versions

- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- - Tokenizers 0.10.3
+ - Tokenizers 0.10.3
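For reference, the perplexity in the removed eval-metrics block is consistent with perplexity being the exponential of the evaluation cross-entropy loss:

$$
\text{perplexity} = \exp(\text{eval\_loss}) = \exp(0.8031) \approx 2.2325
$$

By the same relation, the new evaluation loss of 0.7570 would correspond to a perplexity of roughly exp(0.7570) ≈ 2.13, although the updated card does not report that number in this diff.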
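A minimal sketch of how the hyperparameters listed in the card map onto `transformers.TrainingArguments`. The output directory, epoch count, and scheduler are not part of this diff, so anything beyond the listed values is a placeholder rather than the actual training configuration:

```python
from transformers import TrainingArguments

# Sketch only: maps the hyperparameters listed in the card onto TrainingArguments.
# output_dir is a placeholder; it is not stated in this diff.
training_args = TrainingArguments(
    output_dir="result",            # placeholder
    learning_rate=5e-5,             # learning_rate: 5e-05
    per_device_train_batch_size=4,  # train_batch_size: 4 (was 2 before this commit)
    per_device_eval_batch_size=8,   # eval_batch_size: 8
    seed=42,                        # seed: 42
    adam_beta1=0.9,                 # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,              # and epsilon=1e-08
)
```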
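Since the old card reported perplexity, the checkpoint appears to have been fine-tuned with a masked-language-modelling objective. A minimal usage sketch via the `fill-mask` pipeline, where the model id is a placeholder for wherever this fine-tuned checkpoint is published (the base model is `neuralmind/bert-large-portuguese-cased`):

```python
from transformers import pipeline

# Placeholder model id; substitute the actual repository of this fine-tuned checkpoint.
# The base checkpoint is neuralmind/bert-large-portuguese-cased (BERTimbau Large).
fill_mask = pipeline("fill-mask", model="your-username/your-finetuned-bertimbau")

# BERT-style models use the [MASK] token.
print(fill_mask("Lisboa é a capital de [MASK]."))
```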