hedronstone committed
Commit a69ad14
1 Parent(s): 98e14c4

update model card README.md

Files changed (1): README.md (+9 -7)
README.md CHANGED

@@ -14,7 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 3.4541
+- Loss: 3.2692
 
 ## Model description
 
@@ -33,26 +33,28 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
+- learning_rate: 1e-05
 - train_batch_size: 16
 - eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 3
+- num_epochs: 5
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| No log        | 1.0   | 51   | 3.9390          |
-| No log        | 2.0   | 102  | 3.4993          |
-| No log        | 3.0   | 153  | 3.4541          |
+| No log        | 1.0   | 64   | 4.0862          |
+| No log        | 2.0   | 128  | 3.5410          |
+| No log        | 3.0   | 192  | 3.3521          |
+| No log        | 4.0   | 256  | 3.2809          |
+| No log        | 5.0   | 320  | 3.2692          |
 
 ### Framework versions
 
 - Transformers 4.18.0
 - Pytorch 1.13.1+cu116
-- Datasets 2.8.0
+- Datasets 2.9.0
 - Tokenizers 0.12.1
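The hyperparameter list in the updated card maps directly onto a standard `transformers` `TrainingArguments` setup. A minimal sketch of that mapping, assuming a plain `Trainer` run (the `output_dir` name is a placeholder, not from the card; the Adam betas and epsilon listed are the `Trainer` defaults, so they need no explicit arguments):

```python
from transformers import TrainingArguments

# Sketch only: the updated hyperparameters from the card expressed as
# TrainingArguments. adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8
# are already the defaults, matching the optimizer line in the card.
args = TrainingArguments(
    output_dir="distilbert-finetuned",  # placeholder; not specified in the card
    learning_rate=1e-5,                 # updated from 2e-5
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,                 # updated from 3
)
```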
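The step counts in the new training-results table are internally consistent with the hyperparameters, which can be checked with a little arithmetic (variable names here are illustrative; the dataset-size figure is an inference, not stated in the card):

```python
# Consistency check on the updated training-results table.
train_batch_size = 16  # from the hyperparameter list
num_epochs = 5         # from the updated hyperparameter list
steps_per_epoch = 64   # step count at epoch 1.0 in the updated table

# Total optimizer steps should equal steps_per_epoch * num_epochs.
total_steps = steps_per_epoch * num_epochs
print(total_steps)  # 320, matching the step count at epoch 5.0

# With no gradient accumulation, steps_per_epoch * train_batch_size bounds
# the number of training examples (the final batch may be partial).
approx_train_examples = steps_per_epoch * train_batch_size
print(approx_train_examples)  # 1024
```

The same arithmetic on the old table (51 steps/epoch, 3 epochs) gives 153 total steps, so the two runs evidently used training sets of different sizes as well as different hyperparameters.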