janeel committed on
Commit bbbb15c
1 Parent(s): d642ce8

update model card README.md

Files changed (1)
  1. README.md +10 -10
README.md CHANGED
@@ -3,7 +3,7 @@ license: cc-by-4.0
  tags:
  - generated_from_trainer
  datasets:
- - squad
+ - squad_v2
  model-index:
  - name: tinyroberta-squad2-finetuned-squad
    results: []
@@ -14,9 +14,9 @@ should probably proofread and complete it, then remove this comment. -->

  # tinyroberta-squad2-finetuned-squad

- This model is a fine-tuned version of [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2) on the squad dataset.
+ This model is a fine-tuned version of [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2) on the squad_v2 dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.8429
+ - Loss: 1.1592

  ## Model description

@@ -36,8 +36,8 @@ More information needed

  The following hyperparameters were used during training:
  - learning_rate: 2e-05
- - train_batch_size: 32
- - eval_batch_size: 32
+ - train_batch_size: 16
+ - eval_batch_size: 16
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
@@ -45,15 +45,15 @@ The following hyperparameters were used during training:

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 2.0827 | 1.0 | 2767 | 1.9840 |
- | 1.797 | 2.0 | 5534 | 1.8429 |
+ | Training Loss | Epoch | Step  | Validation Loss |
+ |:-------------:|:-----:|:-----:|:---------------:|
+ | 0.6185 | 1.0 | 8239 | 0.9460 |
+ | 0.4243 | 2.0 | 16478 | 1.1592 |


  ### Framework versions

- - Transformers 4.19.4
+ - Transformers 4.20.0
  - Pytorch 1.11.0+cu113
  - Datasets 2.3.2
  - Tokenizers 0.12.1
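
For context, here is a minimal sketch of how the hyperparameters listed in the updated card map onto the Hugging Face `TrainingArguments` API. It is not part of the commit: the output directory name and `evaluation_strategy` are assumptions, the epoch count of 2 is inferred from the training-results table, and a single GPU is assumed so that `per_device_*_batch_size` matches the card's batch size of 16.

```python
# Sketch only (not from this commit): the fine-tuning configuration described
# in the updated model card, expressed with the Hugging Face Trainer API.
from datasets import load_dataset
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    TrainingArguments,
)

model_name = "deepset/tinyroberta-squad2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

raw_datasets = load_dataset("squad_v2")  # dataset named in the updated card

training_args = TrainingArguments(
    output_dir="tinyroberta-squad2-finetuned-squad",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,   # updated from 32 in this commit
    per_device_eval_batch_size=16,    # updated from 32 in this commit
    seed=42,
    num_train_epochs=2,               # inferred from the results table
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",      # assumption: evaluate once per epoch
)

# The Trainer's default AdamW optimizer already uses betas=(0.9, 0.999) and
# epsilon=1e-08, matching the card, so no optimizer override is needed.
# The SQuAD-style preprocessing (tokenizing question/context pairs with
# offset mappings and building start/end positions) and the Trainer(...) call
# are omitted here; see the official question-answering examples for details.
```

The resulting checkpoint can then be loaded for inference with a standard `question-answering` pipeline pointed at the saved output directory or the published repo id.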