sanchit-gandhi (HF staff) committed
Commit: 8e20905
Parent: 87dc1a9

update model card README.md

Files changed (1):
1. README.md (+2 −8)
README.md CHANGED
@@ -14,9 +14,6 @@ should probably proofread and complete it, then remove this comment. -->
 #
 
 This model was trained from scratch on the librispeech_asr dataset.
-It achieves the following results on the evaluation set:
-- Loss: 4.2585
-- Wer: 1.9681
 
 ## Model description
 
@@ -39,8 +36,8 @@ The following hyperparameters were used during training:
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 4
-- total_train_batch_size: 32
+- gradient_accumulation_steps: 8
+- total_train_batch_size: 64
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
@@ -49,9 +46,6 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Wer |
-|:-------------:|:-----:|:----:|:---------------:|:------:|
-| 4.2412 | 0.56 | 500 | 4.2585 | 1.9681 |
 
 
 ### Framework versions
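For readers reproducing this configuration: the hyperparameter lines touched by this commit relate through gradient accumulation, where the reported total_train_batch_size is the per-device batch size times the accumulation steps times the number of devices, so 8 × 8 = 64 on a single device (previously 8 × 4 = 32). Below is a minimal, illustrative sketch of how these model-card entries typically map onto the `transformers` `TrainingArguments`; it is not the author's training script, and `output_dir` and the learning rate are placeholders rather than values taken from this commit.

```python
from transformers import TrainingArguments

# Illustrative mapping of the model-card hyperparameters to TrainingArguments.
# output_dir is a placeholder; learning_rate is not shown in this diff, so it
# is left at whatever the actual training run used.
training_args = TrainingArguments(
    output_dir="./output",             # placeholder path, not from the commit
    per_device_train_batch_size=8,     # train_batch_size: 8
    per_device_eval_batch_size=8,      # eval_batch_size: 8
    seed=42,
    gradient_accumulation_steps=8,     # updated from 4 in this commit
    lr_scheduler_type="linear",
    warmup_steps=500,                  # lr_scheduler_warmup_steps: 500
    adam_beta1=0.9,                    # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                 # epsilon=1e-08
)

# Effective (total) train batch size on one device:
# per_device_train_batch_size * gradient_accumulation_steps * num_devices
assert 8 * 8 * 1 == 64   # previously 8 * 4 * 1 == 32
```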