Joserzapata committed
Commit cb5c1a8
1 Parent(s): 11a754e

update model card README.md

Files changed (1): README.md (+8 -8)
README.md CHANGED
@@ -21,7 +21,7 @@ model-index:
   metrics:
   - name: Wer
     type: wer
-     value: 0.34710743801652894
+     value: 0.33943329397874855
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,9 +31,9 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.6160
- - Wer Ortho: 0.3498
- - Wer: 0.3471
+ - Loss: 0.6844
+ - Wer Ortho: 0.3424
+ - Wer: 0.3394
 
  ## Model description
 
@@ -53,10 +53,10 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 1e-05
- - train_batch_size: 4
- - eval_batch_size: 4
+ - train_batch_size: 8
+ - eval_batch_size: 8
  - seed: 42
- - gradient_accumulation_steps: 4
+ - gradient_accumulation_steps: 2
  - total_train_batch_size: 16
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: constant_with_warmup
@@ -67,7 +67,7 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
  |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
- | 0.0007 | 17.86 | 500 | 0.6160 | 0.3498 | 0.3471 |
+ | 0.0006 | 17.86 | 500 | 0.6844 | 0.3424 | 0.3394 |
 
 
  ### Framework versions
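
Note that the hyperparameter edits in this commit leave the effective batch size unchanged: `total_train_batch_size` is `train_batch_size * gradient_accumulation_steps`, and both the old (4 × 4) and new (8 × 2) settings multiply out to 16. A minimal sketch of that relationship (plain Python; the dict names are illustrative, not from the training script):

```python
# Effective batch size = per-step batch size * gradient accumulation steps.
# Both the old and new configurations in this commit yield 16.
old = {"train_batch_size": 4, "gradient_accumulation_steps": 4}
new = {"train_batch_size": 8, "gradient_accumulation_steps": 2}

def total_train_batch_size(cfg):
    """Compute the effective train batch size from a hyperparameter dict."""
    return cfg["train_batch_size"] * cfg["gradient_accumulation_steps"]

assert total_train_batch_size(old) == total_train_batch_size(new) == 16
```

Because the effective batch size is constant, the optimizer sees equivalently sized updates; the change mainly trades memory per step (larger per-device batches) against the number of accumulation passes.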