jjyaoao committed
Commit 0b102a7
1 Parent(s): ea634b9

update model card README.md

Files changed (1):
  1. README.md (+10 -12)
README.md CHANGED
@@ -1,5 +1,6 @@
 ---
 license: apache-2.0
+base_model: openai/whisper-tiny
 tags:
 - generated_from_trainer
 datasets:
@@ -7,7 +8,7 @@ datasets:
 metrics:
 - wer
 model-index:
-- name: whisper-tiny-en
+- name: whisper-tiny-En
   results:
   - task:
       name: Automatic Speech Recognition
@@ -16,7 +17,7 @@ model-index:
       name: PolyAI/minds14
       type: PolyAI/minds14
       config: en-US
-      split: train[450:]
+      split: train
       args: en-US
     metrics:
     - name: Wer
@@ -27,13 +28,13 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# whisper-tiny-en
+# whisper-tiny-En
 
 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0007
-- Wer Ortho: 0.3479333744602097
-- Wer: 0.3447461629279811
+- Loss: 0.6448
+- Wer Ortho: 0.3479
+- Wer: 0.3447
 
 ## Model description
 
@@ -53,11 +54,9 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 8
-- eval_batch_size: 4
+- train_batch_size: 16
+- eval_batch_size: 16
 - seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: constant_with_warmup
 - lr_scheduler_warmup_steps: 50
@@ -67,8 +66,7 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
-| 0.0007 | 17.86 | 500 | 0.6448453664779663 | 0.3479333744602097 | 0.3447461629279811 |
-
+| 0.0007 | 17.86 | 500 | 0.6448 | 0.3479 | 0.3447 |
 
 
 ### Framework versions
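
For context, the hyperparameters listed in the updated card map onto a `Seq2SeqTrainingArguments` configuration roughly like the sketch below. This is an illustrative reconstruction, not the training script behind this commit; `output_dir`, `max_steps`, and the evaluation cadence are assumptions inferred from the card (a single logged evaluation at step 500).

```python
# Hypothetical sketch of training arguments matching the updated card.
# Assumptions: Seq2SeqTrainer was used; output_dir, max_steps and the
# eval settings are inferred from the card, not taken from the commit.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-En",             # assumed; matches the model name
    learning_rate=1e-5,                       # learning_rate: 1e-05
    per_device_train_batch_size=16,           # train_batch_size: 16
    per_device_eval_batch_size=16,            # eval_batch_size: 16
    seed=42,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=500,                            # the table logs a single step 500
    evaluation_strategy="steps",
    eval_steps=500,
    predict_with_generate=True,               # needed to compute WER during eval
)
# The card's "Adam with betas=(0.9,0.999) and epsilon=1e-08" is the Trainer
# default optimizer configuration, so it needs no explicit argument here.
```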
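The two error-rate columns ("Wer Ortho" vs. "Wer") usually mean word error rate before and after text normalization. Below is a minimal sketch of that computation with the `evaluate` library, assuming the common Whisper fine-tuning recipe with `BasicTextNormalizer`; the example transcripts are invented for illustration.

```python
# Minimal sketch: orthographic vs. normalized WER, as in the results table.
# Assumption: the standard evaluate + BasicTextNormalizer recipe for Whisper
# fine-tunes; the strings below are made-up examples.
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()

predictions = ["I'd like to check my balance please"]
references = ["I would like to check my balance, please."]

# "Wer Ortho": computed on the raw (orthographic) text.
wer_ortho = wer_metric.compute(predictions=predictions, references=references)

# "Wer": computed after lower-casing and stripping punctuation.
wer = wer_metric.compute(
    predictions=[normalizer(p) for p in predictions],
    references=[normalizer(r) for r in references],
)
print(f"wer_ortho={wer_ortho:.4f}, wer={wer:.4f}")
```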
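The metadata change from `split: train[450:]` to `split: train` refers to the 🤗 Datasets slicing syntax: PolyAI/minds14 (en-US) ships only a `train` split, which fine-tunes of this kind typically carve into train/eval subsets by index. A hedged sketch, with the 450-example cutoff taken from the old metadata and the repository id assumed rather than read from the diff:

```python
# Sketch of the split syntax referenced in the metadata, plus basic inference.
# Assumptions: the 450/remaining train/eval carve-up of the en-US subset, and
# the repo id "jjyaoao/whisper-tiny-En" (not confirmed by this diff).
from datasets import load_dataset
from transformers import pipeline

train_split = load_dataset("PolyAI/minds14", name="en-US", split="train[:450]")
eval_split = load_dataset("PolyAI/minds14", name="en-US", split="train[450:]")

asr = pipeline("automatic-speech-recognition", model="jjyaoao/whisper-tiny-En")
sample = eval_split[0]["audio"]          # dict with "array" and "sampling_rate"
print(asr(sample))
```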