TieIncred committed
Commit 637b000
1 parent: c38ce62

update model card README.md

Files changed (1): README.md (+16 −38)

README.md CHANGED
@@ -5,24 +5,9 @@ tags:
 - generated_from_trainer
 datasets:
 - PolyAI/minds14
-metrics:
-- wer
 model-index:
 - name: whisper-tiny-enUS
-  results:
-  - task:
-      name: Automatic Speech Recognition
-      type: automatic-speech-recognition
-    dataset:
-      name: PolyAI/minds14
-      type: PolyAI/minds14
-      config: en-US
-      split: train
-      args: en-US
-    metrics:
-    - name: Wer
-      type: wer
-      value: 7.431944109853047
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,9 +17,14 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0000
-- Wer Ortho: 7.4972
-- Wer: 7.4319
+- eval_loss: 0.6151
+- eval_wer_ortho: 24.3412
+- eval_wer: 0.2421
+- eval_runtime: 9.0197
+- eval_samples_per_second: 12.417
+- eval_steps_per_second: 0.776
+- epoch: 35.71
+- step: 500
 
 ## Model description
 
@@ -53,32 +43,20 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1e-05
+- learning_rate: 3e-05
 - train_batch_size: 16
 - eval_batch_size: 16
 - seed: 42
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: constant_with_warmup
-- lr_scheduler_warmup_steps: 50
-- training_steps: 4000
-
-### Training results
-
-| Training Loss | Epoch  | Step | Validation Loss | Wer Ortho | Wer    |
-|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|
-| 0.0017        | 14.29  | 500  | 0.0011          | 4.3200    | 4.2159 |
-| 0.0003        | 28.57  | 1000 | 0.0005          | 4.4204    | 4.3724 |
-| 0.001         | 42.86  | 1500 | 0.0003          | 4.1567    | 4.0954 |
-| 0.0001        | 57.14  | 2000 | 0.0001          | 4.3702    | 4.3483 |
-| 0.0001        | 71.43  | 2500 | 0.0001          | 7.1958    | 7.1429 |
-| 0.0           | 85.71  | 3000 | 0.0000          | 7.5097    | 7.4440 |
-| 0.0           | 100.0  | 3500 | 0.0000          | 7.5348    | 7.4681 |
-| 0.0           | 114.29 | 4000 | 0.0000          | 7.4972    | 7.4319 |
-
+- lr_scheduler_type: linear
+- lr_scheduler_warmup_steps: 100
+- training_steps: 5000
 
 ### Framework versions
 
-- Transformers 4.32.0.dev0
+- Transformers 4.31.0
 - Pytorch 2.0.1+cu117
 - Datasets 2.13.1
 - Tokenizers 0.13.3
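For readers unfamiliar with the `wer` metric this diff revolves around, here is a minimal plain-Python sketch of how it is computed. This is an illustration only, not the implementation the Trainer actually used (the card does not say which library computed it; `evaluate`/`jiwer` are common choices). Note also that the old card reported WER as a percentage (`7.4319`) while the new one reports a fraction (`eval_wer: 0.2421`).

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate as a fraction: the minimum number of word-level
    substitutions, insertions, and deletions needed to turn the
    hypothesis into the reference, divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j]
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[len(hyp)] / len(ref)

# One dropped word out of six reference words -> WER = 1/6
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

Multiply by 100 to get the percentage convention used in the old model-index `value` field.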