ezrab committed
Commit 0960578
1 Parent(s): 4a4e678

End of training

Files changed (3)
  1. README.md +118 -0
  2. generation_config.json +248 -0
  3. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,118 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: openai/whisper-tiny
+ tags:
+ - generated_from_trainer
+ datasets:
+ - PolyAI/minds14
+ metrics:
+ - wer
+ model-index:
+ - name: whisper-tiny-minds14
+   results:
+   - task:
+       name: Automatic Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+       name: PolyAI/minds14
+       type: PolyAI/minds14
+       config: en-US
+       split: train
+       args: en-US
+     metrics:
+     - name: Wer
+       type: wer
+       value: 0.4964580873671783
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # whisper-tiny-minds14
+
+ This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.9507
+ - Wer Ortho: 0.4855
+ - Wer: 0.4965
+
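+ A minimal inference sketch (assuming the checkpoint lives at the hypothetical repo id `ezrab/whisper-tiny-minds14`, inferred from the card title; substitute the actual path):
+
+ ```python
+ from transformers import pipeline
+
+ # Load the fine-tuned checkpoint into an ASR pipeline.
+ # The repo id is an assumption, not stated in the card.
+ asr = pipeline(
+     "automatic-speech-recognition",
+     model="ezrab/whisper-tiny-minds14",
+ )
+
+ # The pipeline decodes the file and resamples it to the 16 kHz rate Whisper expects.
+ print(asr("sample.wav")["text"])
+ ```
+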
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
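+ Although the section above is a placeholder, the card's metadata does pin down the dataset: PolyAI/minds14, config `en-US`, split `train`. A loading sketch (the train/eval partitioning used for fine-tuning is not recorded in the card):
+
+ ```python
+ from datasets import load_dataset
+
+ # Config "en-US" and split "train" are taken from the card's metadata.
+ minds = load_dataset("PolyAI/minds14", name="en-US", split="train")
+ print(minds[0]["transcription"])
+ ```
+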
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a sketch of the equivalent training arguments follows the list):
+ - learning_rate: 3e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: constant_with_warmup
+ - lr_scheduler_warmup_steps: 50
+ - training_steps: 2000
+ - mixed_precision_training: Native AMP
+
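+ The hyperparameters above map onto `Seq2SeqTrainingArguments` roughly as follows. This is a sketch, not the author's actual script: `output_dir`, the evaluation cadence, and `predict_with_generate` are assumptions (the results table does suggest evaluation every 50 steps):
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="whisper-tiny-minds14",  # hypothetical
+     learning_rate=3e-5,
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=16,
+     seed=42,
+     lr_scheduler_type="constant_with_warmup",
+     warmup_steps=50,
+     max_steps=2000,
+     fp16=True,  # "Native AMP" mixed precision
+     eval_strategy="steps",
+     eval_steps=50,  # assumed from the 50-step rows in the results table
+     predict_with_generate=True,  # assumption: lets eval decode text for WER
+ )
+ # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default,
+ # so it needs no explicit arguments.
+ ```
+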
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
+ |:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|
+ | 0.7253 | 1.7857 | 50 | 0.5916 | 0.3751 | 0.3583 |
+ | 0.158 | 3.5714 | 100 | 0.6146 | 0.3399 | 0.3235 |
+ | 0.0244 | 5.3571 | 150 | 0.6676 | 0.3307 | 0.3264 |
+ | 0.0127 | 7.1429 | 200 | 0.6988 | 0.3251 | 0.3188 |
+ | 0.0065 | 8.9286 | 250 | 0.7239 | 0.3350 | 0.3329 |
+ | 0.0023 | 10.7143 | 300 | 0.7469 | 0.3344 | 0.3294 |
+ | 0.002 | 12.5 | 350 | 0.7677 | 0.3257 | 0.3217 |
+ | 0.0009 | 14.2857 | 400 | 0.7667 | 0.3208 | 0.3182 |
+ | 0.0009 | 16.0714 | 450 | 0.8388 | 0.3405 | 0.3388 |
+ | 0.0031 | 17.8571 | 500 | 0.7991 | 0.3313 | 0.3323 |
+ | 0.0003 | 19.6429 | 550 | 0.8032 | 0.3436 | 0.3406 |
+ | 0.0002 | 21.4286 | 600 | 0.8200 | 0.3418 | 0.3400 |
+ | 0.001 | 23.2143 | 650 | 0.8118 | 0.3436 | 0.3406 |
+ | 0.0005 | 25.0 | 700 | 0.8278 | 0.3344 | 0.3323 |
+ | 0.0007 | 26.7857 | 750 | 0.8299 | 0.3356 | 0.3318 |
+ | 0.0006 | 28.5714 | 800 | 0.8390 | 0.3344 | 0.3318 |
+ | 0.0004 | 30.3571 | 850 | 0.8442 | 0.3350 | 0.3323 |
+ | 0.0002 | 32.1429 | 900 | 0.8444 | 0.3307 | 0.3294 |
+ | 0.0005 | 33.9286 | 950 | 0.8549 | 0.3344 | 0.3329 |
+ | 0.0007 | 35.7143 | 1000 | 0.8515 | 0.3331 | 0.3329 |
+ | 0.0003 | 37.5 | 1050 | 0.8571 | 0.3263 | 0.3264 |
+ | 0.0005 | 39.2857 | 1100 | 0.8504 | 0.3307 | 0.3294 |
+ | 0.0001 | 41.0714 | 1150 | 0.8654 | 0.3313 | 0.3318 |
+ | 0.0005 | 42.8571 | 1200 | 0.8724 | 0.3337 | 0.3347 |
+ | 0.0001 | 44.6429 | 1250 | 0.8806 | 0.3325 | 0.3341 |
+ | 0.0001 | 46.4286 | 1300 | 0.8901 | 0.3344 | 0.3359 |
+ | 0.0001 | 48.2143 | 1350 | 0.8941 | 0.3344 | 0.3359 |
+ | 0.0001 | 50.0 | 1400 | 0.8987 | 0.3337 | 0.3353 |
+ | 0.0 | 51.7857 | 1450 | 0.9018 | 0.3337 | 0.3359 |
+ | 0.0 | 53.5714 | 1500 | 0.9073 | 0.3325 | 0.3353 |
+ | 0.0 | 55.3571 | 1550 | 0.9106 | 0.3319 | 0.3347 |
+ | 0.0 | 57.1429 | 1600 | 0.9152 | 0.3319 | 0.3347 |
+ | 0.0 | 58.9286 | 1650 | 0.9198 | 0.4824 | 0.4917 |
+ | 0.0 | 60.7143 | 1700 | 0.9242 | 0.4824 | 0.4923 |
+ | 0.0 | 62.5 | 1750 | 0.9279 | 0.4849 | 0.4947 |
+ | 0.0 | 64.2857 | 1800 | 0.9327 | 0.4855 | 0.4953 |
+ | 0.0 | 66.0714 | 1850 | 0.9374 | 0.4849 | 0.4953 |
+ | 0.0 | 67.8571 | 1900 | 0.9417 | 0.4855 | 0.4965 |
+ | 0.0 | 69.6429 | 1950 | 0.9461 | 0.4855 | 0.4965 |
+ | 0.0 | 71.4286 | 2000 | 0.9507 | 0.4855 | 0.4965 |
+
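+ "Wer Ortho" and "Wer" in the table are two variants of word error rate. A plausible reading (following the common Hugging Face Audio Course recipe, which this setup resembles; an assumption, not stated in the card) is that the orthographic score compares raw strings while the plain WER compares normalized text:
+
+ ```python
+ import evaluate
+ from transformers.models.whisper.english_normalizer import BasicTextNormalizer
+
+ wer_metric = evaluate.load("wer")
+ normalizer = BasicTextNormalizer()  # lowercases and strips punctuation/symbols
+
+ # Toy prediction/reference pair for illustration.
+ preds = ["I'd like to check my account balance."]
+ refs = ["i would like to check my account balance"]
+
+ # Orthographic WER: compare the strings as written.
+ wer_ortho = wer_metric.compute(predictions=preds, references=refs)
+
+ # Normalized WER: compare after Whisper's basic text normalization.
+ wer = wer_metric.compute(
+     predictions=[normalizer(p) for p in preds],
+     references=[normalizer(r) for r in refs],
+ )
+ print(wer_ortho, wer)
+ ```
+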
+
+ ### Framework versions
+
+ - Transformers 4.44.2
+ - Pytorch 2.4.1+cu121
+ - Datasets 3.0.1
+ - Tokenizers 0.19.1
generation_config.json ADDED
@@ -0,0 +1,248 @@
+ {
+   "alignment_heads": [
+     [
+       2,
+       2
+     ],
+     [
+       3,
+       0
+     ],
+     [
+       3,
+       2
+     ],
+     [
+       3,
+       3
+     ],
+     [
+       3,
+       4
+     ],
+     [
+       3,
+       5
+     ]
+   ],
+   "begin_suppress_tokens": [
+     220,
+     50257
+   ],
+   "bos_token_id": 50257,
+   "decoder_start_token_id": 50258,
+   "eos_token_id": 50257,
+   "forced_decoder_ids": [
+     [
+       1,
+       null
+     ],
+     [
+       2,
+       50359
+     ]
+   ],
+   "is_multilingual": true,
+   "lang_to_id": {
+     "<|af|>": 50327,
+     "<|am|>": 50334,
+     "<|ar|>": 50272,
+     "<|as|>": 50350,
+     "<|az|>": 50304,
+     "<|ba|>": 50355,
+     "<|be|>": 50330,
+     "<|bg|>": 50292,
+     "<|bn|>": 50302,
+     "<|bo|>": 50347,
+     "<|br|>": 50309,
+     "<|bs|>": 50315,
+     "<|ca|>": 50270,
+     "<|cs|>": 50283,
+     "<|cy|>": 50297,
+     "<|da|>": 50285,
+     "<|de|>": 50261,
+     "<|el|>": 50281,
+     "<|en|>": 50259,
+     "<|es|>": 50262,
+     "<|et|>": 50307,
+     "<|eu|>": 50310,
+     "<|fa|>": 50300,
+     "<|fi|>": 50277,
+     "<|fo|>": 50338,
+     "<|fr|>": 50265,
+     "<|gl|>": 50319,
+     "<|gu|>": 50333,
+     "<|haw|>": 50352,
+     "<|ha|>": 50354,
+     "<|he|>": 50279,
+     "<|hi|>": 50276,
+     "<|hr|>": 50291,
+     "<|ht|>": 50339,
+     "<|hu|>": 50286,
+     "<|hy|>": 50312,
+     "<|id|>": 50275,
+     "<|is|>": 50311,
+     "<|it|>": 50274,
+     "<|ja|>": 50266,
+     "<|jw|>": 50356,
+     "<|ka|>": 50329,
+     "<|kk|>": 50316,
+     "<|km|>": 50323,
+     "<|kn|>": 50306,
+     "<|ko|>": 50264,
+     "<|la|>": 50294,
+     "<|lb|>": 50345,
+     "<|ln|>": 50353,
+     "<|lo|>": 50336,
+     "<|lt|>": 50293,
+     "<|lv|>": 50301,
+     "<|mg|>": 50349,
+     "<|mi|>": 50295,
+     "<|mk|>": 50308,
+     "<|ml|>": 50296,
+     "<|mn|>": 50314,
+     "<|mr|>": 50320,
+     "<|ms|>": 50282,
+     "<|mt|>": 50343,
+     "<|my|>": 50346,
+     "<|ne|>": 50313,
+     "<|nl|>": 50271,
+     "<|nn|>": 50342,
+     "<|no|>": 50288,
+     "<|oc|>": 50328,
+     "<|pa|>": 50321,
+     "<|pl|>": 50269,
+     "<|ps|>": 50340,
+     "<|pt|>": 50267,
+     "<|ro|>": 50284,
+     "<|ru|>": 50263,
+     "<|sa|>": 50344,
+     "<|sd|>": 50332,
+     "<|si|>": 50322,
+     "<|sk|>": 50298,
+     "<|sl|>": 50305,
+     "<|sn|>": 50324,
+     "<|so|>": 50326,
+     "<|sq|>": 50317,
+     "<|sr|>": 50303,
+     "<|su|>": 50357,
+     "<|sv|>": 50273,
+     "<|sw|>": 50318,
+     "<|ta|>": 50287,
+     "<|te|>": 50299,
+     "<|tg|>": 50331,
+     "<|th|>": 50289,
+     "<|tk|>": 50341,
+     "<|tl|>": 50348,
+     "<|tr|>": 50268,
+     "<|tt|>": 50351,
+     "<|uk|>": 50280,
+     "<|ur|>": 50290,
+     "<|uz|>": 50337,
+     "<|vi|>": 50278,
+     "<|yi|>": 50335,
+     "<|yo|>": 50325,
+     "<|zh|>": 50260
+   },
+   "max_initial_timestamp_index": 50,
+   "max_length": 448,
+   "no_timestamps_token_id": 50363,
+   "pad_token_id": 50257,
+   "prev_sot_token_id": 50361,
+   "return_timestamps": false,
+   "suppress_tokens": [
+     1,
+     2,
+     7,
+     8,
+     9,
+     10,
+     14,
+     25,
+     26,
+     27,
+     28,
+     29,
+     31,
+     58,
+     59,
+     60,
+     61,
+     62,
+     63,
+     90,
+     91,
+     92,
+     93,
+     359,
+     503,
+     522,
+     542,
+     873,
+     893,
+     902,
+     918,
+     922,
+     931,
+     1350,
+     1853,
+     1982,
+     2460,
+     2627,
+     3246,
+     3253,
+     3268,
+     3536,
+     3846,
+     3961,
+     4183,
+     4667,
+     6585,
+     6647,
+     7273,
+     9061,
+     9383,
+     10428,
+     10929,
+     11938,
+     12033,
+     12331,
+     12562,
+     13793,
+     14157,
+     14635,
+     15265,
+     15618,
+     16553,
+     16604,
+     18362,
+     18956,
+     20075,
+     21675,
+     22520,
+     26130,
+     26161,
+     26435,
+     28279,
+     29464,
+     31650,
+     32302,
+     32470,
+     36865,
+     42863,
+     47425,
+     49870,
+     50254,
+     50258,
+     50358,
+     50359,
+     50360,
+     50361,
+     50362
+   ],
+   "task_to_id": {
+     "transcribe": 50359,
+     "translate": 50358
+   },
+   "transformers_version": "4.44.2"
+ }
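
This generation config is what `model.generate()` reads at inference time: `forced_decoder_ids` pins decoder position 2 to the transcribe token (50359) and leaves the language slot open for detection, `lang_to_id`/`task_to_id` resolve the `language`/`task` arguments, and `suppress_tokens` lists ids Whisper never emits. A usage sketch (the repo id is the same assumption as in the README above):

```python
import numpy as np
from transformers import WhisperForConditionalGeneration, WhisperProcessor

repo = "ezrab/whisper-tiny-minds14"  # hypothetical repo id
model = WhisperForConditionalGeneration.from_pretrained(repo)
processor = WhisperProcessor.from_pretrained(repo)

# One second of silence as a stand-in for real 16 kHz audio.
audio = np.zeros(16_000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

# "english" and "transcribe" resolve to tokens 50259 and 50359
# via the lang_to_id / task_to_id tables above.
ids = model.generate(inputs.input_features, language="english", task="transcribe")
print(processor.batch_decode(ids, skip_special_tokens=True))
```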
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:46629e30ba973845c37a64277eee6ebfadd5070a5d3a3280af9b7516c8b1a8bf
+ oid sha256:90eae979ef62370be3ba14d511eeb09f10eb7a76c721ea1114042eff489ba45e
  size 151061672
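
The LFS pointer's `oid` is the SHA-256 of the weight file's content, so a download can be verified against it (a minimal sketch):

```python
import hashlib

# Hash the downloaded weights in 1 MiB chunks and compare with the pointer's oid.
h = hashlib.sha256()
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert h.hexdigest() == "90eae979ef62370be3ba14d511eeb09f10eb7a76c721ea1114042eff489ba45e"
```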