wh_4_sun_syl_w_0_lr_2en4_b32_0020

This model is a fine-tuned version of openai/whisper-tiny on an unknown dataset. It achieves the following results on the training and evaluation sets at the final epoch:

  • Train Loss: 0.5573
  • Train Accuracy: 0.0301
  • Train Wermet: 0.1858
  • Train Wermet Syl: 0.3290
  • Validation Loss: 1.1996
  • Validation Accuracy: 0.0206
  • Validation Wermet: 0.3176
  • Validation Wermet Syl: 0.3171
  • Epoch: 19
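
As a rough illustration of how a TensorFlow Whisper checkpoint like this one could be used for transcription, a minimal sketch follows. The repository id is taken from this card's title, and the availability of processor files in the repo plus the audio sample used are assumptions, not part of the documented training setup:

```python
from datasets import load_dataset
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

repo_id = "bigmorning/wh_4_sun_syl_w_0_lr_2en4_b32_0020"

# If this repo does not ship processor/tokenizer files, load the processor
# from the base checkpoint "openai/whisper-tiny" instead.
processor = WhisperProcessor.from_pretrained(repo_id)
model = TFWhisperForConditionalGeneration.from_pretrained(repo_id)

# Any 16 kHz mono waveform works; here a sample from a small public test set is used.
sample = load_dataset(
    "hf-internal-testing/librispeech_asr_dummy", "clean", split="validation"
)[0]["audio"]
inputs = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="tf")

generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```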

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 0.0002, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0}
  • training_precision: float32
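
For reference, a minimal sketch of how an optimizer with these settings could be constructed using the AdamWeightDecay class shipped with Transformers. The actual training script is not included in this card, so this is only an assumption about how the configuration above maps to code:

```python
from transformers import AdamWeightDecay

# Mirrors the hyperparameters listed above: constant 2e-4 learning rate, no weight decay.
optimizer = AdamWeightDecay(
    learning_rate=2e-4,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    weight_decay_rate=0.0,
)

# The model would then be compiled and trained as a regular Keras model, e.g.:
# model.compile(optimizer=optimizer)
# model.fit(train_dataset, validation_data=eval_dataset, epochs=20)
```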

Training results

| Train Loss | Train Accuracy | Train Wermet | Train Wermet Syl | Validation Loss | Validation Accuracy | Validation Wermet | Validation Wermet Syl | Epoch |
|:----------:|:--------------:|:------------:|:----------------:|:---------------:|:-------------------:|:-----------------:|:---------------------:|:-----:|
| 5.2282     | 0.0106         | 3.5001       | 2.7069           | 4.0241          | 0.0114              | 0.9693            | 0.9528                | 0     |
| 4.7449     | 0.0116         | 0.8928       | 0.8621           | 3.9408          | 0.0114              | 0.9623            | 0.9431                | 1     |
| 4.7110     | 0.0116         | 0.9319       | 0.9494           | 4.0066          | 0.0114              | 0.9466            | 0.9714                | 2     |
| 4.6727     | 0.0117         | 0.9059       | 0.9227           | 3.9101          | 0.0114              | 0.9428            | 0.9156                | 3     |
| 4.6540     | 0.0117         | 0.9103       | 0.9487           | 3.9216          | 0.0115              | 0.9358            | 0.9594                | 4     |
| 4.6333     | 0.0117         | 0.9864       | 1.1325           | 3.9306          | 0.0115              | 0.9255            | 0.9484                | 5     |
| 4.6176     | 0.0117         | 0.9803       | 1.1203           | 3.9175          | 0.0115              | 0.9420            | 0.9530                | 6     |
| 4.5944     | 0.0118         | 1.0060       | 1.1999           | 3.8546          | 0.0115              | 0.9307            | 0.9259                | 7     |
| 4.5516     | 0.0119         | 0.9620       | 1.1243           | 3.7716          | 0.0117              | 0.8963            | 0.9090                | 8     |
| 4.3967     | 0.0123         | 0.9533       | 1.1468           | 3.3131          | 0.0126              | 0.8398            | 0.8514                | 9     |
| 3.6948     | 0.0142         | 1.0093       | 1.2963           | 2.5045          | 0.0143              | 0.7208            | 0.7981                | 10    |
| 2.7175     | 0.0174         | 1.0107       | 1.4791           | 1.6944          | 0.0173              | 0.5291            | 0.5397                | 11    |
| 1.9685     | 0.0207         | 0.8511       | 1.3318           | 1.3460          | 0.0189              | 0.4535            | 0.4901                | 12    |
| 1.5230     | 0.0231         | 0.6365       | 1.0150           | 1.4574          | 0.0184              | 0.4208            | 0.4469                | 13    |
| 1.1565     | 0.0254         | 0.4528       | 0.7311           | 1.1631          | 0.0200              | 0.3639            | 0.3688                | 14    |
| 0.9615     | 0.0268         | 0.3751       | 0.6218           | 1.1632          | 0.0201              | 0.3579            | 0.3572                | 15    |
| 0.8007     | 0.0281         | 0.3001       | 0.5086           | 1.1379          | 0.0205              | 0.3373            | 0.3423                | 16    |
| 0.6860     | 0.0291         | 0.2588       | 0.4497           | 1.1096          | 0.0206              | 0.3192            | 0.3128                | 17    |
| 0.5421     | 0.0304         | 0.2152       | 0.3919           | 1.0932          | 0.0209              | 0.3167            | 0.3123                | 18    |
| 0.5573     | 0.0301         | 0.1858       | 0.3290           | 1.1996          | 0.0206              | 0.3176            | 0.3171                | 19    |
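
The "Wermet" columns track a word-error-rate-style metric, with "Wermet Syl" as a syllable-level variant. The exact metric implementation used during training is not documented in this card; as a generic illustration only, a plain word error rate can be computed with the evaluate library:

```python
import evaluate

wer = evaluate.load("wer")

predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

# Returns the number of word-level edits (substitutions, insertions, deletions)
# divided by the number of reference words; here 1/6.
print(wer.compute(predictions=predictions, references=references))
```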

Framework versions

  • Transformers 4.34.0.dev0
  • TensorFlow 2.13.0
  • Tokenizers 0.13.3