
scream_tertius_dropout_replicate_test7a

This model is a fine-tuned version of openai/whisper-small on the NbAiLab/NCC_speech_all_v5 dataset. It achieves the following results on the evaluation set (a minimal inference sketch follows the list):

  • step: 19999
  • eval_loss: 1.1799
  • train_loss: 0.3145
  • eval_wer: 11.6626 %
  • eval_cer: 5.8907 %
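
The card does not include a usage snippet, so here is a minimal, hedged transcription sketch using the Transformers pipeline API. The repository id is an assumption inferred from the model name and dataset organisation; substitute the actual Hub id. The num_beams=5 setting mirrors the beam value listed under Training hyperparameters below.

```python
# Minimal inference sketch; the repository id below is hypothetical —
# replace it with the model's real Hugging Face Hub id.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="NbAiLab/scream_tertius_dropout_replicate_test7a",  # assumed repo id
)

# Beam search with num_beams=5 matches the beam setting reported on this card.
result = asr("audio.wav", generate_kwargs={"num_beams": 5})
print(result["text"])
```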

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of applying the dropout setting follows the list):

  • learning_rate: 2e-05
  • lr_scheduler_type: linear
  • per_device_train_batch_size: 32
  • total_train_batch_size_per_node: 128
  • total_train_batch_size: 1024
  • total_optimization_steps: 20,000
  • starting_optimization_step: None
  • finishing_optimization_step: 20,000
  • num_train_dataset_workers: 32
  • num_hosts: 8
  • total_num_training_examples: 20,480,000
  • steps_per_epoch: 1314
  • num_beams: 5
  • dropout: True
  • dropout_probability: 0.1
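
The dropout: True and dropout_probability: 0.1 entries indicate that dropout was raised from Whisper's default of 0.0 for this run. As a hedged illustration (the actual training script is not part of this card), the setting could be applied when loading the base model like this:

```python
# Illustrative only: shows how the dropout value above could be set on
# openai/whisper-small; the real training code for this run is not included.
from transformers import WhisperConfig, WhisperForConditionalGeneration

config = WhisperConfig.from_pretrained("openai/whisper-small")
config.dropout = 0.1  # dropout_probability from the list above (default is 0.0)

model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-small", config=config
)
```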

Training results

step   eval_loss  train_loss  eval_wer (%)  eval_cer (%)
0         1.3582      7.9231      169.1230      127.5435
1000      0.9203      0.9748       24.0256        9.2618
2000      0.9951      0.6747       18.7576        7.4326
3000      1.1073      0.5495       16.7479        7.1000
4000      1.1093      0.4612       14.4336        6.4147
5000      1.1719      0.4326       14.1900        6.2837
6000      1.2627      0.3998       12.8197        5.9814
7000      1.2785      0.3765       12.7893        6.1476
8000      1.1395      0.3869       12.5152        6.0519
9000      1.2327      0.3616       12.7893        6.1829
10000     1.0855      0.3620       11.4495        5.6790
11000     1.1018      0.3453       11.7540        5.7848
12000     0.9953      0.3486       11.7235        5.7294
13000     1.1321      0.3365       12.0280        6.0015
14000     1.2654      0.3335       11.6322        5.8050
15000     1.2149      0.3061       11.8453        5.8503
16000     1.1539      0.3090       11.9367        5.8503
17000     1.2530      0.3103       11.7540        5.8251
18000     1.1925      0.3209       11.4799        5.6790
19000     1.2155      0.2931       11.7235        5.9562
19999     1.1799      0.3145       11.6626        5.8907
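
For reference, WER and CER figures like those above are conventionally reported as percentages over normalised references and predictions. A hedged sketch using the open-source evaluate library (the exact evaluation script and text normalisation for this run are not documented on this card):

```python
# Illustrative WER/CER computation; the normalisation behind the numbers
# above is not documented here, so treat this as a sketch.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["dette er en test"]   # hypothetical reference transcript
predictions = ["dette er en test"]  # hypothetical model output

print(100 * wer_metric.compute(references=references, predictions=predictions))
print(100 * cer_metric.compute(references=references, predictions=predictions))
```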

Framework versions

  • Transformers 4.29.0.dev0
  • Datasets 2.12.0
  • Tokenizers 0.13.3