---
language:
- 'no'
license: apache-2.0
base_model: NbAiLab/nb-whisper-large-v3-RC4
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
model-index:
- name: nb-whisper-large-v0.8-vad3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nb-whisper-large-v0.8-vad3
This model is a fine-tuned version of [NbAiLab/nb-whisper-large-v3-RC4](https://huggingface.co/NbAiLab/nb-whisper-large-v3-RC4) on the NbAiLab/ncc_speech_styling_v2_vad3 dataset.
It achieves the following results on the evaluation set:
- step: 49999
- validation_nst_loss: 0.4292
- train_loss: 0.4893
- validation_nst_wer: 2.2211
- validation_nst_cer: 0.6628
- validation_nst_exact_wer: 2.8145
- validation_nst_exact_cer: 0.7555
- validation_clean_stortinget_no_loss: 0.7534
- validation_clean_stortinget_no_wer: 8.9128
- validation_clean_stortinget_no_cer: 5.6979
- validation_clean_stortinget_no_exact_wer: 11.8159
- validation_clean_stortinget_no_exact_cer: 6.1484
## Model description
More information needed
## Intended uses & limitations
More information needed
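
As a point of reference, here is a minimal transcription sketch using the 🤗 Transformers ASR pipeline. The repo id, audio path, and generation settings are illustrative assumptions, not values verified for this checkpoint:

```python
# Minimal sketch, assuming the checkpoint is published as "NbAiLab/nb-whisper-large"
# and used through the standard Transformers ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="NbAiLab/nb-whisper-large",  # assumed repo id
)

# "audio.mp3" is a placeholder path; chunking and language settings are illustrative.
result = asr(
    "audio.mp3",
    chunk_length_s=30,
    generate_kwargs={"language": "no", "task": "transcribe"},
)
print(result["text"])
```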
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- lr_scheduler_type: linear
- per_device_train_batch_size: 8
- total_train_batch_size_per_node: 32
- total_train_batch_size: 1024
- total_optimization_steps: 15,000
- starting_optimization_step: 35,000
- finishing_optimization_step: 50,000
- num_train_dataset_workers: 32
- num_hosts: 32
- total_num_training_examples: 51,200,000
- steps_per_epoch: 24982
- num_beams: None
- weight_decay: 0.01
- adam_beta1: 0.9
- adam_beta2: 0.98
- adam_epsilon: 1e-06
- dropout: True
- bpe_dropout_probability: 0.2
- activation_dropout_probability: 0.1
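
For context, the optimizer-related values above roughly correspond to an AdamW configuration with a linear learning-rate schedule. A hedged Optax sketch follows; the schedule's end value is an assumption, and the actual training script may differ:

```python
# Illustrative mapping of the listed hyperparameters onto an Optax AdamW setup.
# The final learning rate of the schedule is an assumption, not stated in this card.
import optax

schedule = optax.linear_schedule(
    init_value=7e-5,          # learning_rate
    end_value=0.0,            # assumed final learning rate
    transition_steps=50_000,  # finishing_optimization_step
)

optimizer = optax.adamw(
    learning_rate=schedule,
    b1=0.9,             # adam_beta1
    b2=0.98,            # adam_beta2
    eps=1e-6,           # adam_epsilon
    weight_decay=0.01,  # weight_decay
)
```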
### Training results
| step | validation_nst_loss | train_loss | validation_nst_wer | validation_nst_cer | validation_nst_exact_wer | validation_nst_exact_cer | validation_clean_stortinget_no_loss | validation_clean_stortinget_no_wer | validation_clean_stortinget_no_cer | validation_clean_stortinget_no_exact_wer | validation_clean_stortinget_no_exact_cer |
|:-----:|:-------------------:|:----------:|:------------------:|:------------------:|:------------------------:|:------------------------:|:-----------------------------------:|:----------------------------------:|:----------------------------------:|:----------------------------------------:|:----------------------------------------:|
| 0 | 0.4259 | 0.9588 | 2.1721 | 0.6246 | 2.7111 | 0.7079 | 0.6807 | 8.5931 | 5.4608 | 11.4221 | 5.8946 |
| 5000 | 0.4376 | 0.5822 | 2.5859 | 0.7793 | 3.0867 | 0.8563 | 0.6738 | 9.1686 | 5.8478 | 12.0792 | 6.3020 |
| 10000 | 0.4368 | 0.5675 | 2.5913 | 0.7271 | 3.2337 | 0.8269 | 0.6875 | 9.2705 | 5.9200 | 12.1741 | 6.3750 |
| 15000 | 0.4335 | 0.5403 | 2.3409 | 0.6936 | 2.9180 | 0.7821 | 0.7187 | 9.0834 | 5.7344 | 11.9962 | 6.1944 |
| 20000 | 0.4324 | 0.5187 | 2.3518 | 0.6945 | 2.9561 | 0.7857 | 0.7357 | 8.9839 | 5.7154 | 11.8610 | 6.1664 |
| 25000 | 0.4307 | 0.5158 | 2.3028 | 0.6712 | 2.9343 | 0.7711 | 0.7228 | 9.1284 | 5.8704 | 11.9915 | 6.3161 |
| 30000 | 0.4312 | 0.5108 | 2.2810 | 0.6656 | 2.8690 | 0.7564 | 0.7428 | 8.9010 | 5.6726 | 11.8349 | 6.1305 |
| 35000 | 0.4299 | 0.4908 | 2.2320 | 0.6768 | 2.8417 | 0.7729 | 0.7513 | 8.8015 | 5.6123 | 11.6854 | 6.0642 |
| 40000 | 0.4313 | 0.4865 | 2.2973 | 0.6917 | 2.8907 | 0.7839 | 0.7545 | 8.9057 | 5.6912 | 11.8491 | 6.1465 |
| 45000 | 0.4303 | 0.4849 | 2.2429 | 0.6665 | 2.8254 | 0.7564 | 0.7484 | 8.9578 | 5.7320 | 11.8752 | 6.1851 |
| 49999 | 0.4292 | 0.4893 | 2.2211 | 0.6628 | 2.8145 | 0.7555 | 0.7534 | 8.9128 | 5.6979 | 11.8159 | 6.1484 |
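
The WER and CER figures in this card are percentages. A hedged sketch of how such scores are commonly computed with the 🤗 Evaluate library (placeholder transcripts; not necessarily the exact evaluation script used here):

```python
# Illustrative: word and character error rates scaled to percentages,
# matching the scale used in the tables above.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["dette er en test"]   # placeholder reference transcript
predictions = ["dette er en test"]  # placeholder model output

wer = 100 * wer_metric.compute(references=references, predictions=predictions)
cer = 100 * cer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.2f}  CER: {cer:.2f}")
```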
### Framework versions
- Transformers 4.36.2
- Datasets 2.16.1
- Tokenizers 0.15.0