whisper-large-v2-zh-common-voice-finetuned
This model is a parameter-efficient (PEFT) fine-tuned version of openai/whisper-large-v2 on the common_voice_16_1 dataset. It achieves the following results on the evaluation set:
- Loss: 0.1553
Model description
More information needed
Intended uses & limitations
More information needed
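The card does not yet include a usage example. Below is a minimal, hedged sketch of how an adapter checkpoint like this one is typically loaded: attach the PEFT adapter from this repository on top of the openai/whisper-large-v2 base model. The repository and model IDs come from this card; everything else is a standard Transformers/PEFT pattern, not something the author has confirmed.

```python
def load_asr_model(adapter_id="Allen1984/whisper-large-v2-zh-common-voice-finetuned"):
    """Load base Whisper and attach the PEFT adapter from this repository.

    Heavy imports live inside the function so the sketch can be read
    (and the function defined) without transformers/peft installed.
    """
    from transformers import WhisperForConditionalGeneration, WhisperProcessor
    from peft import PeftModel

    # Base model named on this card.
    base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
    # Merge in the fine-tuned adapter weights.
    model = PeftModel.from_pretrained(base, adapter_id)
    processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
    return model, processor
```

Transcription would then follow the usual Whisper flow: extract log-mel features with the processor and call `model.generate(...)` on them.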
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 5
- mixed_precision_training: Native AMP
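The schedule implied by these hyperparameters can be sketched as a worked example. Assuming 250 optimizer steps per epoch (as the results table below suggests: 1250 steps over 5 epochs), a warmup ratio of 0.2 means 250 warmup steps, with the linear scheduler ramping up to the peak learning rate of 0.001 and then decaying linearly to zero:

```python
# Linear schedule with warmup, mirroring the hyperparameters above.
# Assumption: 1250 total steps (5 epochs x 250 steps), per the results table.
TOTAL_STEPS = 1250
WARMUP_STEPS = int(0.2 * TOTAL_STEPS)  # lr_scheduler_warmup_ratio 0.2 -> 250 steps
PEAK_LR = 0.001                        # learning_rate

def lr_at(step):
    """Learning rate at a given optimizer step under the linear schedule."""
    if step < WARMUP_STEPS:
        # Linear ramp from 0 up to the peak over the warmup phase.
        return PEAK_LR * step / WARMUP_STEPS
    # Linear decay from the peak down to 0 over the remaining steps.
    return PEAK_LR * (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS)
```

For example, `lr_at(250)` returns the peak 0.001, `lr_at(750)` (mid-decay) returns 0.0005, and `lr_at(1250)` returns 0.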
Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
7.0025 | 0.1 | 25 | 6.6562 |
3.4915 | 0.2 | 50 | 2.0918 |
1.3912 | 0.3 | 75 | 0.1847 |
0.7276 | 0.4 | 100 | 0.1521 |
0.1706 | 0.5 | 125 | 0.1460 |
0.182 | 0.6 | 150 | 0.1515 |
0.1747 | 0.7 | 175 | 0.1487 |
0.2207 | 0.8 | 200 | 0.1560 |
0.1974 | 0.9 | 225 | 0.1647 |
0.1926 | 1.0 | 250 | 0.1963 |
0.2039 | 1.1 | 275 | 0.2070 |
0.2068 | 1.2 | 300 | 0.2274 |
0.2101 | 1.3 | 325 | 0.2065 |
0.1962 | 1.4 | 350 | 0.2047 |
0.2032 | 1.5 | 375 | 0.2216 |
0.1758 | 1.6 | 400 | 0.2117 |
0.268 | 1.7 | 425 | 0.2224 |
0.2528 | 1.8 | 450 | 0.1976 |
0.263 | 1.9 | 475 | 0.2013 |
0.2271 | 2.0 | 500 | 0.2091 |
0.1268 | 2.1 | 525 | 0.2068 |
0.1629 | 2.2 | 550 | 0.2017 |
0.1226 | 2.3 | 575 | 0.2111 |
0.1329 | 2.4 | 600 | 0.1984 |
0.1441 | 2.5 | 625 | 0.1918 |
0.1505 | 2.6 | 650 | 0.1870 |
0.1135 | 2.7 | 675 | 0.1804 |
0.1307 | 2.8 | 700 | 0.1863 |
0.1246 | 2.9 | 725 | 0.1801 |
0.1313 | 3.0 | 750 | 0.1815 |
0.0625 | 3.1 | 775 | 0.1816 |
0.0656 | 3.2 | 800 | 0.1754 |
0.0634 | 3.3 | 825 | 0.1826 |
0.0626 | 3.4 | 850 | 0.1783 |
0.0571 | 3.5 | 875 | 0.1788 |
0.0615 | 3.6 | 900 | 0.1711 |
0.0544 | 3.7 | 925 | 0.1597 |
0.0675 | 3.8 | 950 | 0.1727 |
0.0468 | 3.9 | 975 | 0.1741 |
0.0593 | 4.0 | 1000 | 0.1586 |
0.0212 | 4.1 | 1025 | 0.1595 |
0.0236 | 4.2 | 1050 | 0.1611 |
0.0211 | 4.3 | 1075 | 0.1599 |
0.0161 | 4.4 | 1100 | 0.1606 |
0.0137 | 4.5 | 1125 | 0.1637 |
0.0247 | 4.6 | 1150 | 0.1625 |
0.0146 | 4.7 | 1175 | 0.1588 |
0.0118 | 4.8 | 1200 | 0.1567 |
0.0151 | 4.9 | 1225 | 0.1559 |
0.0146 | 5.0 | 1250 | 0.1553 |
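Note that the final checkpoint (0.1553 at step 1250) is not the minimum validation loss in the table; the best intermediate value is 0.1460 at step 125. A small sketch over a few (step, validation loss) pairs taken from the table makes the comparison explicit:

```python
# A few (step, validation loss) checkpoints copied from the table above.
checkpoints = [(125, 0.1460), (925, 0.1597), (1000, 0.1586), (1250, 0.1553)]

# Pick the checkpoint with the lowest validation loss.
best_step, best_loss = min(checkpoints, key=lambda pair: pair[1])
```

Here `best_step` is 125 with loss 0.1460, which is why options like `load_best_model_at_end` are worth considering when validation loss rises mid-training before partially recovering.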
Framework versions
- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1