DeCRED_small_cv_2

This model was fine-tuned on the common_voice_13_0 dataset. It achieves the following results on the evaluation set:

  • Loss: 1.0669
  • CER (character error rate): 0.0663
  • WER (word error rate): 0.1563
  • MER (match error rate): 0.1532
  • WIL (word information lost): 0.2546
  • WIP (word information preserved): 0.7454
  • Hits: 128002
  • Substitutions: 17367
  • Deletions: 2812
  • Insertions: 2975
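The summary rates above can be cross-checked against the raw edit counts. A minimal sketch using the standard jiwer-style definitions of WER, MER, WIL, and WIP (the exact tokenization used for evaluation is not stated in this card):

```python
# Recompute the evaluation-set error rates from the raw edit counts above.
hits, subs, dels, ins = 128002, 17367, 2812, 2975

ref_len = hits + subs + dels   # words in the reference transcripts
hyp_len = hits + subs + ins    # words in the model's hypotheses
errors = subs + dels + ins

wer = errors / ref_len                     # word error rate
mer = errors / (ref_len + ins)             # match error rate
wip = (hits / ref_len) * (hits / hyp_len)  # word information preserved
wil = 1 - wip                              # word information lost

print(round(wer, 4), round(mer, 4), round(wil, 4), round(wip, 4))
# -> 0.1563 0.1532 0.2546 0.7454
```

The four rates agree with the reported values, so the counts and summary metrics are internally consistent.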

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.002
  • train_batch_size: 128
  • eval_batch_size: 64
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • total_train_batch_size: 256
  • total_eval_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 10000
  • num_epochs: 50.0
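The `linear` scheduler with 10000 warmup steps ramps the learning rate from 0 up to the 0.002 peak, then decays it linearly back to 0 over the remaining steps. A minimal sketch, assuming 148850 total optimizer steps (the final step count in the training log):

```python
# Sketch of the HF "linear" LR schedule with warmup, using the values above.
PEAK_LR = 0.002
WARMUP = 10_000
TOTAL_STEPS = 148_850  # final step count from the training log

def lr_at(step: int) -> float:
    """Learning rate after `step` optimizer steps."""
    if step < WARMUP:
        return PEAK_LR * step / WARMUP  # linear warmup
    return PEAK_LR * (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP)  # linear decay

print(lr_at(5_000))    # halfway through warmup
print(lr_at(148_850))  # end of training
```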

Training results

| Training Loss | Epoch | Step | Validation Loss | CER | WER | MER | WIL | WIP | Hits | Substitutions | Deletions | Insertions |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.6144 | 5.0 | 14885 | 1.4427 | 0.1526 | 0.3199 | 0.3057 | 0.4765 | 0.5235 | 107636 | 34844 | 5701 | 6856 |
| 1.5552 | 6.0 | 17862 | 1.3968 | 0.1438 | 0.3032 | 0.2903 | 0.4549 | 0.5451 | 109845 | 32935 | 5401 | 6588 |
| 1.5102 | 7.0 | 20839 | 1.3578 | 0.1327 | 0.2834 | 0.2726 | 0.4305 | 0.5695 | 112073 | 30881 | 5227 | 5890 |
| 1.4504 | 8.0 | 23816 | 1.3262 | 0.1256 | 0.2706 | 0.2605 | 0.4147 | 0.5853 | 113841 | 29828 | 4512 | 5758 |
| 1.4098 | 9.0 | 26793 | 1.2918 | 0.1195 | 0.2578 | 0.2486 | 0.3967 | 0.6033 | 115426 | 28175 | 4580 | 5441 |
| 1.3717 | 10.0 | 29770 | 1.2777 | 0.1146 | 0.2497 | 0.2417 | 0.3875 | 0.6125 | 116052 | 27463 | 4666 | 4869 |
| 1.3573 | 11.0 | 32747 | 1.2628 | 0.1123 | 0.2456 | 0.2372 | 0.3804 | 0.6196 | 117058 | 26930 | 4193 | 5268 |
| 1.3433 | 12.0 | 35724 | 1.2455 | 0.1085 | 0.2356 | 0.2289 | 0.3683 | 0.6317 | 117567 | 25800 | 4814 | 4293 |
| 1.3281 | 13.0 | 38701 | 1.2333 | 0.1068 | 0.2336 | 0.2254 | 0.3623 | 0.6377 | 118972 | 25420 | 3789 | 5402 |
| 1.3068 | 14.0 | 41678 | 1.2159 | 0.1019 | 0.2252 | 0.2184 | 0.3527 | 0.6473 | 119434 | 24671 | 4076 | 4622 |
| 1.2847 | 15.0 | 44655 | 1.2081 | 0.1017 | 0.2244 | 0.2176 | 0.3513 | 0.6487 | 119608 | 24531 | 4042 | 4683 |
| 1.2753 | 16.0 | 47632 | 1.2023 | 0.1007 | 0.2197 | 0.2135 | 0.3454 | 0.6546 | 119928 | 24042 | 4211 | 4304 |
| 1.2793 | 17.0 | 50609 | 1.1862 | 0.0950 | 0.2123 | 0.2062 | 0.3354 | 0.6646 | 121093 | 23428 | 3660 | 4365 |
| 1.2676 | 18.0 | 53586 | 1.1843 | 0.0927 | 0.2105 | 0.2047 | 0.3328 | 0.6672 | 121198 | 23170 | 3813 | 4207 |
| 1.2256 | 19.0 | 56563 | 1.1795 | 0.0936 | 0.2089 | 0.2034 | 0.3308 | 0.6692 | 121257 | 22976 | 3948 | 4033 |
| 1.2238 | 20.0 | 59540 | 1.1736 | 0.0932 | 0.2071 | 0.2012 | 0.3270 | 0.6730 | 121864 | 22683 | 3634 | 4372 |
| 1.2206 | 21.0 | 62517 | 1.1623 | 0.0892 | 0.1996 | 0.1947 | 0.3178 | 0.6822 | 122333 | 21986 | 3862 | 3732 |
| 1.2018 | 22.0 | 65494 | 1.1614 | 0.0893 | 0.2013 | 0.1964 | 0.3200 | 0.6800 | 122051 | 22093 | 4037 | 3703 |
| 1.1791 | 23.0 | 68471 | 1.1510 | 0.0868 | 0.1953 | 0.1906 | 0.3114 | 0.6886 | 122943 | 21479 | 3759 | 3708 |
| 1.1958 | 24.0 | 71448 | 1.1438 | 0.0855 | 0.1931 | 0.1883 | 0.3078 | 0.6922 | 123356 | 21215 | 3610 | 3784 |
| 1.1672 | 25.0 | 74425 | 1.1420 | 0.0863 | 0.1940 | 0.1891 | 0.3088 | 0.6912 | 123289 | 21266 | 3626 | 3861 |
| 1.1595 | 26.0 | 77402 | 1.1358 | 0.0843 | 0.1898 | 0.1852 | 0.3026 | 0.6974 | 123784 | 20743 | 3654 | 3735 |
| 1.1803 | 27.0 | 80379 | 1.1343 | 0.0838 | 0.1901 | 0.1856 | 0.3041 | 0.6959 | 123595 | 20966 | 3620 | 3580 |
| 1.1488 | 28.0 | 83356 | 1.1262 | 0.0810 | 0.1855 | 0.1809 | 0.2972 | 0.7028 | 124441 | 20511 | 3229 | 3746 |
| 1.1303 | 29.0 | 86333 | 1.1233 | 0.0801 | 0.1837 | 0.1793 | 0.2946 | 0.7054 | 124600 | 20302 | 3279 | 3636 |
| 1.1266 | 30.0 | 89310 | 1.1203 | 0.0791 | 0.1818 | 0.1777 | 0.2918 | 0.7082 | 124687 | 20007 | 3487 | 3447 |
| 1.14 | 31.0 | 92287 | 1.1179 | 0.0790 | 0.1813 | 0.1769 | 0.2905 | 0.7095 | 124938 | 19925 | 3318 | 3616 |
| 1.1151 | 32.0 | 95264 | 1.1115 | 0.0776 | 0.1794 | 0.1752 | 0.2885 | 0.7115 | 125137 | 19847 | 3197 | 3534 |
| 1.1043 | 33.0 | 98241 | 1.1080 | 0.0773 | 0.1785 | 0.1744 | 0.2866 | 0.7134 | 125253 | 19624 | 3304 | 3522 |
| 1.1157 | 34.0 | 101218 | 1.1039 | 0.0762 | 0.1750 | 0.1710 | 0.2817 | 0.7183 | 125705 | 19302 | 3174 | 3458 |
| 1.0911 | 35.0 | 104195 | 1.1004 | 0.0747 | 0.1740 | 0.1700 | 0.2800 | 0.7200 | 125869 | 19160 | 3152 | 3466 |
| 1.0722 | 36.0 | 107172 | 1.0978 | 0.0743 | 0.1719 | 0.1684 | 0.2776 | 0.7224 | 125819 | 18952 | 3410 | 3111 |
| 1.092 | 37.0 | 110149 | 1.0953 | 0.0742 | 0.1714 | 0.1676 | 0.2763 | 0.7237 | 126142 | 18878 | 3161 | 3362 |
| 1.0763 | 38.0 | 113126 | 1.0914 | 0.0722 | 0.1686 | 0.1651 | 0.2726 | 0.7274 | 126377 | 18617 | 3187 | 3181 |
| 1.0667 | 39.0 | 116103 | 1.0918 | 0.0729 | 0.1690 | 0.1654 | 0.2728 | 0.7272 | 126366 | 18602 | 3213 | 3224 |
| 1.0651 | 40.0 | 119080 | 1.0845 | 0.0718 | 0.1662 | 0.1627 | 0.2690 | 0.7310 | 126749 | 18373 | 3059 | 3191 |
| 1.0761 | 41.0 | 122057 | 1.0836 | 0.0703 | 0.1648 | 0.1614 | 0.2673 | 0.7327 | 126911 | 18271 | 2999 | 3156 |
| 1.0509 | 42.0 | 125034 | 1.0828 | 0.0709 | 0.1647 | 0.1615 | 0.2670 | 0.7330 | 126714 | 18177 | 3290 | 2942 |
| 1.0409 | 43.0 | 128011 | 1.0798 | 0.0707 | 0.1640 | 0.1607 | 0.2658 | 0.7342 | 126946 | 18103 | 3132 | 3070 |
| 1.0525 | 44.0 | 130988 | 1.0760 | 0.0688 | 0.1608 | 0.1575 | 0.2614 | 0.7386 | 127451 | 17870 | 2860 | 3100 |
| 1.0359 | 45.0 | 133965 | 1.0745 | 0.0680 | 0.1601 | 0.1568 | 0.2602 | 0.7398 | 127571 | 17771 | 2839 | 3115 |
| 1.0144 | 46.0 | 136942 | 1.0738 | 0.0681 | 0.1607 | 0.1574 | 0.2614 | 0.7386 | 127503 | 17888 | 2790 | 3139 |
| 1.054 | 47.0 | 139919 | 1.0691 | 0.0672 | 0.1586 | 0.1554 | 0.2578 | 0.7422 | 127745 | 17575 | 2861 | 3062 |
| 1.0427 | 48.0 | 142896 | 1.0681 | 0.0667 | 0.1573 | 0.1542 | 0.2562 | 0.7438 | 127851 | 17473 | 2857 | 2981 |
| 1.0067 | 49.0 | 145873 | 1.0682 | 0.0668 | 0.1568 | 0.1537 | 0.2553 | 0.7447 | 127906 | 17401 | 2874 | 2957 |
| 1.0054 | 50.0 | 148850 | 1.0669 | 0.0663 | 0.1563 | 0.1532 | 0.2546 | 0.7454 | 128002 | 17367 | 2812 | 2975 |
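As a sanity check, the logged step counts are consistent with the distributed setup above: every epoch adds exactly 2977 optimizer steps, which at the effective batch size of 256 (128 per device × 2 GPUs) implies on the order of 762k training utterances per epoch. A minimal sketch of that arithmetic:

```python
# Sanity-check the training log: steps per epoch and implied dataset size.
steps = {5: 14885, 25: 74425, 50: 148850}  # epoch -> cumulative optimizer steps

steps_per_epoch = steps[50] // 50
assert all(s == epoch * steps_per_epoch for epoch, s in steps.items())

effective_batch = 128 * 2  # train_batch_size * num_devices
approx_train_examples = steps_per_epoch * effective_batch
print(steps_per_epoch, approx_train_examples)
# -> 2977 762112
```

The 762112 figure is only an upper-bound estimate per epoch, since the last batch of an epoch may be smaller than 256.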

Framework versions

  • Transformers 4.40.0.dev0
  • Pytorch 2.2.0+rocm5.6
  • Datasets 2.18.0
  • Tokenizers 0.15.2

Wandb run

https://wandb.ai/butspeechfit/decred_commonvoice_en/runs/DeCRED_small_cv_2_continue

Model size

  • 36M parameters (F32, safetensors)

Model tree for Lakoc/DeCRED_small_cv_2

  • Finetunes: 3 models