jonatasgrosman committed on
Commit 2175f23
1 Parent(s): 005e305

update model card README.md

Files changed (1)
  1. README.md +91 -0

README.md ADDED
@@ -0,0 +1,91 @@
+ ---
+ license: apache-2.0
+ tags:
+ - generated_from_trainer
+ datasets:
+ - common_voice_11_0
+ metrics:
+ - wer
+ model-index:
+ - name: whisper-large-zh-cv11
+   results:
+   - task:
+       name: Automatic Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+       name: common_voice_11_0
+       type: common_voice_11_0
+       config: zh-CN
+       split: validation[:1000]
+       args: zh-CN
+     metrics:
+     - name: Wer
+       type: wer
+       value: 52.307692307692314
+ ---
+ 
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+ 
+ # whisper-large-zh-cv11
+ 
+ This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the common_voice_11_0 dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.2501
+ - Wer: 52.3077
+ - Cer: 8.9573
+ 
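+ A quick way to try the model (a hedged sketch, not an official snippet: the repo id is inferred from this card's name, and `sample.wav` stands in for any 16 kHz mono recording):
+ 
+ ```python
+ # Transcribe Chinese speech with the fine-tuned checkpoint via the ASR pipeline.
+ from transformers import pipeline
+ 
+ asr = pipeline(
+     "automatic-speech-recognition",
+     model="jonatasgrosman/whisper-large-zh-cv11",  # assumed repo id
+ )
+ # Whisper picks language/task from prompt tokens; force Chinese transcription.
+ asr.model.config.forced_decoder_ids = asr.tokenizer.get_decoder_prompt_ids(
+     language="zh", task="transcribe"
+ )
+ print(asr("sample.wav")["text"])  # placeholder audio path
+ ```
+ 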
+ ## Model description
+ 
+ More information needed
+ 
+ ## Intended uses & limitations
+ 
+ More information needed
+ 
+ ## Training and evaluation data
+ 
+ More information needed
+ 
+ ## Training procedure
+ 
+ ### Training hyperparameters
+ 
+ The following hyperparameters were used during training (a mapping onto `Seq2SeqTrainingArguments` is sketched after the list):
+ - learning_rate: 5e-06
+ - train_batch_size: 16
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 32
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 2000
+ - training_steps: 20000
+ - mixed_precision_training: Native AMP
+ 
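+ For reference, a rough sketch of how these values would map onto transformers' `Seq2SeqTrainingArguments` (the actual training script is not part of this card; `output_dir` is a placeholder):
+ 
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+ 
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="./whisper-large-zh-cv11",  # placeholder
+     learning_rate=5e-6,
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=8,
+     seed=42,
+     gradient_accumulation_steps=2,  # 16 x 2 = 32 total train batch size
+     # Adam betas (0.9, 0.999) and epsilon 1e-08 are the Trainer defaults.
+     lr_scheduler_type="linear",
+     warmup_steps=2000,
+     max_steps=20000,
+     fp16=True,  # "Native AMP" mixed-precision training
+ )
+ ```
+ 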
+ ### Training results
+ 
+ | Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
+ |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
+ | 0.3314 | 0.83 | 1000 | 0.2110 | 65.7014 | 10.8047 |
+ | 0.2747 | 1.66 | 2000 | 0.2005 | 58.1900 | 9.4191 |
+ | 0.1989 | 2.49 | 3000 | 0.1983 | 56.1991 | 9.0939 |
+ | 0.1142 | 3.31 | 4000 | 0.2076 | 55.0226 | 9.1589 |
+ | 0.0747 | 4.14 | 5000 | 0.2131 | 56.3801 | 9.0483 |
+ | 0.0709 | 4.97 | 6000 | 0.2165 | 54.6606 | 8.9768 |
+ | 0.0432 | 5.8 | 7000 | 0.2222 | 54.0271 | 8.9508 |
+ | 0.0261 | 6.63 | 8000 | 0.2299 | 54.4796 | 9.0353 |
+ | 0.0152 | 7.46 | 9000 | 0.2290 | 52.7602 | 8.8076 |
+ | 0.0054 | 8.28 | 10000 | 0.2435 | 51.6742 | 8.5279 |
+ | 0.0028 | 9.11 | 11000 | 0.2421 | 53.0317 | 8.9833 |
+ | 0.0045 | 9.94 | 12000 | 0.2462 | 52.9412 | 8.7751 |
+ | 0.0016 | 10.77 | 13000 | 0.2501 | 52.3077 | 8.9573 |
+ 
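+ Illustratively, scores like the Wer/Cer columns above can be computed with the `evaluate` library (this may differ from the exact scoring script used; the sentence pair below is made up):
+ 
+ ```python
+ import evaluate  # also requires the jiwer package
+ 
+ # Hypothetical reference/prediction pair; the card's numbers come from
+ # Common Voice 11.0 zh-CN validation[:1000] transcripts.
+ references = ["今天天气真好"]
+ predictions = ["今天天气很好"]
+ 
+ wer = evaluate.load("wer")
+ cer = evaluate.load("cer")
+ print(100 * wer.compute(predictions=predictions, references=references))
+ print(100 * cer.compute(predictions=predictions, references=references))
+ ```
+ 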
+ ### Framework versions
+ 
+ - Transformers 4.26.0.dev0
+ - Pytorch 1.13.1+cu117
+ - Datasets 2.7.1.dev0
+ - Tokenizers 0.13.2