Tanel committed
Commit 6759493 · 1 Parent(s): 15f933e

Update README.md

Files changed (1):
  1. README.md +57 -0
README.md CHANGED
@@ -39,3 +39,60 @@ model-index:
      type: cer
      value: 3.194
---

# Whisper-medium-et

This is a Whisper-medium model ([openai/whisper-medium](https://huggingface.co/openai/whisper-medium)) finetuned on around 800 hours of diverse Estonian data.

## Model description

This is a general-purpose Estonian ASR model trained in the Lab of Language Technology at TalTech.

## Intended uses & limitations

This model is intended for general-purpose speech recognition, such as broadcast conversations, interviews, talks, etc.

## How to use

Use it like any other Whisper model via Hugging Face transformers, or with a faster decoder such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
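
A minimal usage sketch with the transformers ASR pipeline is shown below. The repository ID `your-org/whisper-medium-et` is a placeholder (replace it with this model's actual Hub ID), and `audio.wav` stands for any local recording.

```python
# Minimal sketch: transcribing a file with the transformers ASR pipeline.
# NOTE: "your-org/whisper-medium-et" is a placeholder repository ID --
# substitute the actual Hub ID of this model.
import torch
from transformers import pipeline

pipe = pipeline(
    task="automatic-speech-recognition",
    model="your-org/whisper-medium-et",  # placeholder, replace with the real repo ID
    chunk_length_s=30,                   # Whisper operates on 30-second windows
    device=0 if torch.cuda.is_available() else -1,
)

# Transcribe a local audio file; the pipeline resamples and chunks it internally.
result = pipe("audio.wav")
print(result["text"])
```

With `chunk_length_s` set, recordings longer than Whisper's native 30-second window are split into chunks and the partial transcripts are stitched back together. Using faster-whisper instead requires first converting the checkpoint to CTranslate2 format.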

## Limitations and bias

Since this model was trained mostly on broadcast speech and texts from the web, it might have problems correctly decoding the following:
* Speech containing technical and other domain-specific terms
* Children's speech
* Non-native speech
* Speech recorded under very noisy conditions or with a microphone far from the speaker
* Very spontaneous and overlapping speech

## Training data

Acoustic training data:

| Type                  | Amount (h) |
|-----------------------|:----------:|
| Broadcast speech      | 591        |
| Spontaneous speech    | 53         |
| Elderly speech corpus | 53         |
| Talks, lectures       | 49         |
| Parliament speeches   | 31         |
| *Total*               | *761*      |

## Training procedure

Finetuned using ESPnet and then converted to the transformers format using [this](https://gist.github.com/alumae/2dcf473b667cec9d513b80ea24e94672) script.
The finetuning procedure is similar to that of [this](https://huggingface.co/espnet/shihlun_asr_whisper_medium_finetuned_librispeech100) model.

## Evaluation results

### WER

WER results below are obtained using greedy decoding (i.e., beam size 1); a rough reproduction sketch follows the table.

| Dataset           | WER (%) |
|-------------------|---------|
| Common Voice 8.0  | 13.8    |
| Common Voice 11.0 | 14.7    |
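
The sketch below shows one way such a greedy-decoding WER figure could be reproduced with `transformers`, `datasets`, and `evaluate`. The model ID is a placeholder and the text normalization used for the reported numbers is not specified here, so treat this as an illustration rather than the exact evaluation script.

```python
# Illustrative sketch: WER with greedy decoding (beam size 1) on Common Voice 11.0 (et).
# The model ID is a placeholder; results may differ from the table above depending
# on text normalization, which is not specified in this card.
import torch
import evaluate
from datasets import Audio, load_dataset
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="your-org/whisper-medium-et",  # placeholder, replace with the real repo ID
    device=0 if torch.cuda.is_available() else -1,
)

# Estonian test split of Common Voice 11.0 (gated dataset: accept its terms on the Hub first).
ds = load_dataset("mozilla-foundation/common_voice_11_0", "et", split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

wer = evaluate.load("wer")
predictions, references = [], []
for sample in ds:
    # num_beams=1 means greedy decoding, matching the setup described above.
    out = pipe(sample["audio"]["array"], generate_kwargs={"num_beams": 1})
    predictions.append(out["text"])
    references.append(sample["sentence"])

print("WER (%):", 100 * wer.compute(predictions=predictions, references=references))
```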