---
language:
- ml
tags:
- audio
- automatic-speech-recognition
license: mit
datasets:
- google/fleurs
- thennal/IMaSC
- mozilla-foundation/common_voice_11_0
library_name: ctranslate2
---

# vegam-whisper-medium-ml-fp16 (വേഗം)

> This model supports float16 (fp16) only.

This is a conversion of [thennal/whisper-medium-ml](https://huggingface.co/thennal/whisper-medium-ml) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.

This model can be used in CTranslate2 or projects based on CTranslate2, such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).

## Installation

- Install [faster-whisper](https://github.com/guillaumekln/faster-whisper). More installation details can be found in the [faster-whisper repository](https://github.com/guillaumekln/faster-whisper/tree/master#installation).

```bash
pip install faster-whisper
```

- Install [git-lfs](https://git-lfs.com/). Note that git-lfs is only needed to download the model from Hugging Face.

```bash
apt-get install git-lfs
```

- Download the model weights:

```bash
git lfs install
git clone https://huggingface.co/kurianbenoy/vegam-whisper-medium-ml-fp16
```

## Usage

```python
from faster_whisper import WhisperModel

model_path = "vegam-whisper-medium-ml-fp16"

# Run on GPU with FP16
model = WhisperModel(model_path, device="cuda", compute_type="float16")

segments, info = model.transcribe("audio.mp3", beam_size=5)

print("Detected language '%s' with probability %f" % (info.language, info.language_probability))

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
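Each item that `segments` yields carries `start`, `end`, and `text` attributes, so the transcription can be post-processed freely, for instance into an SRT subtitle file. A minimal sketch, written against plain `(start, end, text)` tuples so it runs standalone; the helper names are illustrative and not part of faster-whisper:

```python
def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 4.74 -> '00:00:04,740'."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return "%02d:%02d:%02d,%03d" % (h, m, s, ms)


def segments_to_srt(segments) -> str:
    """Render an iterable of (start, end, text) tuples as numbered SRT blocks."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append("%d\n%s --> %s\n%s\n" % (
            i, srt_timestamp(start), srt_timestamp(end), text.strip()))
    return "\n".join(blocks)


print(segments_to_srt([(0.0, 4.74, "പാലം കടുക്കുവോളം നാരായണ")]))
```

To use it with faster-whisper output, pass `[(s.start, s.end, s.text) for s in segments]`.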

## Example

```python
from faster_whisper import WhisperModel

model_path = "vegam-whisper-medium-ml-fp16"

model = WhisperModel(model_path, device="cuda", compute_type="float16")

segments, info = model.transcribe("00b38e80-80b8-4f70-babf-566e848879fc.webm", beam_size=5)

print("Detected language '%s' with probability %f" % (info.language, info.language_probability))

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```

> Detected language 'ta' with probability 0.353516

> [0.00s -> 4.74s] പാലം കടുക്കുവോളം നാരായണ പാലം കടന്നാലൊ കൂരായണ

Note: The audio file [00b38e80-80b8-4f70-babf-566e848879fc.webm](https://huggingface.co/kurianbenoy/vegam-whisper-medium-ml/blob/main/00b38e80-80b8-4f70-babf-566e848879fc.webm) is from the [Malayalam Speech Corpus](https://blog.smc.org.in/malayalam-speech-corpus/) and is stored alongside the model weights.

## Conversion Details

This conversion was made possible by the wonderful [CTranslate2](https://github.com/OpenNMT/CTranslate2) library, using its [Transformers converter for OpenAI Whisper](https://opennmt.net/CTranslate2/guides/transformers.html#whisper). The original model was converted with the following command:

```bash
ct2-transformers-converter --model thennal/whisper-medium-ml --output_dir vegam-whisper-medium-ml-fp16 \
    --quantization float16
```
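As rough context for the quantization choice: Whisper medium has about 769M parameters, so the size of the weights alone scales with bytes per parameter. Back-of-envelope arithmetic only, ignoring activations and runtime overhead:

```python
params = 769_000_000  # approximate parameter count of Whisper medium

# Approximate on-disk / in-memory size of the weights alone.
for name, bytes_per_param in [("float32", 4), ("float16", 2), ("int8", 1)]:
    print("%-8s ~%.1f GB" % (name, params * bytes_per_param / 1e9))
```

float16 roughly halves the float32 footprint while keeping GPU-friendly arithmetic; CTranslate2's int8 modes shrink the weights further at some potential accuracy cost.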

## Many Thanks To

- Creators of CTranslate2 and faster-whisper
- Thennal D K
- Santhosh Thottingal