kurianbenoy committed
Commit 894f34d · Parent: 0c5cc9f

Update README.md

Files changed (1):
  1. README.md +7 -8
README.md CHANGED
@@ -41,7 +41,7 @@ apt-get install git-lfs
 
 ```
 git lfs install
-git clone https://huggingface.co/kurianbenoy/vegam-whisper-medium-ml-fp16
+git clone https://huggingface.co/kurianbenoy/vegam-whisper-medium-ml-int8
 ```
 
 ## Usage
@@ -49,10 +49,9 @@ git clone https://huggingface.co/kurianbenoy/vegam-whisper-medium-ml-fp16
 ```
 from faster_whisper import WhisperModel
 
-model_path = "vegam-whisper-medium-ml-fp16"
+model_path = "vegam-whisper-medium-ml-int8"
 
-# Run on GPU with FP16
-model = WhisperModel(model_path, device="cuda", compute_type="float16")
+model = WhisperModel(model_path, device="cpu", compute_type="int8")
 
 segments, info = model.transcribe("audio.mp3", beam_size=5)
 
@@ -67,9 +66,9 @@ for segment in segments:
 ```
 from faster_whisper import WhisperModel
 
-model_path = "vegam-whisper-medium-ml-fp16"
+model_path = "vegam-whisper-medium-ml-int8"
 
-model = WhisperModel(model_path, device="cuda", compute_type="float16")
+model = WhisperModel(model_path, device="cpu", compute_type="int8")
 
 
 segments, info = model.transcribe("00b38e80-80b8-4f70-babf-566e848879fc.webm", beam_size=5)
@@ -90,8 +89,8 @@ Note: The audio file [00b38e80-80b8-4f70-babf-566e848879fc.webm](https://hugging
 This conversion was possible with wonderful [CTranslate2 library](https://github.com/OpenNMT/CTranslate2) leveraging the [Transformers converter for OpenAI Whisper](https://opennmt.net/CTranslate2/guides/transformers.html#whisper).The original model was converted with the following command:
 
 ```
-ct2-transformers-converter --model thennal/whisper-medium-ml --output_dir vegam-whisper-medium-ml-fp16 \
---quantization float16
+ct2-transformers-converter --model thennal/whisper-medium-ml --output_dir vegam-whisper-medium-ml-int8 \
+--quantization int8
 ```
 
 ## Many Thanks to
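
The commit swaps the FP16 (GPU) conversion for an INT8 (CPU) one. As a rough conceptual illustration of what `--quantization int8` does (this is a minimal sketch of symmetric per-tensor int8 quantization, not CTranslate2's actual implementation), each float weight tensor is mapped to 8-bit integers plus a scale factor, roughly halving model size versus FP16:

```python
# Illustrative sketch only: symmetric per-tensor int8 quantization,
# the general idea behind "--quantization int8" (w ~ q * scale, q in [-127, 127]).

def quantize_int8(weights):
    """Quantize a list of floats to int8 values plus one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [q * scale for q in quantized]

weights = [0.5, -1.27, 0.003, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Every quantized value fits in int8, and each restored weight is within
# half a quantization step of the original.
assert all(-127 <= x <= 127 for x in q)
assert all(abs(w - r) <= scale / 2 + 1e-9 for w, r in zip(weights, restored))
```

The trade-off is the usual one: int8 weights run efficiently on CPU (as in the updated `WhisperModel(model_path, device="cpu", compute_type="int8")` call) at the cost of a small, bounded rounding error per weight.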