Update README.md
README.md (CHANGED)
@@ -32,7 +32,7 @@ wget https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0-ggml/resolve/main/gg

```bash
wget https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0-ggml/resolve/main/sample_ja_speech.wav
-make -j && ./main -m models/ggml-kotoba-whisper-v2.0.bin -f sample_ja_speech.wav --output-file transcription --output-json
+make -j && ./main -m models/ggml-kotoba-whisper-v2.0.bin -l ja -f sample_ja_speech.wav --output-file transcription --output-json
```

Note that it runs only with 16-bit WAV files, so make sure to convert your input before running the tool. For example, you can use ffmpeg like this:
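(The README's own ffmpeg invocation falls just outside this hunk; as a rough sketch, with placeholder file names, a conversion to the 16 kHz mono 16-bit PCM WAV that `./main` expects looks like this:)

```bash
# Placeholder file names; resample to 16 kHz, mono, 16-bit PCM WAV.
ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav
```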
@@ -71,7 +71,7 @@ wget https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0-ggml/resolve/main/gg

Run inference on the sample audio:
```bash
-make -j && ./main -m models/ggml-kotoba-whisper-v2.0-q5_0.bin -f sample_ja_speech.wav --output-file transcription.quantized --output-json
+make -j && ./main -m models/ggml-kotoba-whisper-v2.0-q5_0.bin -l ja -f sample_ja_speech.wav --output-file transcription.quantized --output-json
```

Note that the benchmark results are almost identical to those of the raw non-quantized model weights.
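(Both commands pass --output-json, so a JSON file is written next to the text output, named after the --output-file base, e.g. transcription.json. Assuming whisper.cpp's usual JSON layout, with a top-level `transcription` array whose entries carry a `text` field, the recognized text can be pulled out like this:)

```bash
# Field names assume whisper.cpp's JSON output schema:
# a "transcription" array of segments, each with a "text" entry.
jq -r '.transcription[].text' transcription.json
```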