Update README.md
README.md CHANGED
@@ -55,9 +55,10 @@ We measure the inference speed of different kotoba-whisper-v2.0 implementations
 |audio 4 | 5.6 | 35 | 126 | 69 |

 Scripts to re-run the experiment can be found below:
-* [whisper.cpp](https://huggingface.co/kotoba-tech/kotoba-whisper-
-* [faster-whisper](https://huggingface.co/kotoba-tech/kotoba-whisper-
-* [hf pipeline](https://huggingface.co/kotoba-tech/kotoba-whisper-
+* [whisper.cpp](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0-ggml/blob/main/benchmark.sh)
+* [faster-whisper](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0-faster/blob/main/benchmark.sh)
+* [hf pipeline](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0/blob/main/benchmark.sh)
+*
 Also, currently whisper.cpp and faster-whisper support the [sequential long-form decoding](https://huggingface.co/distil-whisper/distil-large-v3#sequential-long-form),
 and only the Hugging Face pipeline supports the [chunked long-form decoding](https://huggingface.co/distil-whisper/distil-large-v3#chunked-long-form), which we empirically
 found better than the sequential long-form decoding.
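The chunked long-form decoding mentioned above splits a long recording into fixed-length, overlapping windows that can be transcribed independently and merged at the seams. A minimal sketch of the windowing idea, assuming a 30 s chunk with 5 s strides on each side (`chunk_boundaries` is a hypothetical helper for illustration, not the Hugging Face implementation):

```python
def chunk_boundaries(total_s, chunk_s=30.0, stride_s=5.0):
    """Return (start, end) windows covering total_s seconds of audio.

    Consecutive windows overlap by stride_s on each side, so transcripts
    from neighbouring chunks can be merged at the overlap.
    """
    step = chunk_s - 2 * stride_s  # how far each new window advances
    starts = []
    t = 0.0
    while t < total_s:
        starts.append(t)
        t += step
    return [(s, min(s + chunk_s, total_s)) for s in starts]

# A 65-second file is covered by four overlapping 30-second windows:
print(chunk_boundaries(65.0))
# → [(0.0, 30.0), (20.0, 50.0), (40.0, 65.0), (60.0, 65.0)]
```

In practice this windowing is handled for you by the Hugging Face pipeline via its `chunk_length_s` argument, e.g. `pipeline("automatic-speech-recognition", model=..., chunk_length_s=15)`; whisper.cpp and faster-whisper instead decode the file sequentially, one 30 s segment after another.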