asahi417 committed
Commit e3a0cf6
1 Parent(s): b224dfd

Update README.md

Files changed (1):
  1. README.md +4 -3
README.md CHANGED
@@ -55,9 +55,10 @@ We measure the inference speed of different kotoba-whisper-v2.0 implementations
 |audio 4 | 5.6 | 35 | 126 | 69 |
 
 Scripts to re-run the experiment can be found bellow:
-* [whisper.cpp](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0-ggml/blob/main/benchmark.sh)
-* [faster-whisper](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0-faster/blob/main/benchmark.sh)
-* [hf pipeline](https://huggingface.co/kotoba-tech/kotoba-whisper-v2.0/blob/main/benchmark.sh)
+* [whisper.cpp](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0-ggml/blob/main/benchmark.sh)
+* [faster-whisper](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0-faster/blob/main/benchmark.sh)
+* [hf pipeline](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0/blob/main/benchmark.sh)
+* 
 Also, currently whisper.cpp and faster-whisper support the [sequential long-form decoding](https://huggingface.co/distil-whisper/distil-large-v3#sequential-long-form),
 and only Huggingface pipeline supports the [chunked long-form decoding](https://huggingface.co/distil-whisper/distil-large-v3#chunked-long-form), which we empirically
 found better than the sequnential long-form decoding.
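The chunked long-form strategy referenced in the diff splits a long recording into fixed-length windows that overlap slightly, so the per-chunk transcripts can be reconciled at their edges when merged. A minimal sketch of that windowing arithmetic, assuming illustrative chunk and overlap lengths rather than the pipeline's actual internals:

```python
def chunk_boundaries(total_s: float, chunk_s: float = 30.0, stride_s: float = 5.0):
    """Yield (start, end) windows, in seconds, covering a long audio file.

    Consecutive windows overlap by `stride_s` seconds so that words cut at a
    chunk edge appear in two windows and can be merged after transcription.
    (Values here are illustrative, not the HF pipeline's defaults.)
    """
    step = chunk_s - stride_s
    start = 0.0
    while start < total_s:
        yield (start, min(start + chunk_s, total_s))
        start += step

# A 70-second file with 30 s chunks and 5 s overlap:
print(list(chunk_boundaries(70.0)))
# → [(0.0, 30.0), (25.0, 55.0), (50.0, 70.0)]
```

Because each window is independent, the chunks can be transcribed in parallel, which is why the chunked strategy can outperform sequential decoding on long audio.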