trip-fontaine committed
Commit ae5214a
Parent: 85e88e4
update readme
README.md
CHANGED
@@ -79,7 +79,7 @@ Distil-Whisper for English Automatic Speech Recognition (ASR) was proposed in th
This is the knowledge distilled version of OpenAI's [Whisper large-v3](https://huggingface.co/openai/whisper-large-v3) for French ASR.

-The result is a distilled model that performs within **2% WER of [Whisper large-v3](https://huggingface.co/openai/whisper-large-v3)** on out-of-distribution evaluation sets for both short-form and long-form transcription. Moreover, it is **5.9x** faster than [Whisper large-v3](https://huggingface.co/openai/whisper-large-v3) and **1.3x** faster than the tiniest version of Whisper while being
+The result is a distilled model that performs within **2% WER of [Whisper large-v3](https://huggingface.co/openai/whisper-large-v3)** on out-of-distribution evaluation sets for both short-form and long-form transcription. Moreover, it is **5.9x** faster than [Whisper large-v3](https://huggingface.co/openai/whisper-large-v3) and **1.3x** faster than the tiniest version of Whisper while being substantially more accurate.

| Model | Params (M) | Rel. Latency | Short-Form WER | Long-Form WER |
| :--------------------- | :--------: | :----------: | :------------: | :-----------: |
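
As context for the README text above, here is a minimal usage sketch of loading a distilled Whisper checkpoint for French transcription with the 🤗 Transformers `pipeline` API. The model id below is a placeholder assumption, not something stated in this commit; substitute the actual repository id of the checkpoint.

```python
# Minimal sketch: transcribe French audio with a distilled Whisper checkpoint.
# NOTE: "your-org/distil-whisper-large-v3-fr" is a placeholder model id, not
# confirmed by this commit; replace it with the real repository id.
import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"

asr = pipeline(
    "automatic-speech-recognition",
    model="your-org/distil-whisper-large-v3-fr",  # placeholder assumption
    torch_dtype=torch.float16 if device != "cpu" else torch.float32,
    device=device,
)

# Any audio format ffmpeg can decode works as input here.
result = asr("sample_fr.wav")
print(result["text"])
```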