---
license: apache-2.0
language: ja
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: CommonVoice 8.0 (Test Split)
  src: >-
    https://huggingface.co/datasets/japanese-asr/ja_asr.common_voice_8_0/resolve/main/sample.flac
- example_title: JSUT Basic 5000
  src: >-
    https://huggingface.co/datasets/japanese-asr/ja_asr.jsut_basic5000/resolve/main/sample.flac
- example_title: ReazonSpeech (Test Split)
  src: >-
    https://huggingface.co/datasets/japanese-asr/ja_asr.reazonspeech_test/resolve/main/sample.flac
pipeline_tag: automatic-speech-recognition
metrics:
- wer
model-index:
- name: kotoba-tech/kotoba-whisper-v1.0
  results:
  - task:
      type: automatic-speech-recognition
    dataset:
      name: CommonVoice_8.0 (Japanese)
      type: japanese-asr/ja_asr.common_voice_8_0
    metrics:
    - name: WER
      type: WER
      value: 59.27
    - name: CER
      type: CER
      value: 9.44
  - task:
      type: automatic-speech-recognition
    dataset:
      name: ReazonSpeech (Test)
      type: japanese-asr/ja_asr.reazonspeech_test
    metrics:
    - name: WER
      type: WER
      value: 56.62
    - name: CER
      type: CER
      value: 12.60
  - task:
      type: automatic-speech-recognition
    dataset:
      name: JSUT Basic5000
      type: japanese-asr/ja_asr.jsut_basic5000
    metrics:
    - name: WER
      type: WER
      value: 64.36
    - name: CER
      type: CER
      value: 8.48
---

# Kotoba-Whisper
_Kotoba-Whisper_ is a collection of distilled [Whisper](https://arxiv.org/abs/2212.04356) models for Japanese ASR, developed through a collaboration between
[Asahi Ushio](https://asahiushio.com) and [Kotoba Technologies](https://twitter.com/kotoba_tech).
Following the original work of distil-whisper ([Robust Knowledge Distillation via Large-Scale Pseudo Labelling](https://arxiv.org/abs/2311.00430)),
we employ OpenAI's [Whisper large-v3](https://huggingface.co/openai/whisper-large-v3) as the teacher model; the student model consists of the full encoder of the
teacher large-v3 model and a two-layer decoder initialized from the first and last layers of the large-v3 decoder.
Kotoba-Whisper is **6.3x faster than large-v3**, while retaining error rates as low as those of large-v3.

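To make the encoder/decoder split concrete, the snippet below is a minimal sketch (not part of the official training or evaluation code) that inspects the student configuration through the standard `transformers` `AutoConfig` API; for the architecture described above it should report the full 32 encoder layers of large-v3 and 2 decoder layers.

```python
from transformers import AutoConfig

# Minimal sketch: inspect the distilled student's layer counts.
config = AutoConfig.from_pretrained("kotoba-tech/kotoba-whisper-v1.0")
print("encoder layers:", config.encoder_layers)  # full large-v3 encoder
print("decoder layers:", config.decoder_layers)  # distilled two-layer decoder
```
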
As the initial version, we release ***kotoba-whisper-v1.0***, trained on the `large` subset of [ReazonSpeech](https://huggingface.co/datasets/reazon-research/reazonspeech)
(the largest paired speech-transcription dataset in Japanese, extracted from Japanese TV audio recordings),
which amounts to 1,253 hours of audio with 16,861,235 characters of transcriptions (on average, 5 seconds of audio and 18 text tokens per sample) after
transcriptions with a WER above 10 are removed (see [WER Filter](https://huggingface.co/distil-whisper/distil-large-v3#wer-filter) for details).
The model was trained for 8 epochs with a batch size of 256 at a sampling rate of 16 kHz, and the training and evaluation code to reproduce kotoba-whisper is available at [https://github.com/kotoba-tech/kotoba-whisper](https://github.com/kotoba-tech/kotoba-whisper).

Kotoba-whisper-v1.0 achieves better CER and WER than [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the in-domain held-out test set
from ReazonSpeech, and competitive CER and WER on out-of-domain test sets, including [JSUT basic 5000](https://sites.google.com/site/shinnosuketakamichi/publication/jsut) and
the Japanese subset of [CommonVoice 8.0](https://huggingface.co/datasets/common_voice) (see [Evaluation](#evaluation) for details).

- ***CER***

| Model                                                                                          | CommonVoice 8.0 (Japanese) | JSUT Basic 5000 | ReazonSpeech Test |
|:-----------------------------------------------------------------------------------------------|---------------------------:|----------------:|------------------:|
| [**kotoba-tech/kotoba-whisper-v1.0**](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0)   |                       9.44 |            8.48 |         **12.60** |
| [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3)                       |                   **8.52** |        **7.18** |             15.18 |
| [openai/whisper-medium](https://huggingface.co/openai/whisper-medium)                           |                      11.34 |            9.87 |             29.56 |
| [openai/whisper-small](https://huggingface.co/openai/whisper-small)                             |                      15.26 |           14.22 |             34.29 |
| [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny)                               |                      46.86 |           35.69 |             96.69 |

- ***WER***

| Model                                                                                          | CommonVoice 8.0 (Japanese) | JSUT Basic 5000 | ReazonSpeech Test |
|:-----------------------------------------------------------------------------------------------|---------------------------:|----------------:|------------------:|
| [**kotoba-tech/kotoba-whisper-v1.0**](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0)   |                      59.27 |           64.36 |         **56.62** |
| [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3)                       |                  **55.41** |       **59.34** |             60.23 |
| [openai/whisper-medium](https://huggingface.co/openai/whisper-medium)                           |                      63.64 |           69.52 |             76.04 |
| [openai/whisper-small](https://huggingface.co/openai/whisper-small)                             |                      74.21 |           82.02 |             82.99 |
| [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny)                               |                      93.78 |           97.72 |             94.85 |

- ***Latency***: As kotoba-whisper uses the same architecture as [distil-whisper/distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3),
it inherits the benefit of the improved latency compared to [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3)
(**6.3x faster than large-v3**; see the table below, taken from [distil-whisper/distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3)).

| Model                                                                                          | Params / M | Rel. Latency |
|------------------------------------------------------------------------------------------------|------------|--------------|
| **[kotoba-tech/kotoba-whisper-v1.0](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0)**   | **756**    | **6.3**      |
| [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3)                       | 1550       | 1.0          |


## Transformers Usage
Kotoba-Whisper is supported in the Hugging Face 🤗 Transformers library from version 4.39 onwards. To run the model, first
install the latest version of Transformers:

```bash
pip install --upgrade pip
pip install --upgrade transformers accelerate
```

### Short-Form Transcription
The model can be used with the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
class to transcribe short-form audio files (< 30 seconds) as follows:

```python
import torch
from transformers import pipeline
from datasets import load_dataset

# config
model_id = "kotoba-tech/kotoba-whisper-v1.0"
torch_dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
generate_kwargs = {"language": "japanese", "task": "transcribe"}

# load model
pipe = pipeline(
    "automatic-speech-recognition",
    model=model_id,
    torch_dtype=torch_dtype,
    device=device,
    model_kwargs=model_kwargs
)

# load sample audio
dataset = load_dataset("japanese-asr/ja_asr.reazonspeech_test", split="test")
sample = dataset[0]["audio"]

# run inference
result = pipe(sample, generate_kwargs=generate_kwargs)
print(result["text"])
```

- To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline (make sure the audio is sampled at 16 kHz):
```diff
- result = pipe(sample, generate_kwargs=generate_kwargs)
+ result = pipe("audio.mp3", generate_kwargs=generate_kwargs)
```

- For segment-level timestamps, pass the argument `return_timestamps=True` and return the `"chunks"` output:
```python
result = pipe(sample, return_timestamps=True, generate_kwargs=generate_kwargs)
print(result["chunks"])
```

***Sequential Long-Form:*** Kotoba-whisper is designed to be compatible with OpenAI's sequential long-form transcription algorithm. This algorithm uses a sliding window for buffered
inference of long audio files (> 30 seconds), and returns more accurate transcriptions compared to the [chunked long-form algorithm](#chunked-long-form).
By default, if a long audio file is passed to the model, it is transcribed with the sequential long-form algorithm.
The sequential long-form algorithm should be used in either of the following scenarios:

1. Transcription accuracy is the most important factor, and latency is less of a consideration
2. You are transcribing **batches** of long audio files, in which case the latency of sequential is comparable to chunked, while being up to 0.5% WER more accurate

If you are transcribing single long audio files and latency is the most important factor, you should use the chunked algorithm
described [below](#chunked-long-form). For a detailed explanation of the different algorithms, refer to Section 5 of
the [Distil-Whisper paper](https://arxiv.org/pdf/2311.00430.pdf). The [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
class can be used to transcribe long audio files with the sequential algorithm as shown in the sketch below.

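The following is a minimal sketch of sequential long-form usage: it reuses the configuration of the short-form example above and simply passes audio longer than 30 seconds (built here by concatenating test-set instances), relying on the default long-form behaviour described above rather than any additional parameters.

```python
import torch
import numpy as np
from transformers import pipeline
from datasets import load_dataset

# config (same as the short-form example)
model_id = "kotoba-tech/kotoba-whisper-v1.0"
torch_dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
generate_kwargs = {"language": "japanese", "task": "transcribe"}

# load model (no `chunk_length_s`, so long inputs fall back to sequential decoding)
pipe = pipeline(
    "automatic-speech-recognition",
    model=model_id,
    torch_dtype=torch_dtype,
    device=device,
    model_kwargs=model_kwargs
)

# build a long (> 30 seconds) sample by concatenating test instances
dataset = load_dataset("japanese-asr/ja_asr.reazonspeech_test", split="test")
sample = {
    "array": np.concatenate([i["array"] for i in dataset[:20]["audio"]]),
    "sampling_rate": dataset[0]["audio"]["sampling_rate"],
}

# run inference
result = pipe(sample, generate_kwargs=generate_kwargs)
print(result["text"])
```
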
### Chunked Long-Form
This algorithm should be used when a single large audio file is being transcribed and the fastest possible inference is required. In such circumstances,
the chunked algorithm is up to 9x faster than OpenAI's sequential long-form implementation (see Table 7 of the [Distil-Whisper paper](https://arxiv.org/pdf/2311.00430.pdf)).
To enable chunking, pass the `chunk_length_s` parameter to the `pipeline`. For distil-large-v3, a chunk length of 25 seconds
is optimal. To activate batching over long audio files, pass the argument `batch_size`:

```python
import torch
import numpy as np
from transformers import pipeline
from datasets import load_dataset

# config
model_id = "kotoba-tech/kotoba-whisper-v1.0"
torch_dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
generate_kwargs = {"language": "japanese", "task": "transcribe"}

# load model
pipe = pipeline(
    "automatic-speech-recognition",
    model=model_id,
    torch_dtype=torch_dtype,
    device=device,
    model_kwargs=model_kwargs,
    chunk_length_s=15,
    batch_size=16
)

# load sample audio (concatenate instances to create a long audio)
dataset = load_dataset("japanese-asr/ja_asr.reazonspeech_test", split="test")
sample = {"array": np.concatenate([i["array"] for i in dataset[:20]["audio"]]), "sampling_rate": dataset[0]["audio"]["sampling_rate"]}

# run inference
result = pipe(sample, generate_kwargs=generate_kwargs)
print(result["text"])
```


### Additional Speed & Memory Improvements
You can apply additional optimisations to further reduce inference time and VRAM
requirements. These optimisations primarily target the attention kernel, swapping it from an eager implementation to a
more efficient flash attention version.

#### Flash Attention 2

We recommend using [Flash-Attention 2](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#flashattention-2)
if your GPU allows for it. To do so, you first need to install [Flash Attention](https://github.com/Dao-AILab/flash-attention):

```bash
pip install flash-attn --no-build-isolation
```

Then pass `attn_implementation="flash_attention_2"` to `from_pretrained`:

```diff
- model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
+ model_kwargs = {"attn_implementation": "flash_attention_2"} if torch.cuda.is_available() else {}
```


## Model Details
See [https://huggingface.co/distil-whisper/distil-large-v3#model-details](https://huggingface.co/distil-whisper/distil-large-v3#model-details).

## Evaluation
The following code snippet demonstrates how to evaluate the kotoba-whisper model on the held-out test set from ReazonSpeech.
First, we need to install the required packages, including 🤗 Datasets to load the audio data, and 🤗 Evaluate to
perform the error-rate calculation:

```bash
pip install --upgrade pip
pip install --upgrade transformers datasets[audio] evaluate jiwer
```

Evaluation can then be run end-to-end with the following example:

```python
import torch
from transformers import pipeline
from datasets import load_dataset
from evaluate import load
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

# model config
model_id = "kotoba-tech/kotoba-whisper-v1.0"
torch_dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
generate_kwargs = {"language": "japanese", "task": "transcribe"}
normalizer = BasicTextNormalizer()

# data config
dataset_name = "japanese-asr/ja_asr.reazonspeech_test"
audio_column = 'audio'
text_column = 'transcription'

# load model
pipe = pipeline(
    "automatic-speech-recognition",
    model=model_id,
    torch_dtype=torch_dtype,
    device=device,
    model_kwargs=model_kwargs,
    batch_size=16
)

# load the dataset (audio is sampled at 16kHz) and run inference
dataset = load_dataset(dataset_name, split="test")
transcriptions = pipe(dataset[audio_column], generate_kwargs=generate_kwargs)
transcriptions = [normalizer(i['text']).replace(" ", "") for i in transcriptions]
references = [normalizer(i).replace(" ", "") for i in dataset[text_column]]

# compute the CER metric
cer_metric = load("cer")
cer = 100 * cer_metric.compute(predictions=transcriptions, references=references)
print(cer)
```
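
To obtain a WER figure alongside the CER, the lines below are a small add-on sketch that reuses the objects defined in the snippet above (`pipe`, `dataset`, `normalizer`, `generate_kwargs`, `audio_column`, `text_column`, and `load` from 🤗 Evaluate) with the `wer` metric; the WER values reported in this card may rely on a different text segmentation, so treat this as a rough check only.

```python
# Add-on sketch: compute WER on the same data, skipping the space-removal step.
# For simplicity this re-runs inference; reuse cached predictions in practice.
wer_metric = load("wer")
predictions = [normalizer(i["text"]) for i in pipe(dataset[audio_column], generate_kwargs=generate_kwargs)]
references = [normalizer(i) for i in dataset[text_column]]
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(wer)
```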

The Hugging Face links to the major Japanese ASR datasets for evaluation are summarized [here](https://huggingface.co/collections/japanese-asr/japanese-asr-evaluation-dataset-66051a03d6ca494d40baaa26).
For example, to evaluate the model on JSUT Basic5000, change the `dataset_name`:

```diff
- dataset_name = "japanese-asr/ja_asr.reazonspeech_test"
+ dataset_name = "japanese-asr/ja_asr.jsut_basic5000"
```

## Acknowledgements
* [OpenAI](https://openai.com/) for the Whisper [model](https://huggingface.co/openai/whisper-large-v3).
* Hugging Face 🤗 [Transformers](https://github.com/huggingface/transformers) for the model integration.
* Hugging Face 🤗 for the [Distil-Whisper codebase](https://github.com/huggingface/distil-whisper).
* [Reazon Human Interaction Lab](https://research.reazon.jp/) for the [ReazonSpeech dataset](https://huggingface.co/datasets/reazon-research/reazonspeech).