|
---
license: mit
datasets:
- mozilla-foundation/common_voice_16_1
language:
- es
library_name: transformers
pipeline_tag: automatic-speech-recognition
tags:
- spanish
- español
- speech
- recognition
- whisper
- distil-whisper
---
|
|
|
# distil-whisper-large-v3-es |
|
This is the repository for a distilled version of the [Whisper large-v3 model](https://huggingface.co/openai/whisper-large-v3), trained on the Spanish split of the [Mozilla Common Voice dataset v16.1](https://huggingface.co/datasets/mozilla-foundation/common_voice_16_1).
|
This model was made possible through a collaboration between [SandboxAI](https://sandbox-ai.github.io) and the [Universidad Nacional de Rio Negro](https://www.unrn.edu.ar/home).
|
|
|
## Usage |
|
|
|
Distil-Whisper is supported in Hugging Face 🤗 Transformers from version 4.35 onwards. To run the model, first
install the latest version of the Transformers library. For this example, we'll also install 🤗 Datasets to load a toy
audio dataset from the Hugging Face Hub:
|
|
|
```bash
pip install --upgrade pip
pip install --upgrade transformers accelerate datasets[audio]
```
|
|
|
### Short-Form Transcription |
|
|
|
The model can be used with the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
class to transcribe short-form audio files (< 30 seconds) as follows:
|
|
|
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "marianbasti/distil-whisper-large-v3-es"

# Load the distilled model and its processor
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    torch_dtype=torch_dtype,
    device=device,
)

# Load a toy audio sample from the Hugging Face Hub and transcribe it
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]["audio"]

result = pipe(sample)
print(result["text"])
```
|
|
|
To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline: |
|
```diff
- result = pipe(sample)
+ result = pipe("audio.mp3")
```
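
The pipeline also accepts audio that is already loaded in memory. A minimal sketch, assuming a 16 kHz mono recording available as a NumPy array and reusing the `pipe` object created above (the array below is just a placeholder):

```python
import numpy as np

# Placeholder: one second of silence at 16 kHz. In practice, load your audio
# with a library such as soundfile or librosa beforehand.
audio_array = np.zeros(16000, dtype=np.float32)

# The ASR pipeline accepts a dict containing the raw waveform and its sampling rate
result = pipe({"array": audio_array, "sampling_rate": 16000})
print(result["text"])
```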
|
|
|
### Long-Form Transcription |
|
|
|
Distil-Whisper uses a chunked algorithm to transcribe long-form audio files (> 30 seconds). In practice, this chunked long-form algorithm
is 9x faster than the sequential algorithm proposed by OpenAI in the Whisper paper (see Table 7 of the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430)).

To enable chunking, pass the `chunk_length_s` parameter to the `pipeline`. For Distil-Whisper, a chunk length of 15 seconds
is optimal. To activate batching, pass the argument `batch_size`:
|
|
|
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "marianbasti/distil-whisper-large-v3-es"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    chunk_length_s=15,  # chunk long audio into 15-second windows
    batch_size=16,      # number of chunks transcribed in parallel
    torch_dtype=torch_dtype,
    device=device,
)

dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]

result = pipe(sample)
print(result["text"])
```
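
The pipeline can also return segment-level timestamps alongside the transcription, which is often useful for long-form audio. A small sketch, reusing the `pipe` and `sample` objects from the snippet above:

```python
# `return_timestamps=True` adds start/end timestamps for each transcribed segment
result = pipe(sample, return_timestamps=True)

print(result["text"])  # full transcription
for chunk in result["chunks"]:
    start, end = chunk["timestamp"]
    print(f"[{start} - {end}] {chunk['text']}")
```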
|
|
|
<!--- |
|
**Tip:** The pipeline can also be used to transcribe an audio file from a remote URL, for example: |
|
|
|
```python |
|
result = pipe("https://huggingface.co/datasets/sanchit-gandhi/librispeech_long/resolve/main/audio.wav") |
|
``` |
|
---> |
|
|
|
### Speculative Decoding |
|
|
|
Distil-Whisper can be used as an assistant model to Whisper for [speculative decoding](https://huggingface.co/blog/whisper-speculative-decoding).
Speculative decoding mathematically guarantees that exactly the same outputs as Whisper are obtained, while running about 2 times faster.
This makes it a perfect drop-in replacement for existing Whisper pipelines, since the same outputs are guaranteed.

In the following code snippet, we load the Distil-Whisper assistant model separately from the main Whisper model, and then
pass it as the "assistant model" for generation:
|
|
|
```python
from transformers import pipeline, AutoModelForCausalLM, AutoModelForSpeechSeq2Seq, AutoProcessor
import torch
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Distilled model used as the assistant (draft) model
assistant_model_id = "marianbasti/distil-whisper-large-v3-es"

assistant_model = AutoModelForCausalLM.from_pretrained(
    assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
assistant_model.to(device)

# Full Whisper large-v3 used as the main model
model_id = "openai/whisper-large-v3"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    generate_kwargs={"assistant_model": assistant_model},  # enables speculative decoding
    torch_dtype=torch_dtype,
    device=device,
)

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]["audio"]

result = pipe(sample)
print(result["text"])
```
|
## Training |
|
|
|
The model was trained for 60,000 optimisation steps (around 1.47 epochs) on a single RTX 3090 for ~60 hours, using the following training parameters:
|
```
--teacher_model_name_or_path "openai/whisper-large-v3"
--train_dataset_name "mozilla-foundation/common_voice_16_1"
--train_dataset_config_name "es"
--train_split_name "train"
--text_column_name "sentence"
--eval_dataset_name "mozilla-foundation/common_voice_16_1"
--eval_dataset_config_name "es"
--eval_split_name "validation"
--eval_text_column_name "sentence"
--eval_steps 10000
--save_steps 10000
--warmup_steps 500
--learning_rate 1e-4
--lr_scheduler_type "linear"
--logging_steps 25
--save_total_limit 1
--max_steps 60000
--wer_threshold 10
--per_device_train_batch_size 8
--per_device_eval_batch_size 8
--dataloader_num_workers 12
--preprocessing_num_workers 12
--output_dir "./"
--do_train
--do_eval
--gradient_checkpointing
--predict_with_generate
--overwrite_output_dir
--use_pseudo_labels "false"
--freeze_encoder
--streaming False
```
|
|
|
## Results |
|
|
|
The distilled model achieves a 5.11% WER (10.15% orthographic WER).
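
In Whisper-style evaluation, the orthographic WER is computed on the raw text (keeping casing and punctuation), while the plain WER figure is usually reported on normalized text. A minimal sketch of how both metrics can be computed with the 🤗 Evaluate library (this is not the exact evaluation script used for the numbers above, and the example strings are placeholders):

```python
# Requires: pip install evaluate jiwer
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()  # lowercases and strips punctuation

# Hypothetical reference transcripts and model predictions
references = ["Hola, ¿cómo estás?"]
predictions = ["hola cómo estás"]

# Orthographic WER: raw (cased, punctuated) text
orthographic_wer = wer_metric.compute(predictions=predictions, references=references)

# Normalized WER: text passed through the normalizer first
normalized_wer = wer_metric.compute(
    predictions=[normalizer(p) for p in predictions],
    references=[normalizer(r) for r in references],
)

print(f"orthographic WER: {100 * orthographic_wer:.2f}%, normalized WER: {100 * normalized_wer:.2f}%")
```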
|
|
|
## License |
|
|
|
Distil-Whisper inherits the [MIT license](https://github.com/huggingface/distil-whisper/blob/main/LICENSE) from OpenAI's Whisper model. |
|
|
|
## Citation |
|
|
|
If you use this model, please consider citing the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430): |
|
```
@misc{gandhi2023distilwhisper,
      title={Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling},
      author={Sanchit Gandhi and Patrick von Platen and Alexander M. Rush},
      year={2023},
      eprint={2311.00430},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```