---
language: el
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Greek by Lighteternal
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice el
      type: common_voice
      args: el
    metrics:
    - name: Test WER
      type: wer
      value: 10.497628
    - name: Test CER
      type: cer
      value: 2.87526
---
# Greek (el) version of the XLSR-Wav2Vec2 automatic speech recognition (ASR) model

By the Hellenic Army Academy and the Technical University of Crete

- language: el
- license: apache-2.0
- dataset: CommonVoice (EL), 364MB: https://commonvoice.mozilla.org/el/datasets + CSS10 (EL), 1.22GB: https://github.com/Kyubyong/css10
- model: XLSR-Wav2Vec2, trained for 50 epochs
- metrics: Word Error Rate (WER)
## Model description
UPDATE: We repeated the fine-tuning process using an additional 1.22GB dataset from CSS10.
Wav2Vec2 is a pretrained model for Automatic Speech Recognition (ASR), released in September 2020 by Alexei Baevski, Michael Auli, and Alex Conneau. Soon after the superior performance of Wav2Vec2 was demonstrated on the English ASR dataset LibriSpeech, Facebook AI presented XLSR-Wav2Vec2. XLSR stands for cross-lingual speech representations and refers to XLSR-Wav2Vec2's ability to learn speech representations that are useful across multiple languages.
Like Wav2Vec2, XLSR-Wav2Vec2 learns powerful speech representations from hundreds of thousands of hours of unlabeled speech in more than 50 languages. Similarly to BERT's masked language modeling, the model learns contextualized speech representations by randomly masking feature vectors before passing them to a transformer network.
This model was trained on Greek CommonVoice speech data (364MB) for 60 epochs on a single NVIDIA RTX 3080, for approx. 8 hours.
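To make the masking idea above concrete, here is a purely conceptual sketch, not the actual pretraining code: random time spans of the feature frames are hidden before they reach the transformer. The function name, masking probability, and span length are illustrative assumptions; the real wav2vec2 objective replaces masked frames with a learned mask embedding and trains with a contrastive loss over quantized targets, both omitted here.

```python
# Conceptual illustration of span masking of feature frames (not the actual pretraining code).
import torch

def mask_feature_spans(features: torch.Tensor, mask_prob: float = 0.065, span_length: int = 10) -> torch.Tensor:
    """features: (batch, time, dim). Returns a copy with random time spans zeroed out."""
    batch, time, _ = features.shape
    masked = features.clone()
    num_spans = max(1, int(mask_prob * time))
    for b in range(batch):
        starts = torch.randint(0, max(1, time - span_length), (num_spans,)).tolist()
        for s in starts:
            masked[b, s:s + span_length, :] = 0.0  # the real model uses a learned mask embedding
    return masked

frames = torch.randn(1, 200, 768)         # 1 utterance, 200 frames, 768-dim features
print(mask_feature_spans(frames).shape)   # torch.Size([1, 200, 768])
```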
## How to use for inference
For a live demo, make sure that the speech files are sampled at 16 kHz.
Instructions for testing on CommonVoice extracts are provided in ASR_Inference.ipynb. The snippet is also available below:
```python
#!/usr/bin/env python
# coding: utf-8

# Loading dependencies and defining preprocessing functions
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric
import re
import torchaudio
import librosa
import numpy as np
import torch

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'

def remove_special_characters(batch):
    # strip punctuation and lower-case the reference transcript
    batch["text"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
    return batch
def speech_file_to_array_fn(batch):
    # read each audio file into a 1-D numpy array
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = speech_array[0].numpy()
    batch["sampling_rate"] = sampling_rate
    batch["target_text"] = batch["text"]
    return batch

def resample(batch):
    # CommonVoice clips are 48 kHz; the model expects 16 kHz
    batch["speech"] = librosa.resample(np.asarray(batch["speech"]), 48_000, 16_000)
    batch["sampling_rate"] = 16_000
    return batch

def prepare_dataset(batch):
    # check that all files have the correct sampling rate
    assert (
        len(set(batch["sampling_rate"])) == 1
    ), f"Make sure all inputs have the same sampling rate of {processor.feature_extractor.sampling_rate}."
    batch["input_values"] = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0]).input_values
    with processor.as_target_processor():
        batch["labels"] = processor(batch["target_text"]).input_ids
    return batch
# Loading model and dataset processor
model = Wav2Vec2ForCTC.from_pretrained("lighteternal/wav2vec2-large-xlsr-53-greek").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("lighteternal/wav2vec2-large-xlsr-53-greek")
# Preparing speech dataset to be suitable for inference
common_voice_test = load_dataset("common_voice", "el", split="test")
common_voice_test = common_voice_test.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "segment", "up_votes"])
common_voice_test = common_voice_test.map(remove_special_characters, remove_columns=["sentence"])
common_voice_test = common_voice_test.map(speech_file_to_array_fn, remove_columns=common_voice_test.column_names)
common_voice_test = common_voice_test.map(resample, num_proc=8)
common_voice_test = common_voice_test.map(prepare_dataset, remove_columns=common_voice_test.column_names, batch_size=8, num_proc=8, batched=True)
# Loading the test dataset again, keeping the original reference transcriptions for comparison
common_voice_test_transcription = load_dataset("common_voice", "el", split="test")
# Performing inference on a random sample. Change the "example" value to try inference on different CommonVoice extracts
example = 123

input_dict = processor(common_voice_test["input_values"][example], return_tensors="pt", sampling_rate=16_000, padding=True)
logits = model(input_dict.input_values.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)

print("Prediction:")
print(processor.decode(pred_ids[0]))
# πού θέλεις να πάμε ρώτησε φοβισμένα ο βασιλιάς

print("\nReference:")
print(common_voice_test_transcription["sentence"][example].lower())
# πού θέλεις να πάμε; ρώτησε φοβισμένα ο βασιλιάς.
```
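If you want to transcribe your own recording rather than a CommonVoice extract, a minimal sketch could look like the following. The file name my_recording.wav is a hypothetical placeholder, and librosa is used here only to load and resample the audio to the 16 kHz the model expects:

```python
# Minimal single-file inference sketch (CPU); "my_recording.wav" is a placeholder path.
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("lighteternal/wav2vec2-large-xlsr-53-greek")
model = Wav2Vec2ForCTC.from_pretrained("lighteternal/wav2vec2-large-xlsr-53-greek")

# librosa resamples to 16 kHz on load, matching the model's expected sampling rate
speech, _ = librosa.load("my_recording.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.decode(pred_ids[0]))
```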
## Evaluation
The model can be evaluated as follows on the Greek test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "el", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("lighteternal/wav2vec2-large-xlsr-53-greek")
model = Wav2Vec2ForCTC.from_pretrained("lighteternal/wav2vec2-large-xlsr-53-greek")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the dataset:
# read the audio files as arrays, normalize the text, and resample to 16 kHz
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
# Running batched inference and collecting the predicted strings
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
Test Result: 10.497628 %
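For reference, WER is the number of word-level substitutions, deletions, and insertions divided by the number of reference words. A toy sanity check with jiwer (the library behind the wer metric above) illustrates this; the two sentences below are made up for the example:

```python
# Toy WER example: one deleted word out of 8 reference words gives WER = 0.125
from jiwer import wer

reference = "πού θέλεις να πάμε ρώτησε φοβισμένα ο βασιλιάς"
hypothesis = "πού θέλεις να πάμε ρώτησε ο βασιλιάς"  # "φοβισμένα" deleted
print(wer(reference, hypothesis))  # 1 / 8 = 0.125
```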
## How to use for training
Instructions and code to replicate the process are provided in the Fine_Tune_XLSR_Wav2Vec2_on_Greek_ASR_with_🤗_Transformers.ipynb notebook.
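For orientation, the sketch below outlines the fine-tuning setup, adapted from the XLSR fine-tuning tutorial linked in the Acknowledgement. It is not the notebook's exact code: the hyperparameters and output directory are placeholder assumptions, the processor is loaded from the finished Greek checkpoint as a shortcut (the notebook builds the vocabulary from the data), and train_dataset / eval_dataset are assumed to have been prepared with prepare_dataset() as shown earlier.

```python
# Minimal fine-tuning sketch; hyperparameters are placeholders, not the exact values used.
from dataclasses import dataclass
from typing import Dict, List, Union

import torch
from transformers import Trainer, TrainingArguments, Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("lighteternal/wav2vec2-large-xlsr-53-greek")

@dataclass
class DataCollatorCTCWithPadding:
    # pads audio inputs and label sequences independently, as in the tutorial
    processor: Wav2Vec2Processor
    padding: Union[bool, str] = True

    def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
        input_features = [{"input_values": f["input_values"]} for f in features]
        label_features = [{"input_ids": f["labels"]} for f in features]
        batch = self.processor.pad(input_features, padding=self.padding, return_tensors="pt")
        with self.processor.as_target_processor():
            labels_batch = self.processor.pad(label_features, padding=self.padding, return_tensors="pt")
        # replace label padding with -100 so it is ignored by the CTC loss
        batch["labels"] = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
        return batch

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",          # multilingual XLSR base checkpoint
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
model.freeze_feature_extractor()                 # keep the convolutional feature encoder frozen

training_args = TrainingArguments(
    output_dir="./wav2vec2-xlsr-greek",          # hypothetical output directory
    per_device_train_batch_size=8,
    num_train_epochs=50,
    learning_rate=3e-4,
    warmup_steps=500,
    fp16=True,
    evaluation_strategy="steps",
    save_steps=400,
    eval_steps=400,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=DataCollatorCTCWithPadding(processor=processor, padding=True),
    train_dataset=train_dataset,                 # assumed: prepared with prepare_dataset()
    eval_dataset=eval_dataset,
    tokenizer=processor.feature_extractor,
)
trainer.train()
```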
## Metrics
| Metric | Value |
|---|---|
| Training Loss | 0.0545 |
| Validation Loss | 0.1661 |
| CER on CommonVoice Test (%) \* | 2.8753 |
| WER on CommonVoice Test (%) \* | 10.4976 |

\* Reference transcripts were lower-cased and stripped of punctuation and special characters.
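The evaluation snippet above reports WER only. CER can be obtained the same way by swapping the metric; the minimal sketch below assumes a datasets version that ships the cer metric (backed by jiwer) and reuses the result variable from the evaluation code:

```python
# Compute CER over the same predictions and references produced by evaluate() above
from datasets import load_metric

cer = load_metric("cer")
print("CER: {:2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```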
## Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number: 50, 2nd call).

Based on the tutorial by Patrick von Platen: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
The original Colab notebook is available here: https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_%F0%9F%A4%97_Transformers.ipynb#scrollTo=V7YOT2mnUiea