---
license: apache-2.0
base_model:
- techiaith/wav2vec2-xlsr-53-ft-btb-cv-cy
tags:
- automatic-speech-recognition
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-btb-cv-ft-cv-cy
results: []
datasets:
- techiaith/commonvoice_18_0_cy
language:
- cy
pipeline_tag: automatic-speech-recognition
---
# wav2vec2-btb-cv-ft-cv-cy
This model is a version of [techiaith/wav2vec2-xlsr-53-ft-btb-cv-cy](https://huggingface.co/techiaith/wav2vec2-xlsr-53-ft-btb-cv-cy), fine-tuned with its encoder frozen on the techiaith/commonvoice_18_0_cy training set.
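The training script itself is not reproduced here, but freezing an encoder before fine-tuning in `transformers` typically looks like the following minimal sketch (the exact set of modules frozen during this model's training is an assumption):

```python
import torch
from transformers import Wav2Vec2ForCTC

# Load the base model that served as the starting point.
model = Wav2Vec2ForCTC.from_pretrained("techiaith/wav2vec2-xlsr-53-ft-btb-cv-cy")

# Freeze the wav2vec2 backbone so that only the remaining layers
# (e.g. the CTC head) receive gradient updates during fine-tuning.
# NOTE: illustrative only; the layers frozen in practice may differ.
for param in model.wav2vec2.parameters():
    param.requires_grad = False

# Sanity check: count the parameters that remain trainable.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```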
The fine-tuned model achieves the following results on the Welsh Common Voice version 18 standard test set:
- WER: 24.93
- CER: 6.55
However, when the accompanying KenLM language model is used, it achieves the following results on the same test set:
- WER: 15.30
- CER: 4.57
## Usage
Using the wav2vec2 acoustic model only (greedy CTC decoding):
```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("techiaith/wav2vec2-btb-cv-ft-cv-cy")
model = Wav2Vec2ForCTC.from_pretrained("techiaith/wav2vec2-btb-cv-ft-cv-cy")

audio_file = "speech.wav"  # path to your audio file (hypothetical example)
# Load and resample to the 16 kHz sampling rate the model expects.
audio, rate = librosa.load(audio_file, sr=16000)

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

# Greedy decoding: pick the most likely token at each frame.
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
```
Using the accompanying KenLM language model (which yields the lower WER reported above):
```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

processor = Wav2Vec2ProcessorWithLM.from_pretrained("techiaith/wav2vec2-btb-cv-ft-cv-cy")
model = Wav2Vec2ForCTC.from_pretrained("techiaith/wav2vec2-btb-cv-ft-cv-cy")

audio_file = "speech.wav"  # path to your audio file (hypothetical example)
# Load and resample to the 16 kHz sampling rate the model expects.
audio, rate = librosa.load(audio_file, sr=16000)

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

# CTC beam-search decoding rescored with the KenLM language model.
print("Prediction:", processor.batch_decode(logits.numpy()).text[0])
```