Wav2Vec2-Base-VoxPopuli

Facebook's Wav2Vec2 base model, pretrained on the es (Spanish) unlabeled subset of the VoxPopuli corpus.
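
Since this is a pretrained-only checkpoint (it has no fine-tuned head), it is intended to be fine-tuned for a downstream task or used to extract speech representations. The snippet below is a minimal sketch using the 🤗 Transformers API; the zero waveform is a placeholder standing in for real 16 kHz Spanish audio.

```python
# Minimal sketch: using the pretrained encoder (no CTC head) to extract speech
# representations. The zero waveform is a placeholder for real 16 kHz audio;
# in practice, preprocess real recordings with a matching Wav2Vec2FeatureExtractor.
import torch
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-es-voxpopuli")
model.eval()

waveform = torch.zeros(1, 16000)  # one second of 16 kHz audio, batch size 1
with torch.no_grad():
    outputs = model(waveform)

hidden_states = outputs.last_hidden_state  # (batch, frames, 768) contextual representations
```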

Paper: VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation

Authors: Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux from Facebook AI

See the official VoxPopuli repository for more information: https://github.com/facebookresearch/voxpopuli

Fine-Tuning

Please refer to the blog post "Fine-Tune XLSR-Wav2Vec2 for low-resource ASR with 🤗 Transformers" (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for how to fine-tune this model on a specific language. Note that you should replace "facebook/wav2vec2-large-xlsr-53" with this checkpoint, facebook/wav2vec2-base-es-voxpopuli, when fine-tuning; a setup sketch is shown below.
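
A minimal sketch of that swap, assuming a character-level CTC vocabulary file (vocab.json) built from your own labeled Spanish data as described in the blog post. The file path, special tokens, and feature-extractor settings below are placeholder assumptions, not assets shipped with this checkpoint.

```python
# Minimal fine-tuning setup sketch, following the referenced blog post.
# "vocab.json", the special tokens, and the feature-extractor settings are
# assumptions taken from that post, not files shipped with this checkpoint;
# build the vocabulary from your own labeled Spanish data.
from transformers import (
    Wav2Vec2CTCTokenizer,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2Processor,
    Wav2Vec2ForCTC,
)

tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1,
    sampling_rate=16000,
    padding_value=0.0,
    do_normalize=True,           # as in the blog post; verify against this checkpoint's preprocessing
    return_attention_mask=True,
)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

# Load this checkpoint instead of "facebook/wav2vec2-large-xlsr-53";
# the randomly initialized CTC head is sized to your vocabulary.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-base-es-voxpopuli",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
model.freeze_feature_encoder()  # the convolutional feature encoder is commonly frozen during fine-tuning
```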
