Commit 06ba4de (1 parent: 3760fd4), committed by patrickvonplaten

Update README.md

Files changed (1):
  1. README.md +5 -3
README.md CHANGED

@@ -14,9 +14,11 @@ license: apache-2.0
 ![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/xls_r.png)
 
 
-[Facebook's Wav2Vec2 XLS-R](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
+[Facebook's Wav2Vec2 XLS-R](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) containing **300 million** parameters.
 
-XLS-R is Facebook AI's large-scale multilingual pretrained model for speech (the "XLM-R for Speech"). It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL and VoxLingua107. Is uses the wav2vec 2.0 objective, in 128 languages. When using the model make sure that your speech input is sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Translation or Classification. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information about ASR.
+XLS-R is Facebook AI's large-scale multilingual pretrained model for speech (the "XLM-R for Speech"). It is pretrained on 436k hours of unlabeled speech, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107. It uses the wav2vec 2.0 objective, in 128 languages. When using the model make sure that your speech input is sampled at 16kHz.
+
+**Note**: This model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Translation, or Classification. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for more information about ASR.
 
 [XLS-R Paper](https://arxiv.org/abs/)
 

@@ -29,7 +31,7 @@ The original model can be found under https://github.com/pytorch/fairseq/tree/ma
 
 # Usage
 
-See [this notebook](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_%F0%9F%A4%97_Transformers.ipynb) for more information on how to fine-tune the model.
+See [this google colab](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLS_R_on_Common_Voice.ipynb) for more information on how to fine-tune the model.
 
 You can find other pretrained XLS-R models with different numbers of parameters:
 * [300M parameters version](https://huggingface.co/facebook/wav2vec2-xls-r-300m)
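
Since the updated card stresses that input must be sampled at 16 kHz and that the checkpoint ships without a task head, a minimal usage sketch may help. The snippet below is not part of the original card: it loads the 300M checkpoint linked above and extracts hidden states from a dummy 16 kHz waveform. The dummy waveform and the choice of `AutoFeatureExtractor`/`Wav2Vec2Model` are illustrative assumptions, not the card's prescribed workflow.

```python
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model

# Checkpoint taken from the card's "300M parameters version" link.
model_id = "facebook/wav2vec2-xls-r-300m"

feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)

# XLS-R expects 16 kHz mono audio; one second of silence stands in for
# real speech here. Resample real recordings to 16 kHz before this step.
waveform = torch.zeros(16_000)

inputs = feature_extractor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state

print(hidden.shape)  # (batch, frames, hidden_size); 1024-dim for the 300M model
```

For an actual downstream task such as ASR, the usual route is a headed class like `Wav2Vec2ForCTC.from_pretrained(model_id, ...)` fine-tuned over a task-specific vocabulary; the Colab linked in the diff walks through that end to end.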