---
library_name: transformers
license: mit
---
# Model Card for mhubert-base-25hz

This is a version of HuBERT by Meta, introduced in [TWIST](https://arxiv.org/abs/2305.13009), where it proved highly valuable as a speech tokeniser for training Speech Language Models (SpeechLMs).

These model weights were converted by SLP-RL from the original [textlesslib](https://github.com/facebookresearch/textlesslib) release.
## Model Details

### Model Description
This HuBERT model was introduced in [TWIST](https://arxiv.org/abs/2305.13009); we encourage you to look there for full details.
It was trained on a varied mixture of datasets: Multilingual LibriSpeech, VoxPopuli, Common Voice, Spotify, and Fisher. This HuBERT base model was trained for 3 iterations at the default 50Hz feature rate. For the fourth iteration, an additional convolutional layer with stride 2 was added to the CNN encoder, halving the feature rate to 25Hz.
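The 25Hz rate follows directly from the encoder's strides: the standard HuBERT base conv stack downsamples 16kHz audio by a factor of 320 (50Hz), and one extra stride-2 layer doubles that to 640 (25Hz). A quick sanity check (the seven default stride values are those of the standard HuBERT base encoder; the extra stride-2 layer is the one described above):

```python
import math

SAMPLE_RATE = 16_000  # HuBERT models expect 16 kHz audio

# Strides of the standard 7-layer HuBERT base CNN encoder
default_strides = [5, 2, 2, 2, 2, 2, 2]
# Extra stride-2 layer added for the fourth TWIST iteration
strides_25hz = default_strides + [2]

def feature_rate(strides, sample_rate=SAMPLE_RATE):
    """Frames per second after the conv stack's total downsampling."""
    downsample = math.prod(strides)
    return sample_rate / downsample

print(feature_rate(default_strides))  # 50.0
print(feature_rate(strides_25hz))     # 25.0
```

At 25Hz, each feature vector covers 40 ms of audio, half the temporal resolution (and sequence length) of the usual 50Hz features.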
We converted the original fairseq release to Hugging Face 🤗 Transformers using the conversion script (after adding support for the additional convolutional layer) and verified that the outputs are identical.
- **Developed by:** Hassid et al.
- **Shared by:** SLP-RL
- **Model type:** `transformers.HubertModel`
- **Languages:** Multilingual
- **License:** MIT; see the textlesslib license for full details
### Model Sources

- **Repository:** https://github.com/facebookresearch/textlesslib/tree/main/examples/twist
- **Paper:** https://arxiv.org/abs/2305.13009
## Uses

This is a base `HubertModel` and, as such, is useful as a feature extractor for speech tokenisation in tasks such as Spoken Language Modelling or Speaking Style Conversion.
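Speech tokenisation with a model like this typically means quantising its hidden features into discrete units, e.g. with k-means as in TWIST-style pipelines. A minimal sketch of the assignment step (the features and centroids below are random placeholders, not an actual codebook; real pipelines fit k-means on features extracted from a training corpus):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder stand-ins: in practice, `features` would be the HubertModel's
# hidden states and `centroids` a k-means codebook fit on a training corpus.
features = rng.normal(size=(25, 768))    # ~1 second of 25Hz features
centroids = rng.normal(size=(500, 768))  # e.g. a 500-unit codebook

# Assign each frame to its nearest centroid -> a sequence of discrete units
dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
units = dists.argmin(axis=1)

print(units.shape)  # one unit per 40 ms frame
```

The resulting unit sequence is what a SpeechLM is then trained on, in place of raw audio or text tokens.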
## How to Get Started with the Model

This model requires `transformers>=??`, so make sure you have a recent enough version installed. It can then be used as follows:
```python
from transformers import HubertModel

model = HubertModel.from_pretrained('slprl/mhubert-base-25hz')
```
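To inspect the expected input/output shapes without downloading the checkpoint, you can instantiate a randomly initialised model from a config. The conv layer settings below mimic the extra stride-2 layer described above; the final kernel size is an illustrative assumption, not necessarily the released model's configuration:

```python
import torch
from transformers import HubertConfig, HubertModel

# Random-weight sketch: one extra stride-2 conv layer is appended to the
# standard 7-layer HuBERT base encoder. The final kernel size (2) is an
# illustrative guess, not necessarily the released model's value.
config = HubertConfig(
    conv_dim=[512] * 8,
    conv_kernel=[10, 3, 3, 3, 3, 3, 3, 2],
    conv_stride=[5, 2, 2, 2, 2, 2, 2, 2],
)
model = HubertModel(config).eval()

waveform = torch.zeros(1, 16_000)  # 1 second of 16 kHz audio
with torch.no_grad():
    hidden = model(waveform).last_hidden_state

print(hidden.shape)  # (1, ~25, 768): roughly 25 frames per second
```

For real features, swap the random-init model for the `from_pretrained` call above.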
## Citation

**BibTeX:**

```bibtex
@article{hassid2024textually,
  title={Textually pretrained speech language models},
  author={Hassid, Michael and Remez, Tal and Nguyen, Tu Anh and Gat, Itai and Conneau, Alexis and Kreuk, Felix and Copet, Jade and Defossez, Alexandre and Synnaeve, Gabriel and Dupoux, Emmanuel and others},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  year={2024}
}
```