---
library_name: transformers
license: mit
---

# Model Card for mhubert-base-25hz

This is a version of [Hubert](https://ai.meta.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression/) by Meta. It was introduced in [TWIST](https://arxiv.org/abs/2305.13009) and has proved valuable as a speech tokeniser for training SpeechLMs.

These model weights were converted by [SLP-RL](https://www.cs.huji.ac.il/~adiyoss/slprl/index.html) from the original [Textlesslib release](https://github.com/facebookresearch/textlesslib/tree/main/examples/twist).


## Model Details

### Model Description

This Hubert model was introduced in [TWIST](https://arxiv.org/abs/2305.13009); we encourage you to look there for full details.

It was trained on a varied mixture of datasets: Multilingual LibriSpeech, VoxPopuli, Common Voice, Spotify, and Fisher. This Hubert base model was
trained for 3 iterations with the default 50Hz feature rate. For the 4th iteration, the authors added an additional convolutional layer with stride 2
to the CNN encoder, halving the feature rate to 25Hz.
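
As a quick sanity check on those rates, here is the downsampling arithmetic — a sketch assuming 16 kHz input audio and the standard HuBERT base encoder strides (an assumption; the exact configuration is in the checkpoint's `config.json`):

```python
import math

# Feature-rate arithmetic (sketch): the CNN encoder strides multiply to a
# total downsampling factor, and the feature rate is the audio sample rate
# divided by that factor. Strides below are the standard HuBERT base ones.
strides = (5, 2, 2, 2, 2, 2, 2)
factor = math.prod(strides)           # total downsampling factor: 320

print(16_000 / factor)                # 50.0 Hz -- default feature rate
print(16_000 / (factor * 2))          # 25.0 Hz -- with the extra stride-2 layer
```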

We converted the original Fairseq release to Hugging Face 🤗 using the [conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/hubert/convert_hubert_original_pytorch_checkpoint_to_pytorch.py), 
after [adding support](https://github.com/huggingface/transformers/pull/34389), 
and verified that the results are [identical](https://github.com/huggingface/transformers/blob/10feacd88aef9569e240b7e3833ab32b297e4460/tests/models/hubert/test_modeling_hubert.py#L947).

- **Developed by:** Hassid et al.
- **Shared by:** [SLP-RL](https://www.cs.huji.ac.il/~adiyoss/slprl/index.html)
- **Model type:** `transformers.HubertModel`
- **Languages:** Multilingual
- **License:** MIT, see [textlesslib license](https://github.com/facebookresearch/textlesslib/blob/main/LICENSE) for full details

### Model Sources

- **Repository:** https://github.com/facebookresearch/textlesslib/tree/main/examples/twist
- **Paper:** https://arxiv.org/abs/2305.13009

## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This is a base HubertModel and as such is useful as a feature extractor for speech tokenisation, for tasks such as
[Spoken Language Modelling](https://arxiv.org/abs/2409.07437) or [Speaking Style Conversion](https://arxiv.org/abs/2212.09730); see the sketch below.
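
As an illustrative sketch of the tokenisation pipeline: features from an intermediate layer are clustered into discrete units with k-means. The quantiser file and the layer index below are assumptions for illustration — TWIST's actual quantiser ships with textlesslib, not with these weights.

```python
import joblib  # assumption: a scikit-learn k-means quantiser saved with joblib
import torch
from transformers import HubertModel

model = HubertModel.from_pretrained('slprl/mhubert-base-25hz').eval()
kmeans = joblib.load('kmeans.bin')  # hypothetical quantiser checkpoint

wav = torch.randn(1, 16_000)  # stand-in for one second of real 16 kHz audio
with torch.no_grad():
    # Unit-extraction pipelines typically cluster an intermediate layer;
    # the layer index here is an assumption, not the documented choice.
    feats = model(wav, output_hidden_states=True).hidden_states[11]

units = kmeans.predict(feats.squeeze(0).numpy())  # discrete speech units
```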

## How to Get Started with the Model

This model currently requires a clone of the transformers repository; support will be merged into the package in an upcoming release, so use version `transformers>=??` once it is available.
Afterwards it can be used as follows:

```python
from transformers import HubertModel

# Load the converted checkpoint from the Hugging Face Hub
model = HubertModel.from_pretrained('slprl/mhubert-base-25hz')
```
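
Once loaded, the model acts as a plain feature extractor over raw 16 kHz waveforms. A minimal sketch (the random tensor stands in for real audio; the exact frame count depends on convolutional padding):

```python
import torch
from transformers import HubertModel

model = HubertModel.from_pretrained('slprl/mhubert-base-25hz').eval()

wav = torch.randn(1, 16_000)  # replace with a real 16 kHz waveform
with torch.no_grad():
    features = model(wav).last_hidden_state

# At 25Hz, one second of audio yields roughly 25 frames of 768-d features.
print(features.shape)  # e.g. torch.Size([1, 24, 768])
```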


## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**

```
@article{hassid2024textually,
  title={Textually pretrained speech language models},
  author={Hassid, Michael and Remez, Tal and Nguyen, Tu Anh and Gat, Itai and Conneau, Alexis and Kreuk, Felix and Copet, Jade and Defossez, Alexandre and Synnaeve, Gabriel and Dupoux, Emmanuel and others},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  year={2024}
}
```

## Model Card Authors

[Gallil Maimon](https://pages.cs.huji.ac.il/gallilmaimon/)