mzboito committed on
Commit f9c4cfe
1 Parent(s): 333a174

Update README.md

Files changed (1)
  1. README.md +37 -17

README.md CHANGED
@@ -123,33 +123,54 @@ language:
   - zh
   ---
 
- ## mHuBERT-147 models
 
- mHuBERT-147 are compact and competitive multilingual HuBERT models trained on 90K hours of open-license data in 147 languages.
 
- This repository contains:
   * Fairseq checkpoint (original);
- * HuggingFace checkpoint;
   * Faiss index for continuous pre-training (OPQ16_64,IVF1000_HNSW32,PQ16x4fsr).
 
- # Additional Information
-
 
- **Manifest list:** https://huggingface.co/utter-project/mHuBERT-147-base-3rd-iter/tree/main/manifest
 
- Please note that since training, there were CommonVoice removal requests. This means that some of the listed files are no longer available.
 
- **Fairseq fork:** https://github.com/utter-project/fairseq
 
- **Scripts for pre-processing/faiss clustering:** https://github.com/utter-project/mHuBERT-147-scripts
 
- **Languages present not indexed by Huggingface:** Asturian (ast), Basaa (bas), Cebuano (ceb), Central Kurdish/Sorani (ckb), Hakha Chin (cnh), Hawaiian (haw), Upper Sorbian (hsb) Kabyle (kab), Moksha (mdf), Meadow Mari (mhr), Hill Mari (mrj), Erzya (myv), Taiwanese Hokkien (nan-tw), Sursilvan (rm-sursilv), Vallader (rm-vallader), Sakha (sah), Santali (sat), Scots (sco), Saraiki (skr), Tigre (tig), Tok Pisin (tpi), Akwapen Twi (tw-akuapem), Asante Twi (tw-asante), Votic (vot), Waray (war), Cantonese (yue).
 
- # Datasets Included
 
- For ASR/ST/TTS datasets, only train set is used.
   * [Aishell](https://www.openslr.org/33/) and [AISHELL-3](https://www.openslr.org/93/)
   * [BibleTTS](https://www.openslr.org/129/)
   * [ClovaCall](https://github.com/clovaai/ClovaCall)
@@ -166,8 +187,10 @@ For ASR/ST/TTS datasets, only train set is used.
   * [VoxLingua107](https://bark.phon.ioc.ee/voxlingua107/)
   * [VoxPopuli](https://github.com/facebookresearch/voxpopuli/)
 
- # Citing
 
   ```
   @inproceedings{boito2024mhubert,
@@ -178,9 +201,6 @@ booktitle={Interspeech 2024},
   }
   ```
 
-
- # Funding
-
   <img src="https://cdn-uploads.huggingface.co/production/uploads/62262e19d36494a6f743a28d/HbzC1C-uHe25ewTy2wyoK.png" width=7% height=7%>
   This is an output of the European Project UTTER (Unified Transcription and Translation for Extended Reality) funded by European Union’s Horizon Europe Research and Innovation programme under grant agreement number 101070631.
 
 
   - zh
   ---
 
+ **This repository contains the SECOND ITERATION mHuBERT-147 model.**
+ **The best mHuBERT-147 model is available [here](https://huggingface.co/utter-project/mHuBERT-147).**
 
+ **MODEL DETAILS:** 2nd iteration, K=1000, HuBERT base architecture (95M parameters), 147 languages.
+
+ # Table of Contents:
+
+ 1. [Summary](https://huggingface.co/utter-project/mHuBERT-147#mhubert-147-models)
+ 2. [Training Data and Code](https://huggingface.co/utter-project/mHuBERT-147#training)
+ 3. [ML-SUPERB Scores](https://huggingface.co/utter-project/mHuBERT-147#ml-superb-scores)
+ 4. [Languages and Datasets](https://huggingface.co/utter-project/mHuBERT-147#languages-and-datasets)
+ 5. [Citing and Funding Information](https://huggingface.co/utter-project/mHuBERT-147#citing-and-funding-information)
+
+ # mHuBERT-147 models
+
+ mHuBERT-147 are compact and competitive multilingual HuBERT models trained on 90K hours of open-license data in 147 languages.
+ Unlike *traditional* HuBERT models, mHuBERT-147 models are trained using faiss IVF discrete speech units.
+ Training employs two-level up-sampling (by language and by data source). See [our paper](https://arxiv.org/pdf/2406.06371) for more information.
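The two-level up-sampling can be sketched with a simple temperature-based scheme. This is a minimal illustration with made-up hour counts and exponent; the paper defines the exact procedure and values.

```python
# Minimal sketch of two-level temperature up-sampling. The hour counts and
# the exponent alpha are made-up illustrative values, not the paper's.
def upsample(hours: dict, alpha: float = 0.5) -> dict:
    """Turn raw hour counts into sampling probabilities p_i proportional to hours_i**alpha.

    With alpha < 1, low-resource entries are sampled more often than their
    natural share of the data (up-sampling).
    """
    weights = {name: h ** alpha for name, h in hours.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Level 1: sampling probability per language.
lang_p = upsample({"en": 40000.0, "sw": 100.0})
# Level 2: sampling probability per data source within one language.
src_p = upsample({"commonvoice": 80.0, "bibletts": 20.0})
```

With these toy numbers, "sw" is sampled well above its natural share (100/40100) of the hours, which is the point of the scheme: low-resource languages and sources are not drowned out during multilingual batching.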
+
+ **This repository contains:**
   * Fairseq checkpoint (original);
+ * HuggingFace checkpoint (conversion using the transformers library);
   * Faiss index for continuous pre-training (OPQ16_64,IVF1000_HNSW32,PQ16x4fsr).
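As a quick sanity check of the parameter count, the HuBERT base architecture can be instantiated locally with the transformers library (the default `HubertConfig` corresponds to the base architecture); this sketch assumes transformers and torch are installed, and deliberately avoids downloading the checkpoint:

```python
# Sketch: instantiate the HuBERT base architecture with random weights (no
# download) and count parameters to sanity-check the ~95M figure above.
# Loading the real weights would instead use HubertModel.from_pretrained
# with this repository's id.
from transformers import HubertConfig, HubertModel

config = HubertConfig()   # defaults correspond to the HuBERT base architecture
model = HubertModel(config)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")  # roughly 94-95M
```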
 
+ **Related Models:**
+ * [3rd Iteration mHuBERT-147](https://huggingface.co/utter-project/mHuBERT-147) (best)
+ * [1st Iteration mHuBERT-147](https://huggingface.co/utter-project/mHuBERT-147-base-1st-iter)
+ * [HUTTER-12 CommonVoice Prototype (12 languages)](https://huggingface.co/utter-project/hutter-12-3rd-base)
 
+ # Training
 
+ * **[Manifest list available here.](https://huggingface.co/utter-project/mHuBERT-147-base-3rd-iter/tree/main/manifest)** Please note that since training, there have been CommonVoice removal requests, so some of the listed files are no longer available.
+ * **[Fairseq fork](https://github.com/utter-project/fairseq)** contains the scripts for training with multilingual batching and two-level up-sampling.
+ * **[Scripts for pre-processing/faiss clustering available here.](https://github.com/utter-project/mHuBERT-147-scripts)**
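The faiss clustering step produces K=1000 centroids, and each HuBERT feature frame is then labeled with its nearest centroid to form the discrete training targets. A minimal numpy sketch of that nearest-centroid assignment, which the OPQ/IVF/PQ index approximates much faster at 90K-hour scale (shapes and data here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 1000, 768                        # K=1000 units; 768-dim HuBERT-base features
centroids = rng.standard_normal((K, d))
features = rng.standard_normal((50, d))  # 50 fake feature frames

def assign_units(feats: np.ndarray, cents: np.ndarray) -> np.ndarray:
    """Label each frame with the id of its nearest centroid (squared L2).

    Expands ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2 so the whole assignment
    is one matrix product plus broadcasting.
    """
    d2 = (feats ** 2).sum(1, keepdims=True) - 2.0 * feats @ cents.T + (cents ** 2).sum(1)
    return d2.argmin(axis=1)

units = assign_units(features, centroids)  # one discrete unit id per frame
```

The exact exhaustive search above is what the `OPQ16_64,IVF1000_HNSW32,PQ16x4fsr` index trades for speed and memory: OPQ/PQ compress the vectors, and the IVF/HNSW structure probes only a subset of centroids per query.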
 
+ # ML-SUPERB Scores
 
+ mHuBERT-147 reaches second and first position on the 10min and 1h leaderboards, respectively. We achieve new SOTA scores for three LID tasks.
+ See [our paper](https://arxiv.org/pdf/2406.06371) for more information.
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62262e19d36494a6f743a28d/chXjExnWc3rhhtdsyiU-W.png)
+
+ # Languages and Datasets
+
+ **Datasets:** For ASR/ST/TTS datasets, only the train set is used.
   * [Aishell](https://www.openslr.org/33/) and [AISHELL-3](https://www.openslr.org/93/)
   * [BibleTTS](https://www.openslr.org/129/)
   * [ClovaCall](https://github.com/clovaai/ClovaCall)
 
   * [VoxLingua107](https://bark.phon.ioc.ee/voxlingua107/)
   * [VoxPopuli](https://github.com/facebookresearch/voxpopuli/)
 
+ **Languages present not indexed by Hugging Face:** Asturian (ast), Basaa (bas), Cebuano (ceb), Central Kurdish/Sorani (ckb), Hakha Chin (cnh), Hawaiian (haw), Upper Sorbian (hsb), Kabyle (kab), Moksha (mdf), Meadow Mari (mhr), Hill Mari (mrj), Erzya (myv), Taiwanese Hokkien (nan-tw), Sursilvan (rm-sursilv), Vallader (rm-vallader), Sakha (sah), Santali (sat), Scots (sco), Saraiki (skr), Tigre (tig), Tok Pisin (tpi), Akuapem Twi (tw-akuapem), Asante Twi (tw-asante), Votic (vot), Waray (war), Cantonese (yue).
+
 
+ # Citing and Funding Information
 
   ```
   @inproceedings{boito2024mhubert,
   }
   ```
 
   <img src="https://cdn-uploads.huggingface.co/production/uploads/62262e19d36494a6f743a28d/HbzC1C-uHe25ewTy2wyoK.png" width=7% height=7%>
   This is an output of the European Project UTTER (Unified Transcription and Translation for Extended Reality) funded by European Union’s Horizon Europe Research and Innovation programme under grant agreement number 101070631.