wanchichen committed on
Commit
3b86056
1 Parent(s): 8a2157c

Update README.md

Files changed (1)
  1. README.md +26 -1
README.md CHANGED
@@ -4045,4 +4045,29 @@ language:
  - zyn
  - zyp
  - zzj
- ---
+ ---
+
+ MMS ulab v2 is a massively multilingual speech dataset that contains **8900 hours** of unlabeled speech across **4023 languages**. It is a reproduced and extended version of the MMS ulab dataset originally proposed in [Scaling Speech Technology to 1000+ Languages](https://arxiv.org/abs/2305.13516).
+ This dataset includes the raw, unsegmented audio in 16kHz single-channel format. It can be segmented into utterances with a voice activity detection (VAD) model such as [this one](https://github.com/wiseman/py-webrtcvad).
+ We use 6700 hours of MMS ulab v2 (post-segmentation) to train [XEUS](), a multilingual speech encoder for 4000+ languages.
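As a loose illustration of the segmentation step mentioned above (not part of the dataset card itself), the sketch below runs frame-level voice activity detection with py-webrtcvad on a single 16 kHz mono WAV file; the filename, frame size, and aggressiveness setting are assumptions.

```python
# Minimal sketch: frame-level VAD with py-webrtcvad on a 16 kHz, mono, 16-bit PCM WAV.
# "sample.wav", the 30 ms frame size, and aggressiveness 2 are illustrative assumptions.
import wave
import webrtcvad

vad = webrtcvad.Vad(2)   # aggressiveness: 0 (least aggressive) to 3 (most)
frame_ms = 30            # webrtcvad accepts 10, 20, or 30 ms frames

with wave.open("sample.wav", "rb") as wf:
    assert wf.getnchannels() == 1 and wf.getframerate() == 16000 and wf.getsampwidth() == 2
    sample_rate = wf.getframerate()
    samples_per_frame = int(sample_rate * frame_ms / 1000)
    flags = []
    while True:
        frame = wf.readframes(samples_per_frame)
        if len(frame) < samples_per_frame * 2:   # 16-bit PCM -> 2 bytes per sample
            break
        flags.append(vad.is_speech(frame, sample_rate))

# `flags` is a per-frame speech/non-speech mask; runs of consecutive speech frames
# can be merged into utterance boundaries for segmentation.
print(f"{sum(flags)} of {len(flags)} frames flagged as speech")
```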
+
+ ## License and Acknowledgement
+
+ MMS ulab v2 is released under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
+
+ If you use this dataset, we ask that you cite the following paper:
+
+ ```
+ @article{pratap2024scaling,
+   title={Scaling speech technology to 1,000+ languages},
+   author={Pratap, Vineel and Tjandra, Andros and Shi, Bowen and Tomasello, Paden and Babu, Arun and Kundu, Sayani and Elkahky, Ali and Ni, Zhaoheng and Vyas, Apoorv and Fazel-Zarandi, Maryam and others},
+   journal={Journal of Machine Learning Research},
+   volume={25},
+   number={97},
+   pages={1--52},
+   year={2024}
+ }
+ ```
+
+ Please also reference [The Global Recordings Network](https://globalrecordings.net/en/copyright), the original source of the data.