---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: language
      dtype: string
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
  splits:
    - name: train
      num_bytes: 54665637580
      num_examples: 423
  download_size: 53917768734
  dataset_size: 54665637580
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: cc-by-nc-sa-4.0
language:
  - multilingual
task_categories:
  - audio-to-audio
  - audio-classification
---

Jesus Dramas is a collection of religious audio dramas across 430 languages, totalling around 640 hours of audio. It can be used for language identification, spoken language modelling, or speech representation learning.

This dataset contains the raw, unsegmented audio in 16 kHz single-channel format. Each audio drama can feature multiple speakers, with both male and female voices. The audio can be segmented into utterances with a voice activity detection (VAD) model such as [py-webrtcvad](https://github.com/wiseman/py-webrtcvad); see the segmentation example at the end of this card. The original audio sources were crawled from [InspirationalFilms](https://www.inspirationalfilms.com/).

We use this corpus to train [XEUS](https://huggingface.co/espnet/xeus), a multilingual speech encoder for 4000+ languages. For more details about the dataset and its usage, please refer to our [paper](https://wanchichen.github.io/pdf/xeus.pdf) or [project page](https://www.wavlab.org/activities/2024/xeus/).

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("espnet/jesus_dramas")
```

Each example in the dataset has three fields:

```
{
    'id': the utterance id,
    'language': the language name,
    'audio': the raw audio
}
```

## License and Acknowledgement

Jesus Dramas is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) license. If you use this dataset, we ask that you cite our paper:

```
@misc{chen2024robustspeechrepresentationlearning,
      title={Towards Robust Speech Representation Learning for Thousands of Languages},
      author={William Chen and Wangyou Zhang and Yifan Peng and Xinjian Li and Jinchuan Tian and Jiatong Shi and Xuankai Chang and Soumi Maiti and Karen Livescu and Shinji Watanabe},
      year={2024},
      eprint={2407.00837},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.00837},
}
```

Please also attribute the original creators of the data.
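
## Segmentation Example

As noted above, the unsegmented recordings can be split into utterances with a VAD model such as py-webrtcvad. The snippet below is a minimal sketch of this workflow; the frame length, aggressiveness setting, and use of the decoded `{'array', 'sampling_rate'}` audio dict from `datasets` are illustrative assumptions, not part of the dataset itself.

```python
# Minimal VAD sketch using py-webrtcvad (pip install webrtcvad).
# Frame length (30 ms) and aggressiveness (2) are illustrative choices.
import numpy as np
import webrtcvad
from datasets import load_dataset

dataset = load_dataset("espnet/jesus_dramas", split="train", streaming=True)
vad = webrtcvad.Vad(2)  # aggressiveness: 0 (least) to 3 (most)

example = next(iter(dataset))
audio = example["audio"]["array"]        # float waveform in [-1, 1]
sr = example["audio"]["sampling_rate"]   # 16000 Hz

# webrtcvad expects 16-bit mono PCM in 10/20/30 ms frames
pcm = (np.clip(audio, -1.0, 1.0) * 32767).astype(np.int16).tobytes()
frame_bytes = int(sr * 0.03) * 2         # 30 ms frames, 2 bytes per sample

speech_flags = []
for start in range(0, len(pcm) - frame_bytes + 1, frame_bytes):
    frame = pcm[start:start + frame_bytes]
    speech_flags.append(vad.is_speech(frame, sr))
# speech_flags holds one boolean per 30 ms frame
```

In practice, the per-frame decisions would typically be smoothed (e.g. merging short gaps and dropping very short speech runs) before cutting utterance boundaries.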