---
dataset_info:
  config_name: CC_BY_3.0
  features:
  - name: text
    dtype: string
  - name: start
    dtype: float64
  - name: end
    dtype: float64
  - name: speaker
    dtype: string
  - name: language
    dtype: string
  - name: dnsmos
    dtype: float64
  - name: source_podcast
    dtype: string
  - name: audio
    dtype: audio
  - name: speaker_id
    dtype: string
  splits:
  - name: train
    num_bytes: 1437253098.316
    num_examples: 17942
  download_size: 1432758259
  dataset_size: 1437253098.316
configs:
- config_name: CC_BY_3.0
  data_files:
  - split: train
    path: CC_BY_3.0/train-*
license: cc
---

> [!TIP]
> This dataset keeps only the CC-BY 3.0 podcasts, which were processed with the [Emilia-Pipe](https://github.com/open-mmlab/Amphion/blob/main/preprocessors/Emilia/README.md#emilia-pipe-overview-) using Whisper Large v3.

# Some Podcasts

The podcasts are taken from the [PodcastFillers dataset](https://podcastfillers.github.io/). PodcastFillers consists of 199 full-length English podcast episodes with manually annotated filler words and automatically generated transcripts. The podcast audio recordings, sourced from SoundCloud, are CC-licensed, gender-balanced, and total 145 hours of audio from over 350 speakers.

> [!TIP]
> This dataset does not include the PodcastFillers filler-word annotations, which are distributed under a non-commercial license. See [here](https://podcastfillers.github.io/license/) for more details.

## Length by license type

**CC_BY 3.0:** Total length: 51.44h

## License

See [here](https://podcastfillers.github.io/license/) for more details. The licenses are also recorded in the metadata.

## Citation Information

```
@inproceedings{Zhu:FillerWords:INTERSPEECH:22,
  title = {Filler Word Detection and Classification: A Dataset and Benchmark},
  booktitle = {23rd Annual Cong.~of the Int.~Speech Communication Association (INTERSPEECH)},
  address = {Incheon, Korea},
  month = {Sep.},
  url = {https://arxiv.org/abs/2203.15135},
  author = {Zhu, Ge and Caceres, Juan-Pablo and Salamon, Justin},
  year = {2022},
}
```

### Contributions

Thanks to [@ylacombe](https://github.com/ylacombe) for adding this dataset.
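
## Usage

A minimal loading sketch using the 🤗 Datasets library. The repository id below is a placeholder (the actual Hub path is not stated on this card); the `CC_BY_3.0` config name and the column names come from the metadata above.

```python
from datasets import load_dataset

# Placeholder repository id: replace with this dataset's actual path on the Hub.
ds = load_dataset("<user>/<dataset_name>", "CC_BY_3.0", split="train")

sample = ds[0]
print(sample["text"])        # transcript of the segment
print(sample["speaker_id"])  # speaker identifier
print(sample["dnsmos"])      # DNSMOS speech-quality score
audio = sample["audio"]      # dict with "array" and "sampling_rate"

# Optionally keep only higher-quality segments
# (the 3.3 DNSMOS threshold here is chosen purely for illustration).
clean = ds.filter(lambda ex: ex["dnsmos"] >= 3.3)
```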