# MultiSeg Dataset for ASR Hallucinations
## Description
MultiSeg is a perturbed and altered version of the TEDLIUM3 dataset, specifically created for evaluating the robustness of Automatic Speech Recognition (ASR) systems. This dataset is derived from the 'speakeroverlap' subset, which consists of held-back training data from TEDLIUM3.
## Purpose
The primary purpose of the MultiSeg dataset is to:
- Elicit hallucinations from ASR systems
- Evaluate ASR performance under various perturbation conditions
- Assess the impact of speaker-dependent factors on ASR accuracy
## Dataset Creation
The MultiSeg dataset was created by applying the following modifications to the original TEDLIUM3 'speakeroverlap' subset:
- Concatenation of two speech segments
- Injection of silence between speech segments
- Application of variable Signal-to-Noise Ratio (SNR)
- Addition of reverberation effects
These modifications aim to simulate challenging real-world conditions for ASR systems; a rough sketch of such transforms is shown below.
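
The card does not specify how these perturbations were implemented, so the following is only a minimal illustration of the kinds of transforms described, written in plain NumPy. The function names, segment lengths, silence duration, SNR value, and synthetic impulse response are all assumptions for illustration, not the actual generation parameters of MultiSeg.

```python
import numpy as np


def concat_with_silence(seg_a: np.ndarray, seg_b: np.ndarray,
                        sample_rate: int, silence_s: float) -> np.ndarray:
    """Concatenate two speech segments with a stretch of silence between them."""
    silence = np.zeros(int(silence_s * sample_rate), dtype=seg_a.dtype)
    return np.concatenate([seg_a, silence, seg_b])


def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Add noise to speech, scaling the noise to reach the requested SNR (in dB)."""
    if len(noise) < len(speech):  # loop the noise if it is too short
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    target_noise_power = speech_power / (10.0 ** (snr_db / 10.0))
    return speech + noise * np.sqrt(target_noise_power / noise_power)


def add_reverb(signal: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Apply simple reverberation by convolving with a room impulse response."""
    wet = np.convolve(signal, rir)[: len(signal)]
    peak = np.max(np.abs(wet)) + 1e-12
    return wet / peak * np.max(np.abs(signal))  # keep roughly the original level


# Example with synthetic placeholders (replace with real segments, noise, and an RIR):
sr = 16_000
seg_a = np.random.randn(sr * 2).astype(np.float32) * 0.1
seg_b = np.random.randn(sr * 3).astype(np.float32) * 0.1
noise = np.random.randn(sr).astype(np.float32) * 0.05
rir = np.exp(-np.linspace(0, 8, sr // 4)).astype(np.float32)

perturbed = concat_with_silence(seg_a, seg_b, sr, silence_s=1.5)
perturbed = mix_at_snr(perturbed, noise, snr_db=10.0)
perturbed = add_reverb(perturbed, rir)
```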
## Usage
To use this dataset:
- Download the dataset from the Hugging Face repository
- Load the audio files and their corresponding transcriptions
- Run your ASR system on the audio and compare its output against the references to evaluate its hallucinatory tendencies (see the loading sketch below)

A hallucination measurement algorithm will follow shortly on my GitHub.
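
A minimal loading sketch using the Hugging Face `datasets` library is shown below. The repository id and the transcription column name are placeholders, and the sketch assumes a standard `audio` feature; adjust them to the actual dataset schema and swap in your own ASR system where indicated.

```python
from datasets import load_dataset

# "your-namespace/MultiSeg" is a placeholder repo id and "text" an assumed
# transcription column -- check the dataset page for the real names.
ds = load_dataset("your-namespace/MultiSeg", split="train")

for sample in ds.select(range(3)):
    audio = sample["audio"]             # standard Audio feature: {"array", "sampling_rate", "path"}
    reference = sample.get("text", "")  # assumed transcription column
    duration = len(audio["array"]) / audio["sampling_rate"]
    # hypothesis = your_asr_system.transcribe(audio["array"], audio["sampling_rate"])
    # ... compare `hypothesis` against `reference` to look for hallucinated content
    print(f"{duration:.2f} s | {reference[:60]}")
```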
## Original Dataset Information
This dataset is derived from TEDLIUM3 (release 1), which is distributed under the Creative Commons BY-NC-ND 3.0 license. Please cite the original corpus:

François Hernandez, Vincent Nguyen, Sahar Ghannay, Natalia Tomashenko, and Yannick Estève, "TED-LIUM 3: Twice as much data and corpus repartition for experiments on speaker adaptation", submitted to SPECOM 2018.