---
language:
- bn
license: cc-by-nc-4.0
task_categories:
- automatic-speech-recognition
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: text
    dtype: string
  - name: duration
    dtype: float64
  - name: category
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: train
    num_bytes: 219091915.875
    num_examples: 1753
  download_size: 214321460
  dataset_size: 219091915.875
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# MegaBNSpeech
This dataset comes from a study aimed at tackling one of the primary challenges in developing Automatic Speech Recognition (ASR) for a low-resource language, Bangla: the limited access to domain-specific labeled data. To address this, the study introduces a pseudo-labeling approach for building a domain-agnostic ASR dataset.

The methodology produced a robust labeled Bangla speech dataset of 20k+ hours, covering a wide variety of topics, speaking styles, dialects, noisy environments, and conversational scenarios. A conformer-based ASR system was trained on this data, and its effectiveness, especially when trained on pseudo-labeled data, was benchmarked on publicly available datasets and compared with other models. The experimental resources from this study are intended to be made publicly available.
## How to use
The `datasets` library lets you load and process this dataset efficiently in pure Python. You can download and prepare the dataset on your local drive with a single call to the `load_dataset` function.
```python
from datasets import load_dataset

dataset = load_dataset("hishab/MegaBNSpeech", split="train")
```
The `datasets` library also lets you stream the dataset on the fly by passing `streaming=True` to the `load_dataset` function. In streaming mode, samples are loaded one at a time as you iterate, instead of the whole dataset being stored on disk.
```python
from datasets import load_dataset

dataset = load_dataset("hishab/MegaBNSpeech", split="train", streaming=True)
print(next(iter(dataset)))
```
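A streamed dataset is an ordinary Python iterable, so you can peek at a handful of samples without downloading everything. The snippet below is an illustrative sketch, not part of the original card; the `text` and `duration` column names come from the dataset features above.

```python
from itertools import islice

# Print the duration and transcription of the first three streamed samples.
for sample in islice(dataset, 3):
    print(sample["duration"], sample["text"])
```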
### Speech Recognition (ASR)
```python
from datasets import load_dataset

mega_bn_asr = load_dataset("hishab/MegaBNSpeech")

# see structure
print(mega_bn_asr)

# load an audio sample on the fly
audio_input = mega_bn_asr["train"][0]["audio"]    # first decoded audio sample
transcription = mega_bn_asr["train"][0]["text"]   # first transcription (the column is named `text` in the features above)
# use `audio_input` and `transcription` to fine-tune your model for ASR
```
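To make the fine-tuning hint concrete, here is a minimal sketch of turning one sample into model inputs. The `openai/whisper-small` checkpoint and the `transformers` processing steps are assumptions chosen for illustration, not a model prescribed by MegaBNSpeech.

```python
# Minimal sketch, assuming a Hugging Face `transformers` Whisper checkpoint;
# the checkpoint choice is an assumption, not part of this dataset card.
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small")  # assumed checkpoint

sample = mega_bn_asr["train"][0]
inputs = processor(
    sample["audio"]["array"],                        # decoded waveform
    sampling_rate=sample["audio"]["sampling_rate"],  # sampling rate from the audio feature
    return_tensors="pt",
)
labels = processor.tokenizer(sample["text"]).input_ids  # tokenized transcription
```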
## Data Structure
- The dataset was developed using a pseudo-labeling approach.
- The largest collection of Bangla audio-video data was curated and cleaned from various Bangla TV channels on YouTube. This data covers varying domains, speaking styles, dialects, and communication channels.
- Alignments from two ASR systems were leveraged to segment the audio and automatically annotate the resulting segments.
- The created dataset was used to design an end-to-end state-of-the-art Bangla ASR system.
### Data Instances
- Size of downloaded dataset files: ___ GB
- Size of the generated dataset: ___ MB
- Total amount of disk used: ___ GB
An example of a data instance looks as follows:
```json
{
  "id": 0,
  "audio_path": "data/train/wav/UCPREnbhKQP-hsVfsfKP-mCw_id_2kux6rFXMeM_85.wav",
  "transcription": "পরীক্ষার মূল্য তালিকা উন্মুক্ত স্থানে প্রদর্শনের আদেশ দেন এই আদেশ পাওয়ার",
  "duration": 5.055
}
```
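Because each instance carries a `duration` in seconds, corpus-level statistics fall out of a one-liner. The snippet below is a small illustrative sketch, not part of the original card:

```python
# Total audio duration in hours, summed over the `duration` column (seconds).
total_hours = sum(mega_bn_asr["train"]["duration"]) / 3600
print(f"{total_hours:.1f} hours of audio")
```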
### Data Fields
The data fields are as follows:

- `id` (int): ID of the audio sample
- `audio_path` (str): Path to the audio file
- `transcription` (str): Transcription of the audio file
- `duration` (float): Duration of the audio in seconds (e.g., 5.055)
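These fields work directly with the `datasets` API. As an illustrative example (the 10-second threshold is arbitrary, chosen only for demonstration), the training split can be filtered by the `duration` field:

```python
# Keep only samples at most 10 seconds long (threshold chosen for illustration).
short_clips = mega_bn_asr["train"].filter(lambda ex: ex["duration"] <= 10.0)
print(len(short_clips), "samples of <= 10 s")
```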
## Dataset Creation
The dataset was developed using a pseudo-labeling approach, yielding a large-scale, high-quality speech dataset of approximately 20,000 hours for domain-agnostic Bangla ASR.
## Social Impact of Dataset

## Limitations
## Citation Information
You can access the MegaBNSpeech paper at _________________. Please cite the paper when referencing the MegaBNSpeech corpus as:
```bibtex
@article{_______________,
  title   = {_______________________________},
  author  = {___,___,___,___,___,___,___,___},
  journal = {_______________________________},
  url     = {_________________________________},
  year    = {2023},
}
```