---
language:
- bm
- fr
license: cc-by-sa-4.0
task_categories:
- text-to-speech
dataset_info:
- config_name: default
features:
- name: audio
dtype:
audio:
sampling_rate: 22050
- name: bambara
dtype: string
- name: french
dtype: string
- name: duration
dtype: float64
- name: speaker_embeddings
sequence: float32
- name: speaker_id
dtype: int32
splits:
- name: train
num_bytes: 3349350881.55
num_examples: 30765
download_size: 3236187232
dataset_size: 3349350881.55
- config_name: denoised
features:
- name: audio
dtype:
audio:
sampling_rate: 22050
- name: bambara
dtype: string
- name: french
dtype: string
- name: duration
dtype: float64
- name: speaker_embeddings
sequence: float32
- name: speaker_id
dtype: int32
splits:
- name: train
num_bytes: 8746406033.55
num_examples: 30765
download_size: 7617758070
dataset_size: 8746406033.55
- config_name: enhanced
features:
- name: audio
dtype:
audio:
sampling_rate: 22050
- name: bambara
dtype: string
- name: french
dtype: string
- name: duration
dtype: float64
- name: speaker_embeddings
sequence: float32
- name: speaker_id
dtype: int32
splits:
- name: train
num_bytes: 4007425321.55
num_examples: 30765
download_size: 3300189350
dataset_size: 4007425321.55
- config_name: jeli_asr
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: bambara
dtype: string
- name: french
dtype: string
- name: duration
dtype: float64
- name: speaker_embeddings
sequence: float64
- name: speaker_id
dtype: int32
splits:
- name: train
num_bytes: 2810771347.45
num_examples: 26335
download_size: 2674156876
dataset_size: 2810771347.45
- config_name: jeli_asr_denoised
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: bambara
dtype: string
- name: french
dtype: string
- name: duration
dtype: float64
- name: speaker_id
dtype: int32
- name: speaker_embeddings
sequence: float64
splits:
- name: train
num_bytes: 7549806425.45
num_examples: 26335
download_size: 6487714877
dataset_size: 7549806425.45
- config_name: jeli_asr_enhanced
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: bambara
dtype: string
- name: french
dtype: string
- name: duration
dtype: float64
- name: speaker_embeddings
sequence: float32
- name: speaker_id
dtype: int32
splits:
- name: train
num_bytes: 2756891639.45
num_examples: 26335
download_size: 2205844679
dataset_size: 2756891639.45
- config_name: mali_pense
features:
- name: audio
dtype:
audio:
sampling_rate: 22050
- name: bambara
dtype: string
- name: french
dtype: string
- name: duration
dtype: float64
- name: speaker_embeddings
sequence: float32
- name: speaker_id
dtype: int32
splits:
- name: train
num_bytes: 592513748.1
num_examples: 4430
download_size: 590736972
dataset_size: 592513748.1
- config_name: mali_pense_denoised
features:
- name: audio
dtype:
audio:
sampling_rate: 22050
- name: bambara
dtype: string
- name: french
dtype: string
- name: duration
dtype: float64
- name: speaker_embeddings
sequence: float32
- name: speaker_id
dtype: int32
splits:
- name: train
num_bytes: 1250533816.1
num_examples: 4430
download_size: 1160807299
dataset_size: 1250533816.1
- config_name: mali_pense_enhanced
features:
- name: audio
dtype:
audio:
sampling_rate: 22050
- name: bambara
dtype: string
- name: french
dtype: string
- name: duration
dtype: float64
- name: speaker_embeddings
sequence: float32
- name: speaker_id
dtype: int32
splits:
- name: train
num_bytes: 1250533816.1
num_examples: 4430
download_size: 1093970716
dataset_size: 1250533816.1
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: denoised
data_files:
- split: train
path: denoised/train-*
- config_name: enhanced
data_files:
- split: train
path: enhanced/train-*
- config_name: jeli_asr
data_files:
- split: train
path: jeli_asr/train-*
- config_name: jeli_asr_denoised
data_files:
- split: train
path: jeli_asr_denoised/train-*
- config_name: jeli_asr_enhanced
data_files:
- split: train
path: jeli_asr_enhanced/train-*
- config_name: mali_pense
data_files:
- split: train
path: mali_pense/train-*
- config_name: mali_pense_denoised
data_files:
- split: train
path: mali_pense_denoised/train-*
- config_name: mali_pense_enhanced
data_files:
- split: train
path: mali_pense_enhanced/train-*
---
# Overview
## Project
This dataset is part of a larger initiative dedicated to enabling Bambara speakers to access global knowledge without language barriers.
Our goal is to eliminate the need for Bambara speakers to learn a secondary language before they can acquire new information or skills.
By providing a robust dataset for Text-to-Speech (TTS) applications, we aim to support the creation of tools for the Bambara language, thus democratizing access to knowledge.
## Bambara Language
Bambara, also known as Bamanankan, is a Mande language spoken primarily in Mali by millions of people as a mother tongue and second language.
It serves as a lingua franca in Mali and is also spoken in neighboring countries such as Burkina Faso and Ivory Coast. Bambara is written in both the Latin script and the N'Ko script,
and it has a rich oral tradition that is integral to Malian culture.
# Dataset
## Source
The dataset was meticulously compiled with a focus on quality and utility. The source materials were obtained from the rich collection of Bambara content available at [Mali Pense](https://www.mali-pense.net/).
Audio recordings were carefully processed to improve clarity and usability.
## Processing
Noise reduction was a critical step in preparing the audio data to ensure high-quality samples. This was achieved using **DeepFilterNet**,
an advanced noise suppression model available on [GitHub](https://github.com/Rikorose/DeepFilterNet). The resulting clean audio provides clear and usable samples for TTS development.
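The exact DeepFilterNet settings used for this dataset are not documented here, but a minimal sketch of the library's Python API (the `deepfilternet` pip package) looks roughly like this; file names are placeholders:

```python
from df.enhance import enhance, init_df, load_audio, save_audio

# Load the default pretrained DeepFilterNet model and its filtering state.
model, df_state, _ = init_df()

# Read a noisy recording at the model's expected sampling rate,
# denoise it, and write the enhanced waveform back to disk.
noisy, _ = load_audio("raw/utterance_0001.wav", sr=df_state.sr())
clean = enhance(model, df_state, noisy)
save_audio("denoised/utterance_0001.wav", clean, df_state.sr())
```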
To enhance the dataset's applicability in personalized TTS systems, speaker embeddings were generated using the [pyannote/embedding](https://huggingface.co/pyannote/embedding) model on Hugging Face.
These embeddings capture unique speaker characteristics, allowing for speaker identification and differentiation in TTS applications.
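The embeddings are already included in every config, but if you want to recompute or extend them, the pyannote.audio inference API can be used roughly as follows (pyannote/embedding is a gated model, so a Hugging Face access token is required; the token and file name below are placeholders):

```python
from pyannote.audio import Inference, Model

# Accept the model's conditions on the Hub, then authenticate with your token.
model = Model.from_pretrained("pyannote/embedding", use_auth_token="HF_TOKEN")

# window="whole" yields a single embedding vector for the entire utterance.
inference = Inference(model, window="whole")
embedding = inference("denoised/utterance_0001.wav")  # numpy array (one vector)
```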
## Clustering
Speaker embeddings were clustered using the [HDBSCAN](https://hdbscan.readthedocs.io/en/latest/how_hdbscan_works.html) algorithm *(via the `hdbscan` pip package)* to infer speaker identities within the dataset.
While this clustering offers a basis for differentiating speakers, it is not **infallible**. Users are encouraged **to use the provided embeddings to refine** or generate their own speaker identification as needed for their specific applications.
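The clustering parameters behind the published `speaker_id` values are not documented here, so the snippet below is only a starting point for re-clustering the provided embeddings with the `hdbscan` package; `min_cluster_size`, the metric, and the placeholder matrix are illustrative:

```python
import hdbscan
import numpy as np

# Replace this placeholder with the per-utterance `speaker_embeddings` vectors
# from the dataset, stacked into a (n_utterances, dim) matrix.
X = np.random.default_rng(0).normal(size=(1000, 512)).astype(np.float32)

clusterer = hdbscan.HDBSCAN(min_cluster_size=10, metric="euclidean")
labels = clusterer.fit_predict(X)  # -1 marks utterances HDBSCAN leaves unclustered
```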
## Dataset Structure
### Data Fields
The dataset includes the following fields:
- **audio**: The audio recording of the spoken Bambara utterance, loaded via the Hugging Face `datasets` library as an array together with its file path and sampling rate. Each audio file corresponds to a single utterance.
- **bambara**: A string field containing the transcription of the spoken text in the Bambara language. This transcription corresponds to the content of the audio file.
- **french**: A string field with the French translation of the Bambara text. This provides a parallel corpus for those interested in bilingual applications.
- **duration**: A float64 field representing the duration of the audio clip in seconds. It gives an indication of the length of the spoken utterance.
- **speaker_embeddings**: A sequence field holding the numerical vector that represents the speaker's voice characteristics. This embedding can be used for speaker identification or for distinguishing between different speakers in the dataset.
- **speaker_id**: An int32 field indicating the cluster ID assigned to the speaker by the HDBSCAN algorithm. This ID helps to identify all utterances from the same speaker across the dataset.
### Data Instances
An example from the dataset looks like this:
```python
{
"audio": Audio({"array": [-2.5, 35...], "path": "path/to/audio.wav", "sampling_rate": 48000}),
"bambara": "Jigi, i bolo degunnen don wa ?",
"french": "Jigi, es-tu occupé ?",
"duration": 2.646,
"speaker_embeddings": [-2.564516305923462, -20.928389595581055, ...],
"speaker_id": 5
}
```
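To load and inspect such an example with the Hugging Face `datasets` library, a sketch like the following should work (it assumes the dataset is hosted under `oza75/bambara-tts`; any config name from the metadata above can be substituted):

```python
from datasets import load_dataset

# Configs: "default", "denoised", "enhanced", "jeli_asr", "jeli_asr_denoised",
# "jeli_asr_enhanced", "mali_pense", "mali_pense_denoised", "mali_pense_enhanced".
ds = load_dataset("oza75/bambara-tts", "denoised", split="train")

sample = ds[0]
print(sample["bambara"], "->", sample["french"])
print(f'{sample["duration"]:.2f}s, speaker {sample["speaker_id"]}')

audio = sample["audio"]  # decoded to {"array", "path", "sampling_rate"}
print(audio["sampling_rate"], audio["array"].shape)
```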
### Usage
The dataset is designed for a variety of uses in the field of speech technology, including:
- **Text-to-Speech Synthesis:** Researchers and developers can utilize this dataset to train and fine-tune TTS models capable of converting Bambara text into natural-sounding speech (see the data-preparation sketch after this list).
- **Speech Recognition:** The audio samples can aid in the development of Automatic Speech Recognition (ASR) systems that transcribe Bambara speech.
- **Linguistic Research:** Linguists can explore the phonetic and prosodic features of Bambara speech.
- **Educational Content Creation:** Educators and content creators can develop voice-enabled educational resources in Bambara.
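As a concrete starting point for TTS fine-tuning, one common preparation step is to keep moderately long clips from speaker clusters that have enough data. Continuing from the loading snippet above, with purely illustrative thresholds:

```python
from collections import Counter

# Count utterances per inferred speaker and keep well-populated clusters,
# skipping HDBSCAN's noise label (-1) if it is present.
counts = Counter(ds["speaker_id"])
keep = {sid for sid, n in counts.items() if sid != -1 and n >= 100}

tts_subset = ds.filter(
    lambda ex: 1.0 <= ex["duration"] <= 15.0 and ex["speaker_id"] in keep
)
hours = sum(tts_subset["duration"]) / 3600
print(f"{len(tts_subset)} utterances, {hours:.1f} hours of speech")
```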
# Acknowledgements
This project was made possible through the contributions of various individuals and organizations dedicated to preserving and promoting the **Bambara language and culture**.
We extend our gratitude to [Mali Pense](https://www.mali-pense.net/) for providing the source materials, [Rikorose/DeepFilterNet](https://github.com/Rikorose/DeepFilterNet) for the noise reduction technology, and [Pyannote](https://huggingface.co/pyannote) for the speaker embedding model.
# Other Bambara Datasets
- [Bambara-French parallel dataset](https://www.kaggle.com/datasets/ozaresearch1/bambara-french-parallel-dataset)
- [Corpus Bambara de Référence](http://cormand.huma-num.fr/index.html)
- [Dictionaries & other resources](https://www.lexilogos.com/bambara_dictionnaire.htm)