---
dataset_info:
  features:
  - name: speaker_id
    dtype: string
  - name: transcription_id
    dtype: int64
  - name: text
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 44100
  splits:
  - name: train
    num_bytes: 12163543668.45736
    num_examples: 18863
  download_size: 10460673849
  dataset_size: 12163543668.45736
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc0-1.0
task_categories:
- text-to-speech
language:
- da
pretty_name: CoRal TTS
size_categories:
- 10K<n<100K
---

# Dataset Card for CoRal TTS

## Dataset Description

- **Repository:** <https://github.com/alexandrainst/coral>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- **Size of downloaded dataset files:** 14.63 GB
- **Size of the generated dataset:** 15.25 GB
- **Total amount of disk used:** 29.88 GB

### Dataset Summary

This dataset consists of recordings by two professional Danish speakers, one female and one male, who each recorded roughly 17 hours of Danish speech.

The dataset is part of the [CoRal project](https://alexandra.dk/coral/), which is funded by the [Danish Innovation Fund](https://innovationsfonden.dk/en).

The text data was selected by the [Alexandra Institute](https://alexandra.dk/about-the-alexandra-institute/) ([GitHub repo for the dataset creation](https://github.com/alexandrainst/tts_text)) and consists of sentences from [sundhed.dk](https://sundhed.dk/), [borger.dk](https://borger.dk/), names of bus stops and stations, manually filtered Reddit comments, and dates and times.

The audio data was recorded by the public institution [Nota](https://nota.dk/), which is part of the Danish Ministry of Culture.

### Supported Tasks and Leaderboards

Speech synthesis is the intended task for this dataset. No leaderboard is active at this point.

### Languages

The dataset is available in Danish (`da`).

## Dataset Structure

### Data Instances

- **Size of downloaded dataset files:** 14.63 GB
- **Size of the generated dataset:** 15.25 GB
- **Total amount of disk used:** 29.88 GB

An example from the dataset looks as follows.

```
{
    'speaker_id': 'mic',
    'transcription_id': 0,
    'text': '26 rigtige.',
    'audio': {
        'path': 'mic_00001.wav',
        'array': array([-0.00054932, -0.00054932, -0.00061035, ...,  0.00027466,
                         0.00036621,  0.00030518]),
        'sampling_rate': 44100
    }
}
```
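
Such an example can be reproduced with the `datasets` library. A minimal sketch, assuming the dataset is hosted on the Hugging Face Hub under the hypothetical repository id `alexandrainst/coral-tts` (substitute the actual repository id of this dataset):

```python
from datasets import load_dataset

# The repository id below is an assumption for illustration; replace it with
# the actual Hugging Face Hub id of this dataset.
dataset = load_dataset("alexandrainst/coral-tts", split="train")

# Each example is a dict with 'speaker_id', 'transcription_id', 'text' and a
# decoded 'audio' entry containing the path, waveform array and sampling rate.
sample = dataset[0]
print(sample["speaker_id"], sample["text"])
print(sample["audio"]["sampling_rate"])  # 44100
```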

### Data Fields

The data fields are the same among all splits.

- `speaker_id`: a `string` feature.
- `transcription_id`: an `int64` feature.
- `text`: a `string` feature.
- `audio`: an `Audio` feature with a sampling rate of 44,100 Hz (see the resampling sketch below).
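
Because the recordings are stored at 44.1 kHz while many TTS models are trained at a lower rate, the `audio` column can be resampled on the fly by casting it with `datasets.Audio`. A minimal sketch, assuming a target rate of 22,050 Hz and the same hypothetical repository id as above:

```python
from datasets import Audio, load_dataset

# The repository id is an assumption for illustration; replace it with the
# actual Hugging Face Hub id of this dataset.
dataset = load_dataset("alexandrainst/coral-tts", split="train")

# Decode (and resample) the audio at 22,050 Hz instead of the native
# 44,100 Hz. The target rate is only an example; use whatever rate your
# TTS model expects.
dataset = dataset.cast_column("audio", Audio(sampling_rate=22_050))

print(dataset[0]["audio"]["sampling_rate"])  # 22050
```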

### Dataset Statistics

There are 18,863 samples in the dataset.
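
The card does not break the samples down per speaker, but a per-speaker count can be computed directly from the `speaker_id` column. A minimal sketch, again using the hypothetical repository id from the examples above:

```python
from collections import Counter

from datasets import load_dataset

# The repository id is an assumption for illustration; replace it with the
# actual Hugging Face Hub id of this dataset.
dataset = load_dataset("alexandrainst/coral-tts", split="train")

# Reading a non-audio column does not decode any audio files, so this only
# touches the metadata of the 18,863 examples.
speaker_counts = Counter(dataset["speaker_id"])
print(speaker_counts)
```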

## Additional Information

### Dataset Curators

[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [Alexandra Institute](https://alexandra.dk/) uploaded the dataset to the Hugging Face Hub.

### Licensing Information

The dataset is licensed under the [CC0 license](https://creativecommons.org/share-your-work/public-domain/cc0/).