---
pretty_name: NENA Speech Dataset 1.0 (test)
annotations_creators:
- crowdsourced
- Geoffrey Khan
language_creators:
- crowdsourced
language:
- aii
- cld
- huy
- lsd
- trg
- aij
- bhn
- hrt
- kqd
- syn
license:
- cc0-1.0
multilinguality:
- multilingual
task_categories:
- automatic-speech-recognition
- text-to-speech
- translation
size_categories:
- 10K<n<100K
- 1K<n<10K
- n<1K
---
# Dataset Card for NENA Speech Dataset 1.0 (test)
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [How to Use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
<!-- - [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations) -->
- [Building the Dataset](#building-the-dataset)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Data Preprocessing](#data-preprocessing)
<!-- - [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations) -->
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## ⚠️ This is a temporary repository that will be replaced by the end of 2023
## Dataset Summary
NENA Speech is a multimodal dataset to help teach machines how real people speak the Northeastern Neo-Aramaic (NENA) dialects.
The NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.
NENA Speech consists of multimodal examples of speech in the NENA dialects. While all documented NENA dialects are included, not all have data yet, and some never will, due to the recent loss of their final speakers.
## Dataset Description
- **Homepage**: https://crowdsource.nenadb.dev/
- **Point of Contact:** [Matthew Nazari](mailto:matthewnazari@college.harvard.edu)
## Languages
The NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.
Speakers of the Christian dialects call their language Assyrian or Chaldean in English; in their own language they use several different terms (e.g. suráy, sureth, ḥadiṯan, senaya). Speakers of the Jewish dialects call their language lišana deni, lišanət noshan, lišana nosha, or lišana didan, all meaning "our language". Some names reflect an awareness of it being a specifically Jewish language (e.g. lišan hozaye, hulaula).
NENA Speech has a subset for each of the more than 150 documented NENA dialects. Not all dialects have examples available yet, and some never will, due to the loss of their final speakers in recent years.
## How to Use
The `datasets` library allows you to load and preprocess the dataset in pure Python, at scale. The dataset can be downloaded and prepared on your local drive with a single call to the `load_dataset` function.
For example, simply specify the corresponding language config name (e.g., "urmi (christian)" for the dialect of the Assyrian Christians of Urmi):
```python
from datasets import load_dataset
nena_speech = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
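If you prefer not to download the full config up front, `datasets` can also stream examples on the fly. A minimal sketch:
```python
from datasets import load_dataset

# Stream examples one at a time instead of downloading the whole config
nena_speech = load_dataset(
    "mnazari/nena_speech_1_0_test", "urmi (christian)",
    split="train", streaming=True
)
print(next(iter(nena_speech)))
```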
## Dataset Structure
### Data Instances
The NENA Speech dataset is a multimodal dataset that consists of three different kinds of examples:
1. **Unlabeled speech examples:** these contain audio of speech (`audio`) but no accompanying transcription (`transcription`) or translation (`translation`). This is useful for representation learning.
2. **Transcribed speech examples:** these contain both audio and transcription of speech. These are useful for machine learning tasks like automatic speech recognition and speech synthesis.
3. **Transcribed and translated speech examples:** these kinds of examples contain audio, transcription, and translation of speech. These are useful for tasks like multimodal translation.
Make sure to filter for the kind of examples you need for your task before using the dataset. An individual example looks as follows:
```python
{
    "transcription": "gu-mdìta.ˈ",
    "translation": "in the town.",
    "audio": {
        "path": "et/train/nena_speech_0uk14ofpom196aj.mp3",
        "array": array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
        "sampling_rate": 48000
    },
    "locale": "IRN",
    "proficiency": "proficient as mom",
    "age": "70's",
    "crowdsourced": True,
    "unlabeled": True,
    "interrupted": True,
    "client_id": "gwurt1g1ln",
    "path": "et/train/nena_speech_0uk14ofpom196aj.mp3"
}
```
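As an illustration, you can tally how many examples of each kind a config contains. This is a sketch that assumes unlabeled examples carry empty `transcription` and `translation` values, which is an assumption about the storage format, not documented above:
```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")

# Reading the text columns directly avoids decoding any audio
kinds = Counter()
for t, tr in zip(ds["transcription"], ds["translation"]):
    if t and tr:
        kinds["transcribed and translated"] += 1
    elif t:
        kinds["transcribed only"] += 1
    else:
        kinds["unlabeled"] += 1
print(kinds)
```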
### Data Fields
- `transcription (string)`: The transcription of what was spoken (e.g. `"beta"`)
- `translation (string)`: The translation of what was spoken in English (e.g. `"house"`)
- `audio (dict)`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`) the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column: `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`. To work at a different sampling rate, see the resampling sketch after this list.
- `locale (string)`: The locale of the speaker
- `proficiency (string)`: The proficiency of the speaker
- `age (string)`: The age of the speaker (e.g. `"20's"`, `"50's"`, `"100+"`)
- `crowdsourced (bool)`: Indicates whether the example was crowdsourced as opposed to collected from existing language documentation resources
- `interrupted (bool)`: Indicates whether the example was interrupted with the speaker making sound effects or switching into another language
- `client_id (string)`: An id for which client (voice) made the recording
- `path (string)`: The path to the audio file
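The instance shown above is stored at 48 kHz, while many speech models expect 16 kHz input. As referenced in the `audio` field description, here is a minimal resampling sketch using `datasets.Audio` (the 16 kHz target is illustrative):
```python
from datasets import load_dataset, Audio

ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")

# Re-declare the audio column so examples are decoded at 16 kHz on access
ds = ds.cast_column("audio", Audio(sampling_rate=16000))
print(ds[0]["audio"]["sampling_rate"])  # 16000
```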
### Data Splits
The examples are divided into three splits:
1. **train:** the training split (80%)
2. **dev:** the validation split (10%)
3. **test:** the test split (10%)
All three splits contain only examples that have been reviewed and deemed of high quality.
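A specific split can be requested directly. A sketch, assuming the split names above are exposed as-is:
```python
from datasets import load_dataset

# Load only the held-out test split of a given dialect config
test_ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="test")
```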
## Dataset Creation
<!-- ### Curation Rationale
[Needs More Information]
### Source Data
#### Language Documentation Resources
[Needs More Information]
#### Webscraping Facebook
[Needs More Information]
#### Crowdsourcing
[Needs More Information]
### Annotations
[Needs More Information] -->
### Building the Dataset
The NENA Speech dataset itself is built using `build.py`.
First, install the necessary requirements.
```bash
pip install -r requirements.txt
```
Next, build the dataset.
```bash
python build.py --build
```
Finally, push to the HuggingFace dataset repository.
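A minimal sketch of that last step using the `huggingface_hub` client (the `data` folder path is an assumption about the build output, not taken from `build.py`):
```python
from huggingface_hub import HfApi

api = HfApi()
# Assumed layout: build.py writes the prepared dataset into ./data
api.upload_folder(
    folder_path="data",
    repo_id="mnazari/nena_speech_1_0_test",
    repo_type="dataset",
)
```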
## Personal and Sensitive Information
The dataset consists of recordings from people who have donated their voice online. You agree not to attempt to determine the identity of speakers in the NENA Speech dataset.
## Data Preprocessing
The dataset consists of three different kinds of examples (see [Data Instances](#data-instances)).
Make sure to filter for the kind of examples you need for your task before using the dataset. For example, for automatic speech recognition you will want to filter for examples with transcriptions.
In most tasks, you will also want to filter out examples that are interrupted (e.g. by the speaker making sound effects or switching into another language), as shown below.
```python
from datasets import load_dataset

ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")

def filter_for_asr(example):
    # Keep only transcribed, uninterrupted speech
    return example["transcription"] and not example["interrupted"]

ds = ds.filter(filter_for_asr, desc="filter dataset")
```
Transcriptions include markers of linguistic and acoustic features (e.g. word stress, nuclear stress, intonation group markers, vowel length), which you may want to remove for certain tasks.
```python
from datasets import load_dataset

ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")

def prepare_dataset(batch):
    # Strip stress, length, and intonation markers as well as punctuation
    chars_to_remove = ['ˈ', '̀', '́', '̄', '̆', '.', ',', '?', '!']
    for char in chars_to_remove:
        batch["transcription"] = batch["transcription"].replace(char, "")
    return batch

ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
<!-- ## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information] -->
## Additional Information
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/).
### Citation Information
This work has not yet been published.