---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- es
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: 'CIEMPIESS LIGHT CORPUS: Audio and Transcripts of Mexican Spanish Broadcast
Conversations.'
tags:
- ciempiess
- spanish
- mexican spanish
- ciempiess project
- ciempiess-unam project
dataset_info:
config_name: ciempiess_light
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: speaker_id
dtype: string
- name: gender
dtype: string
- name: duration
dtype: float32
- name: normalized_text
dtype: string
splits:
- name: train
num_bytes: 1665852411.075
num_examples: 16663
download_size: 1122395917
dataset_size: 1665852411.075
configs:
- config_name: ciempiess_light
data_files:
- split: train
path: ciempiess_light/train-*
default: true
---
# Dataset Card for ciempiess_light
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CIEMPIESS-UNAM Project](https://ciempiess.org/)
- **Repository:** [CIEMPIESS LIGHT at LDC](https://catalog.ldc.upenn.edu/LDC2017S23)
- **Paper:** [CIEMPIESS: A New Open-Sourced Mexican Spanish Radio Corpus](http://www.lrec-conf.org/proceedings/lrec2014/pdf/182_Paper.pdf)
- **Point of Contact:** [Carlos Mena](mailto:carlos.mena@ciempiess.org)
### Dataset Summary
The CIEMPIESS LIGHT is a radio corpus designed to create acoustic models for automatic speech recognition. It is made up of recordings of spontaneous conversations in Mexican Spanish between a radio moderator and his guests. It is an enhanced version of the CIEMPIESS Corpus [(LDC item LDC2015S07)](https://catalog.ldc.upenn.edu/LDC2015S07).
CIEMPIESS LIGHT is "light" because it omits many of the files of the first version of CIEMPIESS, and it is "enhanced" because it incorporates many improvements, some of them suggested by our community of users, that make this version more convenient for modern speech recognition engines.
The CIEMPIESS LIGHT Corpus was created at the [Laboratorio de Tecnologías del Lenguaje](https://labteclenguaje.wixsite.com/labteclenguaje/inicio) of the [Facultad de Ingeniería (FI)](https://www.ingenieria.unam.mx/) at the [Universidad Nacional Autónoma de México (UNAM)](https://www.unam.mx/) between 2015 and 2016 by Carlos Daniel Hernández Mena, supervised by José Abel Herrera Camacho, head of the laboratory.
CIEMPIESS is the acronym for:
"Corpus de Investigación en Español de México del Posgrado de Ingeniería Eléctrica y Servicio Social".
### Example Usage
The CIEMPIESS LIGHT contains only the train split:
```python
from datasets import load_dataset
ciempiess_light = load_dataset("ciempiess/ciempiess_light")
```
It is also valid to do:
```python
from datasets import load_dataset
ciempiess_light = load_dataset("ciempiess/ciempiess_light", split="train")
```
### Supported Tasks
automatic-speech-recognition: The dataset can be used to test a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
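WER is the word-level edit distance between a reference transcript and a hypothesis, divided by the number of reference words. A minimal sketch (the reference/hypothesis pair below is hypothetical, chosen only for illustration):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted word out of four reference words -> WER of 0.25
print(wer("estamos con el profesor", "estamos con profesor"))  # 0.25
```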
### Languages
The language of the corpus is Spanish with the accent of Central Mexico.
## Dataset Structure
### Data Instances
```python
{
'audio_id': 'CMPL_F_32_11ANG_00003',
'audio': {
'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/5acd9ef350f022d5acb7f2a4f9de90371ffd5552c8d1bf849ca16a83e582fe4b/train/female/F_32/CMPL_F_32_11ANG_00003.flac',
'array': array([ 6.1035156e-05, -2.1362305e-04, -4.8828125e-04, ...,
3.3569336e-04, 6.1035156e-04, 0.0000000e+00], dtype=float32),
'sampling_rate': 16000
},
'speaker_id': 'F_32',
'gender': 'female',
'duration': 3.256999969482422,
'normalized_text': 'estamos con el profesor javier estejel vargas'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `speaker_id` (string) - id of speaker
* `gender` (string) - gender of speaker (male or female)
* `duration` (float32) - duration of the audio file in seconds.
* `normalized_text` (string) - normalized audio segment transcription
### Data Splits
The corpus has only a train split, which contains a total of 16663 speech files from 53 male speakers and 34 female speakers, with a total duration of 18 hours and 25 minutes.
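These totals are consistent with the per-gender file counts and durations reported in the Curation Rationale below; a quick arithmetic cross-check:

```python
# Per-gender figures from the corpus documentation
male_files, female_files = 12521, 4142
male_min = 12 * 60 + 41    # 12 h 41 min of male speech, in minutes
female_min = 5 * 60 + 44   # 5 h 44 min of female speech, in minutes

# File counts add up to the stated total of 16663
assert male_files + female_files == 16663

# Durations add up to the stated total of 18 h 25 min
hours, minutes = divmod(male_min + female_min, 60)
print(hours, minutes)  # 18 25
```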
## Dataset Creation
### Curation Rationale
The CIEMPIESS LIGHT (CL) Corpus has the following characteristics:
* The CL has a total of 16663 audio files from 53 male speakers and 34 female speakers, with a total duration of 18 hours and 25 minutes.
* The audio files from male speakers number 12521, with a total duration of 12 hours and 41 minutes; the audio files from female speakers number 4142, with a total duration of 5 hours and 44 minutes. The CL is therefore not gender-balanced.
* Every audio file in the CL has a duration of approximately 2 to 10 seconds.
* Data in the CL is classified by gender and also by speaker, so one can easily select the audios of a particular set of speakers for experiments.
* Audio files in the CL and in the first [CIEMPIESS](https://catalog.ldc.upenn.edu/LDC2015S07) are all of the same type: in both, speakers talk about legal topics, as well as matters related to [UNAM](https://www.unam.mx/) and the [Facultad de Derecho de la UNAM](https://www.derecho.unam.mx/).
* As in the first CIEMPIESS Corpus, transcriptions in the CL were made by humans.
* Speakers in the CL are not present in any other CIEMPIESS dataset.
* Audio files in the CL are distributed in 16 kHz, 16-bit mono format.
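Because every record carries `speaker_id` and `gender`, gender- or speaker-specific subsets are easy to build. A minimal sketch over hypothetical metadata records mirroring the corpus fields (with the `datasets` library, the equivalent would be `dataset.filter(...)`):

```python
# Hypothetical records imitating the corpus metadata, for illustration only
records = [
    {"audio_id": "CMPL_F_32_11ANG_00003", "speaker_id": "F_32", "gender": "female", "duration": 3.26},
    {"audio_id": "CMPL_M_07_05XYZ_00001", "speaker_id": "M_07", "gender": "male",   "duration": 5.10},
    {"audio_id": "CMPL_F_32_11ANG_00007", "speaker_id": "F_32", "gender": "female", "duration": 2.80},
]

# Select every segment of one particular speaker
f32 = [r for r in records if r["speaker_id"] == "F_32"]

# Or select by gender, e.g. to build a gender-specific training subset
male = [r for r in records if r["gender"] == "male"]

print(len(f32), len(male))  # 2 1
```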
### Source Data
#### Initial Data Collection and Normalization
The CIEMPIESS LIGHT is a radio corpus designed to train acoustic models for automatic speech recognition, and it is made up of recordings of spontaneous conversations in Spanish between a radio moderator and his guests. These recordings were taken in MP3 format from [PODCAST UNAM](http://podcast.unam.mx/) and were produced by [RADIO-IUS](http://www.derecho.unam.mx/cultura-juridica/radio.php), a radio station belonging to [UNAM](https://www.unam.mx/), and by [Mirador Universitario](http://mirador.cuaed.unam.mx/), a TV program that also belongs to UNAM.
### Annotations
#### Annotation process
The annotation process is as follows:
1. A whole podcast is manually segmented, keeping just the portions containing good-quality speech.
2. A second pass of segmentation is performed, this time to separate the speakers and put them in different folders.
3. The resulting speech files, between 2 and 10 seconds long, are transcribed by students from different departments (computing, engineering, linguistics). Most of them are native speakers, but they had no particular training as transcribers.
#### Who are the annotators?
The CIEMPIESS LIGHT Corpus was created through the social service program ["Desarrollo de Tecnologías del Habla"](http://profesores.fi-b.unam.mx/carlos_mena/servicio.html) of the ["Facultad de Ingeniería"](https://www.ingenieria.unam.mx/) (FI) at the ["Universidad Nacional Autónoma de México"](https://www.unam.mx/) (UNAM) between 2015 and 2016, coordinated by Carlos Daniel Hernández Mena, head of the program.
### Personal and Sensitive Information
The dataset could contain names revealing the identity of some speakers; on the other hand, the recordings come from publicly available podcasts, so the participants had no real expectation of anonymity. Nevertheless, you agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is valuable because it contains spontaneous speech.
### Discussion of Biases
The dataset is not gender-balanced: it comprises 53 male speakers and 34 female speakers, and its vocabulary is largely limited to legal topics.
### Other Known Limitations
"CIEMPIESS LIGHT CORPUS" by Carlos Daniel Hernández Mena and Abel Herrera is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) License with the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
### Dataset Curators
The dataset was collected by students belonging to the social service program ["Desarrollo de Tecnologías del Habla"](http://profesores.fi-b.unam.mx/carlos_mena/servicio.html). It was curated by [Carlos Daniel Hernández Mena](https://huggingface.co/carlosdanielhernandezmena) in 2016.
### Licensing Information
[CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
```
@misc{carlosmenaciempiesslight2017,
title={CIEMPIESS LIGHT CORPUS: Audio and Transcripts of Mexican Spanish Broadcast Conversations.},
ldc_catalog_no={LDC2017S23},
DOI={https://doi.org/10.35111/64rg-yk97},
author={Hernandez Mena, Carlos Daniel and Herrera, Abel},
journal={Linguistic Data Consortium, Philadelphia},
year={2017},
url={https://catalog.ldc.upenn.edu/LDC2017S23},
}
```
### Contributions
The authors want to thank Alejandro V. Mena, Elena Vera and Angélica Gutiérrez for their support of the social service program "Desarrollo de Tecnologías del Habla." We also thank the social service students for all their hard work.