carlosdanielhernandezmena committed on
Commit fcffff1
1 Parent(s): 2787525

Adding all the files for the first time including the README
README.md CHANGED
@@ -1,3 +1,204 @@
  ---
- license: cc-by-sa-4.0
  ---
  ---
+ annotations_creators:
+ - expert-generated
+ language:
+ - es
+ language_creators:
+ - other
+ license:
+ - cc-by-sa-4.0
+ multilinguality:
+ - monolingual
+ pretty_name: 'CIEMPIESS BALANCE CORPUS: Audio and Transcripts of Mexican Spanish Broadcast Conversations.'
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ tags:
+ - ciempiess
+ - spanish
+ - mexican spanish
+ - ciempiess project
+ - ciempiess-unam project
+ task_categories:
+ - automatic-speech-recognition
+ task_ids: []
  ---
+
+ # Dataset Card for ciempiess_balance
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+ - **Homepage:** [CIEMPIESS-UNAM Project](http://www.ciempiess.org/)
+ - **Repository:** [CIEMPIESS BALANCE at LDC](https://catalog.ldc.upenn.edu/LDC2018S11)
+ - **Point of Contact:** [Carlos Mena](mailto:carlos.mena@ciempiess.org)
+
58
+ ### Dataset Summary
59
+
60
+ The CIEMPIESS BALANCE Corpus is designed to match with the [CIEMPIESS LIGHT](https://huggingface.co/datasets/ciempiess/ciempiess_light) Corpus [(LDC2017S23)](https://catalog.ldc.upenn.edu/LDC2017S23). So, "Balance" means that if the CIEMPIESS BALANCE is combined with the CIEMPIESS LIGHT, one will get a gender balanced corpus. To appreciate this, one need to know that the CIEMPIESS LIGHT is by itself, a gender unbalanced corpus of approximately 25% of female speakers and 75% of male speakers. So, the CIEMPIESS BALANCE is a gender unbalanced corpus with approximately 25% of male speakers and 75% of female speakers.
61
+
62
+ Furthermore, the match between the two datasets is more profound than just the number of the speakers. In both corpus speakers are numbered as: F_01, M_01, F_02, M_02, etc. So, the relation between the speakers is that the speech of F_01 in CIEMPIES LIGHT has an approximate amount of time as the speech of M_01 in the CIEMPIESS BALANCE.
63
+
64
+ The consequence of this speaker-to-speaker match is that the CIEMPIESS BALANCE has a size of 18 hours and 20 minutes against the 18 hours and 25 minutes of the CIEMPIESS LIGHT. It is a very good match between them!
65
+
66
+ CIEMPIESS is the acronym for:
67
+
68
+ "Corpus de Investigación en Español de México del Posgrado de Ingeniería Eléctrica y Servicio Social".
69
+
+ ### Example Usage
+ The CIEMPIESS BALANCE contains only the train split:
+ ```python
+ from datasets import load_dataset
+ ciempiess_balance = load_dataset("ciempiess/ciempiess_balance")
+ ```
+ It is also valid to do:
+ ```python
+ from datasets import load_dataset
+ ciempiess_balance = load_dataset("ciempiess/ciempiess_balance", split="train")
+ ```
+
+ ### Supported Tasks
+ automatic-speech-recognition: The dataset can be used to test a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER).
+
+ ### Languages
+ The language of the corpus is Spanish with the accent of Central Mexico.
+
+ ## Dataset Structure
+
+ ### Data Instances
+ ```python
+ {
+     'audio_id': 'CMPB_F_41_01CAR_00011',
+     'audio': {
+         'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/6564823bd50fe590ce15086c22ddf7efe2302a8f988f12469f61940f2b88c051/train/female/F_41/CMPB_F_41_01CAR_00011.flac',
+         'array': array([0.00283813, 0.00442505, 0.00720215, ..., 0.00543213, 0.00570679,
+                0.00952148], dtype=float32),
+         'sampling_rate': 16000
+     },
+     'speaker_id': 'F_41',
+     'gender': 'female',
+     'duration': 7.519000053405762,
+     'normalized_text': 'entonces mira oye pasa esto y tú así de ay pues déjame leer porque ni sé no así pasa porque pues'
+ }
+ ```
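The `audio_id` in the instance above encodes the corpus tag, the speaker's gender and number, a program tag, and a segment number. As a rough illustration (this layout is inferred from the one example instance and the speaker numbering described in the summary, not from official documentation), it can be unpacked like this:

```python
# Sketch: derive speaker metadata from a segment ID such as
# "CMPB_F_41_01CAR_00011". The assumed layout is
# <corpus>_<gender letter>_<speaker number>_<program>_<segment>.
def parse_audio_id(audio_id: str) -> dict:
    corpus, gender_letter, speaker_num, program, segment = audio_id.split("_")
    return {
        "speaker_id": f"{gender_letter}_{speaker_num}",  # e.g. "F_41"
        "gender": {"F": "female", "M": "male"}[gender_letter],
        "program": program,
        "segment": segment,
    }

info = parse_audio_id("CMPB_F_41_01CAR_00011")
# info["speaker_id"] -> "F_41", info["gender"] -> "female"
```

Since `speaker_id` and `gender` are already provided as separate fields, this is only useful for sanity checks or for grouping segments by program.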
105
+
106
+ ### Data Fields
107
+ * `audio_id` (string) - id of audio segment
108
+ * `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
109
+ * `speaker_id` (string) - id of speaker
110
+ * `gender` (string) - gender of speaker (male or female)
111
+ * `duration` (float32) - duration of the audio file in seconds.
112
+ * `normalized_text` (string) - normalized audio segment transcription
113
+
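A loading script typically ties these fields together through a tab-separated metadata file keyed by `audio_id`. A minimal, self-contained sketch (the two rows and their values are invented for illustration; only the column names come from the field list above):

```python
import csv
import io

# Hypothetical two-row metadata file; the columns mirror the data fields
# above, the concrete values are made up.
tsv = (
    "audio_id\tspeaker_id\tgender\tduration\tnormalized_text\n"
    "CMPB_F_41_01CAR_00011\tF_41\tfemale\t7.519\tentonces mira oye\n"
    "CMPB_M_01_02LEY_00003\tM_01\tmale\t5.102\tpues la facultad\n"
)

# Index the rows by audio_id so each audio file can be joined
# with its transcription and speaker information.
metadata = {row["audio_id"]: row
            for row in csv.DictReader(io.StringIO(tsv), delimiter="\t")}
```

Note that `duration` arrives as a string from the TSV and must be cast to float by the consumer.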
+ ### Data Splits
+
+ The corpus contains only the train split, which has a total of 8555 speech files from 53 female speakers and 34 male speakers, with a total duration of 18 hours and 20 minutes.
+
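The gender-balance claim from the summary can be sanity-checked with simple arithmetic, assuming the CIEMPIESS LIGHT matches its stated ~25% female / 75% male split exactly (its exact per-gender durations are not given here, so this is an approximation):

```python
# Per-gender durations for CIEMPIESS BALANCE, in minutes, as stated
# in the curation rationale of this card.
CB_FEMALE_MIN = 12 * 60 + 40   # 12 h 40 min of female speech
CB_MALE_MIN = 5 * 60 + 40      # 5 h 40 min of male speech

# CIEMPIESS LIGHT: total 18 h 25 min, approximated as 25% female / 75% male.
CL_TOTAL_MIN = 18 * 60 + 25
CL_FEMALE_MIN = 0.25 * CL_TOTAL_MIN
CL_MALE_MIN = 0.75 * CL_TOTAL_MIN

female = CB_FEMALE_MIN + CL_FEMALE_MIN
male = CB_MALE_MIN + CL_MALE_MIN
female_share = female / (female + male)
```

Under this approximation the combined corpus comes out at roughly 47% female speech, close to balanced.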
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The CIEMPIESS BALANCE (CB) Corpus has the following characteristics:
+
+ * The CB has a total of 8555 audio files from 53 female speakers and 34 male speakers. It has a total duration of 18 hours and 20 minutes.
+
+ * The total number of audio files that come from male speakers is 2447, with a total duration of 5 hours and 40 minutes. The total number of audio files that come from female speakers is 6108, with a total duration of 12 hours and 40 minutes.
+
+ * Every audio file in the CB has a duration of approximately 5 to 10 seconds.
+
+ * Speakers in the CB and the CIEMPIESS LIGHT (CL) are different persons. In fact, speakers in the CB are not present in any other CIEMPIESS dataset.
+
+ * The CL is slightly bigger (18 hours, 25 minutes) than the CB (18 hours, 20 minutes).
+
+ * Data in the CB is classified by gender and also by speaker, so one can easily select audios from a particular set of speakers for experiments.
+
+ * Audio files in the CL and the CB are all of the same type. In both, speakers talk about legal and lawyer issues. They also talk about things related to the [UNAM University](https://www.unam.mx/) and the ["Facultad de Derecho de la UNAM"](https://www.derecho.unam.mx/).
+
+ * As in the CL, transcriptions in the CB were made by humans.
+
+ * Audio files in the CB are distributed in a 16kHz@16bit mono format.
+
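The 16 kHz, 16-bit mono format fixes the uncompressed data rate, which is handy for estimating memory use when decoding batches. A quick sketch (on disk the FLAC files are compressed, so actual file sizes are smaller than this):

```python
# Uncompressed PCM data rate for 16 kHz, 16-bit, mono audio.
SAMPLE_RATE = 16_000       # samples per second
BYTES_PER_SAMPLE = 2       # 16 bits
CHANNELS = 1               # mono

bytes_per_second = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS  # 32000

# A clip like the 7.519 s example instance decodes to about 235 KiB of PCM.
clip_bytes = int(7.519 * bytes_per_second)
```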
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The CIEMPIESS BALANCE is a Radio Corpus designed to train acoustic models for automatic speech recognition, and it is made out of recordings of spontaneous conversations in Spanish between a radio moderator and his guests. Most of the speech in these conversations has the accent of Central Mexico.
+
+ All the recordings that constitute the CIEMPIESS BALANCE come from [RADIO-IUS](https://www.derecho.unam.mx/cultura-juridica/radio.php), a radio station belonging to [UNAM](https://www.unam.mx/). The recordings were donated by Lic. Cesar Gabriel Alanis Merchand and Mtro. Ricardo Rojas Arevalo from the [Facultad de Derecho de la UNAM](https://www.derecho.unam.mx/) on the condition that they be used for academic and research purposes only.
+
+ ### Annotations
+
+ #### Annotation process
+
+ The annotation process is as follows:
+
+ 1. A whole podcast is manually segmented, keeping just the portions containing good-quality speech.
+ 2. A second pass of segmentation is performed, this time to separate speakers and put them in different folders.
+ 3. The resulting speech files, between 2 and 10 seconds long, are transcribed by students from different departments (computing, engineering, linguistics). Most of them are native speakers, but without particular training as transcribers.
+
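The length policy in the steps above (segments of a few seconds each) can be sketched as a simple chunking routine. This is purely illustrative; the actual segmentation was done manually around good-quality speech:

```python
# Illustrative sketch of cutting a long recording into 2-10 s chunks at
# 16 kHz. Only the length policy comes from the steps above; the
# chunking strategy itself is invented for illustration.
SAMPLE_RATE = 16_000
MIN_S, MAX_S = 2, 10

def chunk_lengths(total_samples: int) -> list:
    """Split a recording into chunks of at most MAX_S seconds,
    dropping a final remainder shorter than MIN_S seconds."""
    chunk = MAX_S * SAMPLE_RATE
    lengths = []
    for start in range(0, total_samples, chunk):
        n = min(chunk, total_samples - start)
        if n >= MIN_S * SAMPLE_RATE:
            lengths.append(n)
    return lengths

# A 63-second recording yields six 10 s chunks plus a 3 s remainder.
lengths = chunk_lengths(63 * SAMPLE_RATE)
```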
+ #### Who are the annotators?
+
+ The CIEMPIESS BALANCE Corpus was created by the social service program ["Desarrollo de Tecnologías del Habla"](http://profesores.fi-b.unam.mx/carlos_mena/servicio.html) of the ["Facultad de Ingeniería"](https://www.ingenieria.unam.mx/) (FI) of the ["Universidad Nacional Autónoma de México"](https://www.unam.mx/) (UNAM) between 2016 and 2018, under Carlos Daniel Hernández Mena, head of the program.
+
+ ### Personal and Sensitive Information
+
+ The dataset could contain names revealing the identity of some speakers; on the other hand, the recordings come from publicly available podcasts, so the participants had no real expectation of anonymity. In any case, you agree not to attempt to determine the identity of speakers in this dataset.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ This dataset is valuable because it contains spontaneous speech.
+
+ ### Discussion of Biases
+
+ The dataset is not gender balanced. It comprises 53 female speakers and 34 male speakers, and the vocabulary is limited to legal issues.
+
+ ### Other Known Limitations
+
+ "CIEMPIESS BALANCE CORPUS" by Carlos Daniel Hernández Mena is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) License, with the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The dataset was collected by students belonging to the social service program ["Desarrollo de Tecnologías del Habla"](http://profesores.fi-b.unam.mx/carlos_mena/servicio.html). It was curated by [Carlos Daniel Hernández Mena](https://huggingface.co/carlosdanielhernandezmena) in 2018.
+
+ ### Licensing Information
+ [CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)
+
+ ### Citation Information
+ ```
+ @misc{carlosmenaciempiessbalance2018,
+     title={CIEMPIESS BALANCE CORPUS: Audio and Transcripts of Mexican Spanish Broadcast Conversations.},
+     ldc_catalog_no={LDC2018S11},
+     DOI={https://doi.org/10.35111/rfmw-n126},
+     author={Hernandez Mena, Carlos Daniel},
+     journal={Linguistic Data Consortium, Philadelphia},
+     year={2018},
+     url={https://catalog.ldc.upenn.edu/LDC2018S11},
+ }
+ ```
+ ### Contributions
+
+ The authors wish to thank Alejandro V. Mena, Elena Vera and Angélica Gutiérrez for their support of the social service program "Desarrollo de Tecnologías del Habla." We also thank the social service students for all their hard work.
+
ciempiess_balance.py ADDED
@@ -0,0 +1,123 @@
+ from collections import defaultdict
+ import os
+ import json
+ import csv
+
+ import datasets
+
+ _NAME = "ciempiess_balance"
+ _VERSION = "1.0.0"
+ _AUDIO_EXTENSIONS = ".flac"
+
+ _DESCRIPTION = """
+ CIEMPIESS BALANCE is a Radio Corpus designed to create acoustic models for automatic speech recognition. It is "balance" because it is designed to balance the CIEMPIESS LIGHT, which means that if both corpora are combined, one will get a gender balanced corpus.
+ """
+
+ _CITATION = """
+ @misc{carlosmenaciempiessbalance2018,
+     title={CIEMPIESS BALANCE CORPUS: Audio and Transcripts of Mexican Spanish Broadcast Conversations.},
+     ldc_catalog_no={LDC2018S11},
+     DOI={https://doi.org/10.35111/rfmw-n126},
+     author={Hernandez Mena, Carlos Daniel},
+     journal={Linguistic Data Consortium, Philadelphia},
+     year={2018},
+     url={https://catalog.ldc.upenn.edu/LDC2018S11},
+ }
+ """
+
+ _HOMEPAGE = "https://catalog.ldc.upenn.edu/LDC2018S11"
+
+ _LICENSE = "CC-BY-SA-4.0, See https://creativecommons.org/licenses/by-sa/4.0/"
+
+ _BASE_DATA_DIR = "corpus/"
+ _METADATA_TRAIN = os.path.join(_BASE_DATA_DIR, "files", "metadata_train.tsv")
+
+ _TARS_TRAIN = os.path.join(_BASE_DATA_DIR, "files", "tars_train.paths")
+
+ class CiempiessBalanceConfig(datasets.BuilderConfig):
+     """BuilderConfig for the CIEMPIESS BALANCE Corpus."""
+
+     def __init__(self, name, **kwargs):
+         # There is only one configuration, so the name is fixed.
+         name = _NAME
+         super().__init__(name=name, **kwargs)
+
+ class CiempiessBalance(datasets.GeneratorBasedBuilder):
+     """CIEMPIESS BALANCE Corpus."""
+
+     VERSION = datasets.Version(_VERSION)
+     BUILDER_CONFIGS = [
+         CiempiessBalanceConfig(
+             name=_NAME,
+             version=datasets.Version(_VERSION),
+         )
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "audio_id": datasets.Value("string"),
+                 "audio": datasets.Audio(sampling_rate=16000),
+                 "speaker_id": datasets.Value("string"),
+                 "gender": datasets.Value("string"),
+                 "duration": datasets.Value("float32"),
+                 "normalized_text": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         metadata_train = dl_manager.download_and_extract(_METADATA_TRAIN)
+         tars_train = dl_manager.download_and_extract(_TARS_TRAIN)
+
+         # Read the list of tar archives that hold the audio files.
+         hash_tar_files = defaultdict(dict)
+         with open(tars_train, 'r') as f:
+             hash_tar_files['train'] = [path.replace('\n', '') for path in f]
+
+         hash_meta_paths = {"train": metadata_train}
+         audio_paths = dl_manager.download(hash_tar_files)
+
+         splits = ["train"]
+         # In streaming mode the archives are not extracted locally.
+         local_extracted_audio_paths = (
+             dl_manager.extract(audio_paths) if not dl_manager.is_streaming else
+             {
+                 split: [None] * len(audio_paths[split]) for split in splits
+             }
+         )
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "audio_archives": [dl_manager.iter_archive(archive) for archive in audio_paths["train"]],
+                     "local_extracted_archives_paths": local_extracted_audio_paths["train"],
+                     "metadata_paths": hash_meta_paths["train"],
+                 }
+             ),
+         ]
+
+     def _generate_examples(self, audio_archives, local_extracted_archives_paths, metadata_paths):
+         features = ["speaker_id", "gender", "duration", "normalized_text"]
+
+         # Index the metadata rows by audio_id for fast lookup.
+         with open(metadata_paths) as f:
+             metadata = {x["audio_id"]: x for x in csv.DictReader(f, delimiter="\t")}
+
+         for audio_archive, local_extracted_archive_path in zip(audio_archives, local_extracted_archives_paths):
+             for audio_filename, audio_file in audio_archive:
+                 audio_id = audio_filename.split(os.sep)[-1].split(_AUDIO_EXTENSIONS)[0]
+                 path = os.path.join(local_extracted_archive_path, audio_filename) if local_extracted_archive_path else audio_filename
+
+                 yield audio_id, {
+                     "audio_id": audio_id,
+                     **{feature: metadata[audio_id][feature] for feature in features},
+                     "audio": {"path": path, "bytes": audio_file.read()},
+                 }
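The `audio_id` used as the example key in `_generate_examples` is derived from the archive member path by dropping the directory part and the `.flac` extension. A standalone sketch of that derivation, using a hypothetical member path:

```python
import os

_AUDIO_EXTENSIONS = ".flac"

# Same derivation as in _generate_examples, applied to a hypothetical
# member path inside the train archive.
audio_filename = os.path.join("train", "female", "F_41", "CMPB_F_41_01CAR_00011.flac")
audio_id = audio_filename.split(os.sep)[-1].split(_AUDIO_EXTENSIONS)[0]
```

Note that members of a tar archive always use forward slashes, so splitting on `os.sep` implicitly assumes a POSIX-style separator.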
corpus/files/metadata_train.tsv ADDED
The diff for this file is too large to render. See raw diff
 
corpus/files/tars_train.paths ADDED
@@ -0,0 +1 @@
+ corpus/speech/train.tar.gz
corpus/speech/train.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:83380e26879435b313ec9d197cf9d4d6f7ee59e016e13da81b85c28a6b878f71
+ size 1295843169