---
annotations_creators:
- crowdsourced
language_creators:
- found
source_datasets:
- extended
language:
- ar
- de
- es
- fr
- hu
- ko
- nl
- pl
- pt
- ru
- tr
- vi
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
task_categories:
- audio-classification
- text-classification
- zero-shot-classification
- automatic-speech-recognition
task_ids: []
pretty_name: 'A Multilingual Speech Dataset for SLU and Beyond'
tags:
- spoken-language-understanding
- speech-translation
- speaker-identification
---

# Speech-MASSIVE

## Dataset Description
Speech-MASSIVE is a multilingual Spoken Language Understanding (SLU) dataset comprising the speech counterpart for a portion of the [MASSIVE](https://arxiv.org/abs/2204.08582) textual corpus. Speech-MASSIVE covers 12 languages (Arabic, German, Spanish, French, Hungarian, Korean, Dutch, Polish, European Portuguese, Russian, Turkish, and Vietnamese) from different families and inherits from MASSIVE the annotations for the intent prediction and slot-filling tasks. The MASSIVE utterance labels span 18 domains, with 60 intents and 55 slots. A full train split is provided for French and German, and few-shot train, dev, and test splits are provided for all 12 languages (including French and German). The few-shot train split (115 examples) covers all 18 domains, 60 intents, and 55 slots (including empty slots).

Our extension is prompted by the scarcity of massively multilingual SLU datasets and the growing need for versatile speech datasets to assess foundation models (LLMs, speech encoders) across diverse languages and tasks. To facilitate advances in speech technology, we publicly release Speech-MASSIVE under the [CC-BY-NC-SA-4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).

Speech-MASSIVE was accepted at INTERSPEECH 2024 (Kos, Greece).

## Dataset Summary
- `dev`: dev split, available for all 12 languages
- `test`: test split, available for all 12 languages
- `train_115`: few-shot train split, available for all 12 languages (all 115 samples are cross-lingually aligned)
- `train`: full train split, available for French (fr-FR) and German (de-DE)

| lang | split | # samples | # hrs | total # spk <br/>(Male/Female/Unidentified) |
|:---:|:---:|:---:|:---:|:---:|
| ar-SA | dev | 2033 | 2.12 | 36 (22/14/0) |
| | test | 2974 | 3.23 | 37 (15/17/5) |
| | train_115 | 115 | 0.14 | 8 (4/4/0) |
| de-DE | dev | 2033 | 2.33 | 68 (35/32/1) |
| | test | 2974 | 3.41 | 82 (36/36/10) |
| | train | 11514 | 12.61 | 117 (50/63/4) |
| | train_115 | 115 | 0.15 | 7 (3/4/0) |
| es-ES | dev | 2033 | 2.53 | 109 (51/53/5) |
| | test | 2974 | 3.61 | 85 (37/33/15) |
| | train_115 | 115 | 0.13 | 7 (3/4/0) |
| fr-FR | dev | 2033 | 2.20 | 55 (26/26/3) |
| | test | 2974 | 2.65 | 75 (31/35/9) |
| | train | 11514 | 12.42 | 103 (50/52/1) |
| | train_115 | 115 | 0.12 | 103 (50/52/1) |
| hu-HU | dev | 2033 | 2.27 | 69 (33/33/3) |
| | test | 2974 | 3.30 | 55 (25/24/6) |
| | train_115 | 115 | 0.12 | 8 (3/4/1) |
| ko-KR | dev | 2033 | 2.12 | 21 (8/13/0) |
| | test | 2974 | 2.66 | 31 (10/18/3) |
| | train_115 | 115 | 0.14 | 8 (4/4/0) |
| nl-NL | dev | 2033 | 2.14 | 37 (17/19/1) |
| | test | 2974 | 3.30 | 100 (48/49/3) |
| | train_115 | 115 | 0.12 | 7 (3/4/0) |
| pl-PL | dev | 2033 | 2.24 | 105 (50/52/3) |
| | test | 2974 | 3.21 | 151 (73/71/7) |
| | train_115 | 115 | 0.10 | 7 (3/4/0) |
| pt-PT | dev | 2033 | 2.20 | 107 (51/53/3) |
| | test | 2974 | 3.25 | 102 (48/50/4) |
| | train_115 | 115 | 0.12 | 8 (4/4/0) |
| ru-RU | dev | 2033 | 2.25 | 40 (7/31/2) |
| | test | 2974 | 3.44 | 51 (25/23/3) |
| | train_115 | 115 | 0.12 | 7 (3/4/0) |
| tr-TR | dev | 2033 | 2.17 | 71 (36/34/1) |
| | test | 2974 | 3.00 | 42 (17/18/7) |
| | train_115 | 115 | 0.11 | 6 (3/3/0) |
| vi-VN | dev | 2033 | 2.10 | 28 (13/14/1) |
| | test | 2974 | 3.23 | 30 (11/14/5) |
| | train_115 | 115 | 0.11 | 7 (2/4/1) |


## How to use

The `datasets` library allows you to load and pre-process the dataset in pure Python, at scale. The dataset can be downloaded and prepared in a single call to the `load_dataset` function.

For example, to download the French config, specify the corresponding language config name (i.e., "fr-FR" for French):

```python
from datasets import load_dataset

speech_massive_fr_train = load_dataset("FBK-MT/Speech-MASSIVE", "fr-FR", split="train", trust_remote_code=True)
```

If you don't have enough disk space, you can stream the dataset by adding a `streaming=True` argument to the `load_dataset` call. Streaming mode loads individual samples on the fly rather than downloading the entire dataset to disk.

```python
from datasets import load_dataset

speech_massive_de_train = load_dataset("FBK-MT/Speech-MASSIVE", "de-DE", split="train", streaming=True, trust_remote_code=True)
list(speech_massive_de_train.take(2))
```

You can also load all available languages and splits at once, and then access each split:

```python
from datasets import load_dataset

speech_massive = load_dataset("FBK-MT/Speech-MASSIVE", "all", trust_remote_code=True)
multilingual_validation = speech_massive['validation']
```

Alternatively, you can load all splits of a single language, which makes it easier to keep languages separate:

```python
from datasets import load_dataset, interleave_datasets, concatenate_datasets

# create a full train set by interleaving German and French
speech_massive_de = load_dataset("FBK-MT/Speech-MASSIVE", "de-DE", trust_remote_code=True)
speech_massive_fr = load_dataset("FBK-MT/Speech-MASSIVE", "fr-FR", trust_remote_code=True)
speech_massive_train_de_fr = interleave_datasets([speech_massive_de['train'], speech_massive_fr['train']])

# create a train_115 few-shot set by concatenating Korean and Russian
speech_massive_ko = load_dataset("FBK-MT/Speech-MASSIVE", "ko-KR", trust_remote_code=True)
speech_massive_ru = load_dataset("FBK-MT/Speech-MASSIVE", "ru-RU", trust_remote_code=True)
speech_massive_train_115_ko_ru = concatenate_datasets([speech_massive_ko['train_115'], speech_massive_ru['train_115']])
```

## Dataset Structure

### Data configs
- `all`: loads all 12 languages into a single dataset instance
- `lang`: loads only the specified language, one of
  - ```ar-SA, de-DE, es-ES, fr-FR, hu-HU, ko-KR, nl-NL, pl-PL, pt-PT, ru-RU, tr-TR, vi-VN```

### Data Splits
- `validation`: validation (dev) split, available for all 12 languages
- `train_115`: few-shot (115 samples) split, available for all 12 languages
- `train`: train split, available for French (fr-FR) and German (de-DE)

> [!WARNING]
> The `test` split is uploaded as a separate dataset on the Hugging Face Hub to prevent possible data contamination.
- ⚠️ `test`: available **_only_** in [the separate HF dataset repository]() ⚠️

### Data Instances

```python
{
  # Start of the data collected in Speech-MASSIVE
  'audio': {
    'path': 'train/2b12a21ca64a729ccdabbde76a8f8d56.wav',
    'array': array([-7.80913979e-...7259e-03]),
    'sampling_rate': 16000},
  'path': '/path/to/wav/file.wav',
  'is_transcript_reported': False,
  'is_validated': True,
  'speaker_id': '60fcc09cb546eee814672f44',
  'speaker_sex': 'Female',
  'speaker_age': '25',
  'speaker_ethnicity_simple': 'White',
  'speaker_country_of_birth': 'France',
  'speaker_country_of_residence': 'Ireland',
  'speaker_nationality': 'France',
  'speaker_first_language': 'French',
  # End of the data collected in Speech-MASSIVE

  # Start of the data extracted from MASSIVE
  # (https://huggingface.co/datasets/AmazonScience/massive/blob/main/README.md#data-instances)
  'id': '7509',
  'locale': 'fr-FR',
  'partition': 'train',
  'scenario': 2,
  'scenario_str': 'calendar',
  'intent_idx': 32,
  'intent_str': 'calendar_query',
  'utt': 'après les cours de natation quoi d autre sur mon calendrier mardi',
  'annot_utt': 'après les cours de natation quoi d autre sur mon calendrier [date : mardi]',
  'worker_id': '22',
  'slot_method': {'slot': ['date'], 'method': ['translation']},
  'judgments': {
    'worker_id': ['22', '19', '0'],
    'intent_score': [1, 2, 1],
    'slots_score': [1, 1, 1],
    'grammar_score': [4, 4, 4],
    'spelling_score': [2, 1, 2],
    'language_identification': ['target', 'target', 'target']
  },
  'tokens': ['après', 'les', 'cours', 'de', 'natation', 'quoi', 'd', 'autre', 'sur', 'mon', 'calendrier', 'mardi'],
  'labels': ['Other', 'Other', 'Other', 'Other', 'Other', 'Other', 'Other', 'Other', 'Other', 'Other', 'Other', 'date'],
  # End of the data extracted from MASSIVE
}
```
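
The slot annotation can be recovered by pairing `tokens` with `labels`. A minimal sketch on a hand-copied fragment of the instance above (a plain dict, no dataset download needed; the `extract_slots` helper is illustrative and not part of the dataset loader):

```python
# Hand-copied fragment of the instance shown above (not a real loaded sample).
sample = {
    "intent_str": "calendar_query",
    "tokens": ["après", "les", "cours", "de", "natation", "quoi", "d",
               "autre", "sur", "mon", "calendrier", "mardi"],
    "labels": ["Other"] * 11 + ["date"],
}

def extract_slots(tokens, labels):
    """Group consecutive tokens sharing a non-'Other' label into (slot, value) pairs."""
    slots = []
    for token, label in zip(tokens, labels):
        if label == "Other":
            continue
        if slots and slots[-1][0] == label:
            # extend the current slot value with this token
            slots[-1] = (label, slots[-1][1] + " " + token)
        else:
            slots.append((label, token))
    return slots

print(extract_slots(sample["tokens"], sample["labels"]))  # [('date', 'mardi')]
```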
### Data Fields

`audio.path`: Original audio file name

`audio.array`: Decoded audio waveform, with a sampling rate of 16,000 Hz

`audio.sampling_rate`: Sampling rate

`path`: Full path of the original audio file

`is_transcript_reported`: Whether the transcript was reported as 'syntactically wrong' by a crowdsourcing worker

`is_validated`: Whether the recorded audio was validated by a crowdsourcing worker to check that it exactly matches the transcript

`speaker_id`: Unique hash ID of the crowdsourced speaker

`speaker_sex`: Speaker's sex, as provided by the crowdsourcing platform ([Prolific](http://prolific.com))
- Male
- Female
- Unidentified: information not available from Prolific

`speaker_age`: Speaker's age, as provided by Prolific
- age value (`str`)
- Unidentified: information not available from Prolific

`speaker_ethnicity_simple`: Speaker's ethnicity, as provided by Prolific
- ethnicity value (`str`)
- Unidentified: information not available from Prolific

`speaker_country_of_birth`: Speaker's country of birth, as provided by Prolific
- country value (`str`)
- Unidentified: information not available from Prolific

`speaker_country_of_residence`: Speaker's country of residence, as provided by Prolific
- country value (`str`)
- Unidentified: information not available from Prolific

`speaker_nationality`: Speaker's nationality, as provided by Prolific
- nationality value (`str`)
- Unidentified: information not available from Prolific

`speaker_first_language`: Speaker's first language, as provided by Prolific
- language value (`str`)
- Unidentified: information not available from Prolific

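
The validation flags make it easy to restrict experiments to clean recordings. A minimal sketch of such a filter (the `keep_clean` predicate is illustrative; on a loaded split you would pass it to `Dataset.filter`):

```python
def keep_clean(sample):
    """Keep validated recordings whose transcript was not reported as erroneous."""
    return sample["is_validated"] and not sample["is_transcript_reported"]

# On a loaded split this would be: clean_split = split.filter(keep_clean)
samples = [
    {"is_validated": True, "is_transcript_reported": False},   # kept
    {"is_validated": True, "is_transcript_reported": True},    # dropped: transcript reported
    {"is_validated": False, "is_transcript_reported": False},  # dropped: not validated
]
clean = [s for s in samples if keep_clean(s)]
print(len(clean))  # 1
```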

### Limitations

As Speech-MASSIVE is built on the MASSIVE dataset, it inherently retains certain grammatical errors present in the original MASSIVE text. Correcting these errors was outside the scope of our project. However, the `is_transcript_reported` attribute in Speech-MASSIVE makes users of the dataset aware of these errors.

## License

All datasets are licensed under the [CC-BY-NC-SA-4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en).

### Citation Information

You can access the Speech-MASSIVE paper at [link to be added](https://arxiv.org).
Please cite the paper when referencing the Speech-MASSIVE corpus:

```
Citation to be added
```