Tasks: Automatic Speech Recognition · Sub-tasks: keyword-spotting · Size: 10K<n<100K · Tags: speech-recognition

patrickvonplaten committed · a0ee25d · Parent(s): 0349b76

Update README.md

README.md CHANGED
@@ -40,245 +40,23 @@ task_ids:

## Dataset Description

- **Fine-Tuning script:** [research-projects/xtreme-s](https://github.com/huggingface/transformers/tree/master/examples/research_projects/xtreme-s)
- **Paper:** [XTREME-S: Evaluating Cross-lingual Speech Representations](https://arxiv.org/abs/2203.10752)
- **FLEURS amount of disk used:** 350 GB
- **Multilingual Librispeech amount of disk used:** 2700 GB
- **Voxpopuli amount of disk used:** 400 GB
- **Covost2 amount of disk used:** 70 GB
- **Minds14 amount of disk used:** 5 GB
- **Total amount of disk used:** ca. 3500 GB

***An easy-to-use and flexible fine-tuning script is provided and actively maintained.***

- **Western Europe**: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh*
- **Eastern Europe**: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian*
- **Central-Asia/Middle-East/North-Africa**: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek*
- **Sub-Saharan Africa**: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu*
- **South-Asia**: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu*
- **South-East Asia**: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese*
- **CJK languages**: *Cantonese and Mandarin Chinese, Japanese, Korean*

## Design principles

### Diversity

XTREME-S aims for task, domain and language diversity. Tasks should be diverse and cover several domains to provide a reliable evaluation of model generalization and robustness to noisy naturally-occurring speech in different environments. Languages should be diverse to ensure that models can adapt to a wide range of linguistic and phonological phenomena.

### Accessibility

The sub-dataset for each task can be downloaded with a **single line of code** as shown in [Supported Tasks](#supported-tasks). Each task is available under a permissive license that allows the use and redistribution of the data for research purposes. Tasks have been selected based on their usage by pre-existing multilingual pre-trained models, for simplicity.

### Reproducibility

We produce fully **open-sourced, maintained and easy-to-use** fine-tuning scripts for each task as shown under [Fine-tuning Example](#fine-tuning-and-evaluation-example). XTREME-S encourages submissions that leverage publicly available speech and text datasets. Users should detail which data they use. In general, we encourage settings that can be reproduced by the community, but also encourage the exploration of new frontiers for speech representation learning.

## Fine-tuning and Evaluation Example

We provide a fine-tuning script under [**research-projects/xtreme-s**](https://github.com/huggingface/transformers/tree/master/examples/research_projects/xtreme-s). The fine-tuning script is written in PyTorch and allows one to fine-tune and evaluate any [Hugging Face model](https://huggingface.co/models) on XTREME-S. The example script is actively maintained by [@anton-l](https://github.com/anton-l) and [@patrickvonplaten](https://github.com/patrickvonplaten). Feel free to reach out via issues or pull requests on GitHub if you have any questions.

## Leaderboards

The leaderboard for the XTREME-S benchmark can be found at [this address (TODO(PVP))]().

## Supported Tasks

Note that the supported tasks focus particularly on the linguistic aspects of speech, while non-linguistic/paralinguistic aspects of speech relevant to e.g. speech synthesis or voice conversion are **not** evaluated.

<p align="center">
  <img src="https://github.com/patrickvonplaten/scientific_images/raw/master/xtreme_s.png" alt="Datasets used in XTREME"/>
</p>

### 1. Speech Recognition (ASR)

We include three speech recognition datasets: FLEURS-ASR, MLS and VoxPopuli (optionally BABEL). Multilingual fine-tuning is used for these three datasets.

#### FLEURS-ASR

*FLEURS-ASR* is a new dataset that provides n-way parallel speech data in 102 languages with transcriptions.

TODO(PVP) - need more information here

```py
from datasets import load_dataset

fleurs_asr = load_dataset("google/xtreme_s", "fleurs.af_za")  # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_asr = load_dataset("google/xtreme_s", "fleurs.all")

# see structure
print(fleurs_asr)

# load audio sample on the fly
audio_input = fleurs_asr["train"][0]["audio"]  # first decoded audio sample
transcription = fleurs_asr["train"][0]["transcription"]  # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR

# for analyses see language groups
all_language_groups = fleurs_asr["train"].features["lang_group_id"].names
lang_group_id = fleurs_asr["train"][0]["lang_group_id"]

all_language_groups[lang_group_id]
```
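
Continuing from the snippet above, a minimal sketch of what "fine-tune your model for ASR" can look like with a CTC model from `transformers`. The checkpoint below is a placeholder choice (an English model with an uppercase vocabulary), not the benchmark's official setup:

```py
from transformers import AutoModelForCTC, AutoProcessor

# placeholder checkpoint; any CTC-capable speech model on the Hub works the same way
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h")

sample = fleurs_asr["train"][0]
inputs = processor(sample["audio"]["array"], sampling_rate=sample["audio"]["sampling_rate"], return_tensors="pt")
# this checkpoint's vocabulary is uppercase, hence .upper()
labels = processor.tokenizer(sample["transcription"].upper(), return_tensors="pt").input_ids

loss = model(input_values=inputs.input_values, labels=labels).loss  # CTC loss to minimize
```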

#### Multilingual LibriSpeech (MLS)

*MLS* is a large multilingual corpus derived from read audiobooks from LibriVox and consists of 8 languages. For this challenge, the training data is limited to 10-hour splits.

```py
from datasets import load_dataset

mls = load_dataset("google/xtreme_s", "mls.pl")  # for Polish
# to download all data for multi-lingual fine-tuning uncomment following line
# mls = load_dataset("google/xtreme_s", "mls.all")

# see structure
print(mls)

# load audio sample on the fly
audio_input = mls["train"][0]["audio"]  # first decoded audio sample
transcription = mls["train"][0]["transcription"]  # first transcription

# use `audio_input` and `transcription` to fine-tune your model for ASR
```

#### VoxPopuli

*VoxPopuli* is a large-scale multilingual speech corpus for representation learning and semi-supervised learning, from which we use the speech recognition dataset. The raw data is collected from 2009-2020 European Parliament event recordings. We acknowledge the European Parliament for creating and sharing these materials.

**VoxPopuli requires downloading the whole 100 GB dataset, since the languages are entangled with one another; it may not be worth testing here due to its size.**

```py
from datasets import load_dataset

voxpopuli = load_dataset("google/xtreme_s", "voxpopuli.ro")  # for Romanian
# to download all data for multi-lingual fine-tuning uncomment following line
# voxpopuli = load_dataset("google/xtreme_s", "voxpopuli.all")

# see structure
print(voxpopuli)

# load audio sample on the fly
audio_input = voxpopuli["train"][0]["audio"]  # first decoded audio sample
transcription = voxpopuli["train"][0]["transcription"]  # first transcription

# use `audio_input` and `transcription` to fine-tune your model for ASR
```

#### (Optionally) BABEL

*BABEL* from IARPA is a conversational speech recognition dataset in low-resource languages. First, download LDC2016S06, LDC2016S12, LDC2017S08, LDC2017S05 and LDC2016S13. BABEL is the only dataset in our benchmark that is less easily accessible, so you will need to sign in to get access to it on LDC. Although not officially part of the XTREME-S ASR datasets, BABEL is often used for evaluating speech representations on a difficult domain (phone conversations).

```py
from datasets import load_dataset

babel = load_dataset("google/xtreme_s", "babel.as")
```

**The above command is expected to fail with a nice error message, explaining how to download BABEL.**

The following should work:

```py
from datasets import load_dataset

babel = load_dataset("google/xtreme_s", "babel.as", data_dir="/path/to/IARPA_BABEL_OP1_102_LDC2016S06.zip")

# see structure
print(babel)

# load audio sample on the fly
audio_input = babel["train"][0]["audio"]  # first decoded audio sample
transcription = babel["train"][0]["transcription"]  # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
```

### 2. Speech Translation (ST)

We include the CoVoST-2 dataset for automatic speech translation.

#### CoVoST-2

The *CoVoST-2* benchmark has become a commonly used dataset for evaluating automatic speech translation. It covers language pairs from English into 15 languages, as well as 21 languages into English. We use only the "X->En" direction to evaluate cross-lingual representations. The amount of supervision varies greatly in this setting, from one hour for Japanese->English to 180 hours for French->English. This makes pretraining particularly useful to enable such few-shot learning. We enforce multilingual fine-tuning for simplicity. Results are split into high/med/low-resource language pairs as explained in the [paper (TODO(PVP))].

```py
from datasets import load_dataset

# the config name was cut off in the diff; "covost2.id.en" (Indonesian -> English) is one plausible example pair
covost_2 = load_dataset("google/xtreme_s", "covost2.id.en")
# to download all data for multi-lingual fine-tuning uncomment following line
# covost_2 = load_dataset("google/xtreme_s", "covost2.all")

# see structure
print(covost_2)

# load audio sample on the fly
audio_input = covost_2["train"][0]["audio"]  # first decoded audio sample
transcription = covost_2["train"][0]["transcription"]  # first transcription

translation = covost_2["train"][0]["translation"]  # first translation

# use audio_input and translation to fine-tune your model for AST
```
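
Continuing from the snippet above, a minimal sketch of the speech-translation fine-tuning step. The checkpoint is a placeholder (an English->French MuST-C model, chosen only because it ships with a speech-to-text tokenizer; XTREME-S itself evaluates X->En), and 16 kHz audio is assumed:

```py
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor

# placeholder encoder-decoder ST checkpoint
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-mustc-en-fr-st")
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-mustc-en-fr-st")

inputs = processor(audio_input["array"], sampling_rate=16_000, return_tensors="pt")
labels = processor.tokenizer(translation, return_tensors="pt").input_ids

# seq2seq cross-entropy loss against the reference translation
loss = model(input_features=inputs.input_features, attention_mask=inputs.attention_mask, labels=labels).loss
```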

### 3. Speech Classification

We include two multilingual speech classification datasets: FLEURS-LangID and Minds-14.

#### Language Identification - FLEURS-LangID

LangID can often be a domain classification task, but in the case of FLEURS-LangID, recordings are done in a similar setting across languages and the utterances correspond to n-way parallel sentences in the exact same domain, making this task particularly relevant for evaluating LangID. The setting is simple: FLEURS-LangID is split into train/valid/test for each language. We simply create a single train/valid/test split for LangID by merging all languages.

```py
from datasets import load_dataset

fleurs_langID = load_dataset("google/xtreme_s", "fleurs.all")  # to download all data

# see structure
print(fleurs_langID)

# load audio sample on the fly
audio_input = fleurs_langID["train"][0]["audio"]  # first decoded audio sample
language_class = fleurs_langID["train"][0]["lang_id"]  # first id class
language = fleurs_langID["train"].features["lang_id"].names[language_class]

# use audio_input and language_class to fine-tune your model for audio classification
```

#### Intent classification - Minds-14

Minds-14 is an intent classification dataset built from e-banking speech data in 14 languages, with 14 intent labels. We impose a single multilingual fine-tuning to increase the size of the train and test sets and to reduce the variance associated with the small size of the dataset per language.

```py
from datasets import load_dataset

minds_14 = load_dataset("google/xtreme_s", "minds14.fr-FR")  # for French
# to download all data for multi-lingual fine-tuning uncomment following line
# minds_14 = load_dataset("google/xtreme_s", "minds14.all")

# see structure
print(minds_14)
```

@@ -289,65 +67,28 @@ intent_class = minds_14["train"][0]["intent_class"]  # first intent class

```py
intent = minds_14["train"].features["intent_class"].names[intent_class]

# use audio_input and intent_class to fine-tune your model for audio classification
```

### 4. (Optionally) Speech Retrieval

We include one speech retrieval dataset: FLEURS-Retrieval.

TODO(Patrick)

#### FLEURS-Retrieval

FLEURS-Retrieval provides n-way parallel speech and text data. Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining, a.k.a. sentence translation retrieval, we use FLEURS-Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoders for speech retrieval. The system has to retrieve the English "key" utterance corresponding to the speech translation of "queries" in 15 languages. Results have to be reported on the test sets of FLEURS-Retrieval, whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult.

```py
from datasets import load_dataset

fleurs_retrieval = load_dataset("google/xtreme_s", "fleurs.af_za")  # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_retrieval = load_dataset("google/xtreme_s", "fleurs.all")

# see structure
print(fleurs_retrieval)

# load audio sample on the fly
audio_input = fleurs_retrieval["train"][0]["audio"]  # decoded audio sample
text_sample_pos = fleurs_retrieval["train"][0]["transcription"]  # positive text sample
text_sample_neg = fleurs_retrieval["train"][1:20]["transcription"]  # negative text samples

# use `audio_input`, `text_sample_pos`, and `text_sample_neg` to fine-tune your model for retrieval
```

Users can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech, as sketched below.
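
For instance, such a ranking objective could look as follows. This is a minimal sketch: `speech_emb` and `text_emb` stand in for pooled, L2-normalized outputs of hypothetical speech/text encoders and are not part of this dataset's API.

```py
import torch
import torch.nn.functional as F

def ranking_loss(speech_emb, text_emb, margin=0.2):
    # cosine similarity between every speech query and every text key
    sim = speech_emb @ text_emb.T  # (batch, batch); rows are queries, columns are keys
    pos = sim.diag().unsqueeze(1)  # matching (positive) pairs sit on the diagonal
    # hinge: every non-matching key should score at least `margin` below the match
    losses = F.relu(margin - pos + sim)
    losses.fill_diagonal_(0.0)  # do not penalize the positive pair itself
    return losses.mean()

# toy batch with random "embeddings" in place of real encoder outputs
speech_emb = F.normalize(torch.randn(8, 256), dim=-1)
text_emb = F.normalize(torch.randn(8, 256), dim=-1)
print(ranking_loss(speech_emb, text_emb))
```
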
## Dataset Structure
## Dataset Creation

The XTREME-S benchmark is composed of the following datasets:

- [FLEURS: TODO(PVP) link]
- [Multilingual Librispeech (MLS)](https://huggingface.co/datasets/facebook/multilingual_librispeech#dataset-creation)
- [Voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli#dataset-creation)
- [Minds14](https://huggingface.co/datasets/polyai/minds14#dataset-creation)
- [Covost2](https://huggingface.co/datasets/covost2#dataset-creation)
- [BABEL](https://huggingface.co/datasets/ldc/iarpa_babel#dataset-creation)

Please visit the corresponding dataset cards to get more information about the source data.

## Considerations for Using the Data

@@ -375,66 +116,30 @@ All datasets are licensed under the [Creative Commons license (CC-BY)](https://c

### Citation Information

#### XTREME-S
```
@article{conneau2022xtreme,
  title={XTREME-S: Evaluating Cross-lingual Speech Representations},
  author={Conneau, Alexis and Bapna, Ankur and Zhang, Yu and Ma, Min and von Platen, Patrick and Lozhkov, Anton and Cherry, Colin and Jia, Ye and Rivera, Clara and Kale, Mihir and others},
  journal={arXiv preprint arXiv:2203.10752},
  year={2022}
}
```

#### MLS
```
@article{Pratap2020MLSAL,
  title={MLS: A Large-Scale Multilingual Dataset for Speech Research},
  author={Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.03411}
}
```

#### VoxPopuli
```
@article{wang2021voxpopuli,
  title={Voxpopuli: A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation},
  author={Wang, Changhan and Riviere, Morgane and Lee, Ann and Wu, Anne and Talnikar, Chaitanya and Haziza, Daniel and Williamson, Mary and Pino, Juan and Dupoux, Emmanuel},
  journal={arXiv preprint arXiv:2101.00390},
  year={2021}
}
```

#### CoVoST 2
```
@article{DBLP:journals/corr/abs-2007-10310,
  author     = {Changhan Wang and
                Anne Wu and
                Juan Pino},
  title      = {CoVoST 2: {A} Massively Multilingual Speech-to-Text Translation Corpus},
  journal    = {CoRR},
  volume     = {abs/2007.10310},
  year       = {2020},
  url        = {https://arxiv.org/abs/2007.10310},
  eprinttype = {arXiv},
  eprint     = {2007.10310},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2007-10310.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

#### Minds14
```
@article{gerz2021multilingual,
  title={Multilingual and cross-lingual intent detection from spoken data},
  author={Gerz, Daniela and Su, Pei-Hao and Kusztos, Razvan and Mondal, Avishek and Lis, Micha{\l} and Singhal, Eshan and Mrk{\v{s}}i{\'c}, Nikola and Wen, Tsung-Hsien and Vuli{\'c}, Ivan},
  journal={arXiv preprint arXiv:2104.08524},
  year={2021}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten)


## Dataset Description

- **Fine-Tuning script:** [pytorch/audio-classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification)
- **Paper:** [Multilingual and Cross-Lingual Intent Detection from Spoken Data](https://arxiv.org/abs/2104.08524)
- **Total amount of disk used:** ca. 5 GB

MInDS-14 is a training and evaluation resource for the intent detection task with spoken data. It covers 14 intents extracted from a commercial system in the e-banking domain, associated with spoken examples in 14 diverse language varieties.

## Example

MInDS-14 can be downloaded and used as follows:
```py
from datasets import load_dataset

minds_14 = load_dataset("PolyAI/minds14", "fr-FR")  # for French
# to download all data for multi-lingual fine-tuning uncomment following line
# minds_14 = load_dataset("PolyAI/minds14", "all")

# see structure
print(minds_14)

# load audio sample on the fly
audio_input = minds_14["train"][0]["audio"]  # first decoded audio sample
intent_class = minds_14["train"][0]["intent_class"]  # first intent class
intent = minds_14["train"].features["intent_class"].names[intent_class]

# use audio_input and intent_class to fine-tune your model for audio classification
```

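As a rough sketch of the fine-tuning step itself (the checkpoint below is a placeholder, not an official recommendation; note that MInDS-14 audio is 8 kHz while most pretrained speech encoders expect 16 kHz):

```py
import torch
from datasets import Audio
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

# resample on the fly to the 16 kHz most pretrained encoders expect
minds_14 = minds_14.cast_column("audio", Audio(sampling_rate=16_000))

num_labels = len(minds_14["train"].features["intent_class"].names)
extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")  # placeholder encoder
model = AutoModelForAudioClassification.from_pretrained("facebook/wav2vec2-base", num_labels=num_labels)

sample = minds_14["train"][0]
inputs = extractor(sample["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
logits = model(**inputs).logits  # (1, num_labels)
loss = torch.nn.functional.cross_entropy(logits, torch.tensor([sample["intent_class"]]))
```
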
## Dataset Structure

An example of a data instance of the config `fr-FR` looks as follows:

```py
{
    "path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav",
    "audio": {
        "path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav",
        "array": array(
            [0.0, 0.0, 0.0, ..., 0.0, 0.00048828, -0.00024414], dtype=float32
        ),
        "sampling_rate": 8000,
    },
    "transcription": "je souhaite changer mon adresse",
    "english_transcription": "I want to change my address",
    "intent_class": 1,
    "lang_id": 6,
}
```
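
The integer `intent_class` and `lang_id` fields are `datasets.ClassLabel`s; as a quick sketch, they can be decoded back to their string names like this:

```py
features = minds_14["train"].features
print(features["intent_class"].int2str(1))  # the intent behind "fr-FR~ADDRESS" above
print(features["lang_id"].int2str(6))       # "fr-FR"
```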
## Dataset Creation
## Considerations for Using the Data
### Citation Information
```
@article{DBLP:journals/corr/abs-2104-08524,
  author     = {Daniela Gerz and
                Pei{-}Hao Su and
                Razvan Kusztos and
                Avishek Mondal and
                Michal Lis and
                Eshan Singhal and
                Nikola Mrksic and
                Tsung{-}Hsien Wen and
                Ivan Vulic},
  title      = {Multilingual and Cross-Lingual Intent Detection from Spoken Data},
  journal    = {CoRR},
  volume     = {abs/2104.08524},
  year       = {2021},
  url        = {https://arxiv.org/abs/2104.08524},
  eprinttype = {arXiv},
  eprint     = {2104.08524},
  timestamp  = {Mon, 26 Apr 2021 17:25:10 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2104-08524.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions

Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.