---
annotations_creators:
- expert-generated
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- expert-generated
language:
- afr
- amh
- ara
- asm
- ast
- azj
- bel
- ben
- bos
- cat
- ceb
- cmn
- ces
- cym
- dan
- deu
- ell
- eng
- spa
- est
- fas
- ful
- fin
- tgl
- fra
- gle
- glg
- guj
- hau
- heb
- hin
- hrv
- hun
- hye
- ind
- ibo
- isl
- ita
- jpn
- jav
- kat
- kam
- kea
- kaz
- khm
- kan
- kor
- ckb
- kir
- ltz
- lug
- lin
- lao
- lit
- luo
- lav
- mri
- mkd
- mal
- mon
- mar
- msa
- mlt
- mya
- nob
- npi
- nld
- nso
- nya
- oci
- orm
- ory
- pan
- pol
- pus
- por
- ron
- rus
- bul
- snd
- slk
- slv
- sna
- som
- srp
- swe
- swh
- tam
- tel
- tgk
- tha
- tur
- ukr
- umb
- urd
- uzb
- vie
- wol
- xho
- yor
- yue
- zul
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: 'The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech
  (XTREME-S) benchmark is a benchmark designed to evaluate speech representations
  across languages, tasks, domains and data regimes. It covers 102 languages from
  10+ language families, 3 different domains and 4 task families: speech recognition,
  translation, classification and retrieval.'
tags:
- speech-recognition
---

# FLEURS

## Dataset Description

- **Fine-Tuning script:** [pytorch/speech-recognition](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition)
- **Paper:** [FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech](https://arxiv.org/abs/2205.12446)
- **Total amount of disk used:** ca. 350 GB

Fleurs is the speech version of the [FLoRes machine translation benchmark](https://arxiv.org/abs/2106.03193).
We use 2009 n-way parallel sentences from the publicly available FLoRes dev and devtest sets, in 102 languages.

Training sets have around 10 hours of supervision. Speakers of the train sets are different from speakers of the dev/test sets. Multilingual fine-tuning is used and the "unit error rate" (characters, signs) of all languages is averaged; a sketch of this metric follows the list below. Languages and results are also grouped into seven geographical areas:

- **Western Europe**: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh*
- **Eastern Europe**: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian*
- **Central-Asia/Middle-East/North-Africa**: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek*
- **Sub-Saharan Africa**: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu*
- **South-Asia**: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu*
- **South-East Asia**: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese*
- **CJK languages**: *Cantonese and Mandarin Chinese, Japanese, Korean*

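The "unit error rate" above is a character-level error rate (CER). A minimal sketch of the per-language averaging, assuming the `evaluate` library (with `jiwer` installed) and hypothetical `predictions` / `references` dictionaries keyed by language config:

```python
import evaluate

# CER metric from the `evaluate` library (requires `jiwer`)
cer = evaluate.load("cer")

def average_unit_error_rate(predictions, references):
    """Compute CER per language, then average over all languages."""
    scores = [
        cer.compute(predictions=predictions[lang], references=references[lang])
        for lang in predictions
    ]
    return sum(scores) / len(scores)
```
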
## How to use & Supported Tasks

### How to use

The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.

For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi_in" for Hindi):
```python
from datasets import load_dataset
fleurs = load_dataset("google/fleurs", "hi_in", split="train")
```

Using the `datasets` library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
fleurs = load_dataset("google/fleurs", "hi_in", split="train", streaming=True)
print(next(iter(fleurs)))
```

*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).

Local:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
fleurs = load_dataset("google/fleurs", "hi_in", split="train")
batch_sampler = BatchSampler(RandomSampler(fleurs), batch_size=32, drop_last=False)
dataloader = DataLoader(fleurs, batch_sampler=batch_sampler)
```

Streaming:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
fleurs = load_dataset("google/fleurs", "hi_in", split="train", streaming=True)
dataloader = DataLoader(fleurs, batch_size=32)
```

To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).

### Example scripts

Train your own CTC or Seq2Seq Automatic Speech Recognition models on FLEURS with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).

Fine-tune your own Language Identification models on FLEURS with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification).

### 1. Speech Recognition (ASR)

```py
from datasets import load_dataset

fleurs_asr = load_dataset("google/fleurs", "af_za")  # for Afrikaans
# to download all data for multi-lingual fine-tuning, uncomment the following line
# fleurs_asr = load_dataset("google/fleurs", "all")

# see structure
print(fleurs_asr)

# load audio sample on the fly
audio_input = fleurs_asr["train"][0]["audio"]  # first decoded audio sample
transcription = fleurs_asr["train"][0]["transcription"]  # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR

# for analyses, see language groups
all_language_groups = fleurs_asr["train"].features["lang_group_id"].names
lang_group_id = fleurs_asr["train"][0]["lang_group_id"]

print(all_language_groups[lang_group_id])
```

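The decoded audio can be turned into model inputs with a feature extractor. A minimal sketch, assuming a wav2vec2-style checkpoint (the checkpoint name is illustrative) and the `audio_input` variable from the snippet above:

```py
from transformers import AutoFeatureExtractor

# FLEURS audio is sampled at 16 kHz, matching wav2vec2-style feature extractors
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")

inputs = feature_extractor(
    audio_input["array"],
    sampling_rate=audio_input["sampling_rate"],
    return_tensors="pt",
)
print(inputs.input_values.shape)  # (1, num_samples)
```
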
### 2. Language Identification

LangID can often be a domain classification, but in the case of FLEURS-LangID, recordings are done in a similar setting across languages and the utterances correspond to n-way parallel sentences, in the exact same domain, making this task particularly relevant for evaluating LangID. The setting is simple: FLEURS-LangID is split into train/valid/test for each language. We simply create a single train/valid/test for LangID by merging all languages.

```py
from datasets import load_dataset

fleurs_langID = load_dataset("google/fleurs", "all")  # to download all data

# see structure
print(fleurs_langID)

# load audio sample on the fly
audio_input = fleurs_langID["train"][0]["audio"]  # first decoded audio sample
language_class = fleurs_langID["train"][0]["lang_id"]  # first id class
language = fleurs_langID["train"].features["lang_id"].names[language_class]

# use `audio_input` and `language_class` to fine-tune your model for audio classification
```

### 3. Retrieval

Retrieval provides n-way parallel speech and text data. Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining, a.k.a. sentence translation retrieval, we use Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoders for speech retrieval. The system has to retrieve the English "key" utterance corresponding to the speech translation of "queries" in 15 languages. Results have to be reported on the test sets of Retrieval, whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult.

```py
from datasets import load_dataset

fleurs_retrieval = load_dataset("google/fleurs", "af_za")  # for Afrikaans
# to download all data for multi-lingual fine-tuning, uncomment the following line
# fleurs_retrieval = load_dataset("google/fleurs", "all")

# see structure
print(fleurs_retrieval)

# load audio sample on the fly
audio_input = fleurs_retrieval["train"][0]["audio"]  # decoded audio sample
text_sample_pos = fleurs_retrieval["train"][0]["transcription"]  # positive text sample
text_sample_neg = fleurs_retrieval["train"][1:20]["transcription"]  # negative text samples

# use `audio_input`, `text_sample_pos`, and `text_sample_neg` to fine-tune your model for retrieval
```

Users can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech.

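A minimal sketch of such a ranking loss, assuming `speech_emb` and `text_emb` are `(batch, dim)` outputs of your own speech and text encoders for matching parallel pairs (both names are hypothetical):

```py
import torch
import torch.nn.functional as F

def ranking_loss(speech_emb, text_emb, temperature=0.05):
    # cosine similarities between all speech/text pairs in the batch
    speech_emb = F.normalize(speech_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = speech_emb @ text_emb.T / temperature
    # matching pairs sit on the diagonal; other in-batch samples are negatives
    targets = torch.arange(speech_emb.size(0), device=speech_emb.device)
    return F.cross_entropy(logits, targets)

loss = ranking_loss(torch.randn(8, 512), torch.randn(8, 512))
```
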
## Dataset Structure

We show detailed information for the example configuration `af_za` of the dataset.
All other configurations have the same structure.

### Data Instances

**af_za**
- Size of downloaded dataset files: 1.47 GB
- Size of the generated dataset: 1 MB
- Total amount of disk used: 1.47 GB

An example of a data instance of the config `af_za` looks as follows:

```
{'id': 91,
 'num_samples': 385920,
 'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/310a663d52322700b3d3473cbc5af429bd92a23f9bc683594e70bc31232db39e/home/vaxelrod/FLEURS/oss2_obfuscated/af_za/audio/train/17797742076841560615.wav',
 'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/310a663d52322700b3d3473cbc5af429bd92a23f9bc683594e70bc31232db39e/home/vaxelrod/FLEURS/oss2_obfuscated/af_za/audio/train/17797742076841560615.wav',
  'array': array([ 0.0000000e+00,  0.0000000e+00,  0.0000000e+00, ...,
         -1.1205673e-04, -8.4638596e-05, -1.2731552e-04], dtype=float32),
  'sampling_rate': 16000},
 'raw_transcription': 'Dit is nog nie huidiglik bekend watter aantygings gemaak sal word of wat owerhede na die seun gelei het nie maar jeugmisdaad-verrigtinge het in die federale hof begin',
 'transcription': 'dit is nog nie huidiglik bekend watter aantygings gemaak sal word of wat owerhede na die seun gelei het nie maar jeugmisdaad-verrigtinge het in die federale hof begin',
 'gender': 0,
 'lang_id': 0,
 'language': 'Afrikaans',
 'lang_group_id': 3}
```

### Data Fields

The data fields are the same among all splits.
- **id** (int): ID of audio sample
- **num_samples** (int): Number of float values
- **path** (str): Path to the audio file
- **audio** (dict): Audio object including loaded audio array, sampling rate and path to audio
- **raw_transcription** (str): The non-normalized transcription of the audio file
- **transcription** (str): Transcription of the audio file
- **gender** (int): Class id of gender
- **lang_id** (int): Class id of language
- **lang_group_id** (int): Class id of language group

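The class-valued fields (`gender`, `lang_id`, `lang_group_id`) are `ClassLabel` features, so their integer ids can be decoded through `.names`. A minimal sketch:

```py
from datasets import load_dataset

fleurs = load_dataset("google/fleurs", "af_za", split="train")
sample = fleurs[0]

# map the integer class ids back to their human-readable labels
print(fleurs.features["lang_id"].names[sample["lang_id"]])
print(fleurs.features["lang_group_id"].names[sample["lang_group_id"]])
```
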
### Data Splits

Every config has a `"train"` split containing *ca.* 1000 examples, and `"validation"` and `"test"` splits each containing *ca.* 400 examples.

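To verify the split sizes for a given config:

```py
from datasets import load_dataset

fleurs = load_dataset("google/fleurs", "af_za")
for split, dataset in fleurs.items():
    print(split, dataset.num_rows)
```
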
## Dataset Creation

We collect between one and three recordings for each sentence (2.3 on average), and build new train-dev-test splits with 1509, 150 and 350 sentences for train, dev and test respectively.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is meant to encourage the development of speech technology in many more languages of the world. One of the goals is to give everyone equal access to technologies like speech recognition or speech translation, meaning better dubbing or better access to content from the internet (like podcasts, streaming or videos).

### Discussion of Biases

Most datasets have a fair distribution of gender utterances (e.g. the newly introduced FLEURS dataset). While many languages are covered from various regions of the world, the benchmark misses many languages that are all equally important. We believe technology built through FLEURS should generalize to all languages.

### Other Known Limitations

The dataset has a particular focus on read speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a noisier setting (in production, for instance). Given the substantial progress that remains to be made on many languages, we believe better performance on FLEURS should still correlate well with actual progress made on speech understanding.

## Additional Information

All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).

### Citation Information

You can access the FLEURS paper at https://arxiv.org/abs/2205.12446.
Please cite the paper when referencing the FLEURS corpus as:

```
@article{fleurs2022arxiv,
  title = {FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
  author = {Conneau, Alexis and Ma, Min and Khanuja, Simran and Zhang, Yu and Axelrod, Vera and Dalmia, Siddharth and Riesa, Jason and Rivera, Clara and Bapna, Ankur},
  journal = {arXiv preprint arXiv:2205.12446},
  url = {https://arxiv.org/abs/2205.12446},
  year = {2022},
}
```

### Contributions

Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) and [@aconneau](https://github.com/aconneau) for adding this dataset.