reach-vb (HF staff) committed
Commit 0d6249d · 1 parent: 956a35c

Making README more robust and verbose


1. Updates old Speech Bench URLs.
2. Adds code snippet for playing with the datasets.
3. Adds example scripts to further leverage this dataset.

Files changed (1): README.md (+45 -1)
README.md CHANGED
@@ -366,7 +366,7 @@ Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) pag
 ### Supported Tasks and Leaderboards
 
 The results for models trained on the Common Voice datasets are available via the
-[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
+[🤗 Autoevaluate Leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=mozilla-foundation%2Fcommon_voice_11_0&only_verified=0&task=automatic-speech-recognition&config=ar&split=test&metric=wer)
 
 ### Languages
 
@@ -374,6 +374,50 @@ The results for models trained on the Common Voice datasets are available via th
Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hill Mari, Hindi, Hungarian, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Kurmanji Kurdish, Kyrgyz, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Odia, Persian, Polish, Portuguese, Punjabi, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh
```

### How to use

You should be able to plug this dataset into your existing machine-learning workflow as follows.

You can download the entire dataset (or a particular split) to your local drive by using the `load_dataset` function.
```python
from datasets import load_dataset

CV_11_hi_train = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
```
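Each row of the loaded split is a plain dict. A toy stand-in is sketched below (the real row requires the gated download; the field names assume the Common Voice 11.0 schema) to show the access pattern:

```python
import numpy as np

# Toy stand-in for one row of CV_11_hi_train; Common Voice clips are 48 kHz.
example = {
    "client_id": "0000",
    "sentence": "नमस्ते",
    "audio": {
        "path": "common_voice_hi_000.mp3",
        "array": np.zeros(48_000, dtype=np.float32),  # decoded waveform
        "sampling_rate": 48_000,
    },
}

# Clip duration in seconds: number of samples / sampling rate
duration_s = len(example["audio"]["array"]) / example["audio"]["sampling_rate"]
print(example["sentence"], f"{duration_s:.1f}s")  # नमस्ते 1.0s
```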

Using `datasets`, you can also stream the dataset on the fly by adding a `streaming=True` argument to the `load_dataset` call. Loading a dataset in streaming mode lets you iterate over it without downloading it to disk.
```python
from datasets import load_dataset

CV_11_hi_train_stream = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train", streaming=True)

# You can now iterate through the stream and fetch individual data points as needed
print(next(iter(CV_11_hi_train_stream)))
```
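To peek at just a few examples rather than one, `itertools.islice` works on any iterable of rows; a minimal sketch, with a toy generator standing in for the streamed split:

```python
from itertools import islice

# Toy generator standing in for CV_11_hi_train_stream; a streamed split
# is consumed the same way, one example at a time.
toy_stream = ({"sentence": f"example {i}"} for i in range(100))

first_three = list(islice(toy_stream, 3))
print([ex["sentence"] for ex in first_three])  # ['example 0', 'example 1', 'example 2']
```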

Bonus: you can create a PyTorch `DataLoader` directly from the downloaded or streamed dataset.
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

ds = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
batch_sampler = BatchSampler(RandomSampler(ds), batch_size=32, drop_last=False)
dataloader = DataLoader(ds, batch_sampler=batch_sampler)
```

And, for streaming datasets:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader

ds = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train", streaming=True)
dataloader = DataLoader(ds, batch_size=32)
```
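Note that PyTorch's default collation cannot stack variable-length waveforms into one tensor; a sketch of a padding `collate_fn` (the toy rows and the function name are illustrative, not part of the dataset):

```python
import torch
from torch.utils.data import DataLoader

def pad_collate(batch):
    # Zero-pad each waveform in the batch to the length of the longest clip.
    waves = [torch.as_tensor(ex["audio"], dtype=torch.float32) for ex in batch]
    max_len = max(w.shape[0] for w in waves)
    padded = torch.stack(
        [torch.nn.functional.pad(w, (0, max_len - w.shape[0])) for w in waves]
    )
    sentences = [ex["sentence"] for ex in batch]
    return {"audio": padded, "sentence": sentences}

# Toy rows with unequal lengths stand in for Common Voice examples.
toy = [
    {"audio": [0.0] * 5, "sentence": "a"},
    {"audio": [0.0] * 8, "sentence": "b"},
]
loader = DataLoader(toy, batch_size=2, collate_fn=pad_collate)
batch = next(iter(loader))
print(batch["audio"].shape)  # torch.Size([2, 8])
```

Pass `collate_fn=pad_collate` to either of the `DataLoader` calls above to batch real Common Voice rows the same way.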

### Example scripts



## Dataset Structure

### Data Instances