sanchit-gandhi (HF staff) committed
Commit 7d16394
1 Parent(s): e504538

add dataset card

Files changed (1):
  1. README.md +106 -2
README.md CHANGED
@@ -31,7 +31,111 @@ dataset_info:
   num_examples: 9289
   download_size: 6951164322
   dataset_size: 7542124660.365999
+ task_categories:
+ - automatic-speech-recognition
+ - audio-classification
  ---
- # Dataset Card for "edacc-normalized"
-
- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ # EdAcc: Towards the Democratization of English ASR
+
+ The Edinburgh International Accents of English Corpus (EdAcc) is a new automatic speech recognition (ASR) dataset
+ composed of 40 hours of English dyadic conversations between speakers with a diverse set of accents. EdAcc includes a
+ wide range of first and second-language varieties of English and a linguistic background profile of each speaker.
+
+ ## Supported Tasks and Leaderboards
+
+ - Automatic Speech Recognition (ASR): the model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard at https://groups.inf.ed.ac.uk/edacc/leaderboard.html, which ranks models by their WER on the dev and test sets. A minimal evaluation sketch is shown after this list.
+ - Audio Classification: the model is presented with an audio file and asked to predict the accent or gender of the speaker. The most common evaluation metric is percentage accuracy.
+
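+ As a reference point for the ASR task, the snippet below sketches how a WER score could be computed with the `evaluate` library. The checkpoint `openai/whisper-small.en` and the 16-sample subset are illustrative choices only, not an official baseline or evaluation protocol:
+
+ ```python
+ import evaluate
+ from datasets import load_dataset
+ from transformers import pipeline
+
+ # Illustrative checkpoint: any ASR model on the Hub can be scored the same way
+ asr = pipeline("automatic-speech-recognition", model="openai/whisper-small.en")
+ wer_metric = evaluate.load("wer")
+
+ edacc = load_dataset("edinburghcstr/edacc", split="validation", streaming=True)
+
+ predictions, references = [], []
+ for sample in edacc.take(16):  # small subset, purely for illustration
+     predictions.append(asr(sample["audio"])["text"])
+     references.append(sample["text"])
+
+ # In practice, predictions and references should be normalised consistently before scoring
+ print(f"WER: {100 * wer_metric.compute(predictions=predictions, references=references):.2f}%")
+ ```
+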
+ ## Languages
+
+ All conversations in EdAcc are in English, spoken with a wide range of first and second-language varieties of English from speakers with diverse accents.
+
+ ## How to use
+
+ The `datasets` library allows you to load and pre-process EdAcc in just 2 lines of code. The dataset can be
+ downloaded from the Hugging Face Hub and pre-processed using the `load_dataset` function.
+
+ For example, the following code cell loads and pre-processes the EdAcc dataset, and subsequently returns the first sample
+ in the validation (dev) set:
+
+ ```python
+ from datasets import load_dataset
+
+ edacc = load_dataset("edinburghcstr/edacc")
+ sample = edacc["validation"][0]
+ ```
+
+ Using the `datasets` library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the
+ `load_dataset` function call. Loading a dataset in streaming mode loads individual samples one at a time, rather
+ than downloading the entire dataset to disk. The only change is that you can no longer access individual samples using
+ Python indexing (i.e. `edacc["validation"][0]`). Instead, you have to iterate over the dataset, for example using a for loop:
+
+ ```python
+ from datasets import load_dataset
+
+ edacc = load_dataset("edinburghcstr/edacc", streaming=True)
+ sample = next(iter(edacc["validation"]))
+ ```
+
+ For more information, refer to the blog post [A Complete Guide to Audio Datasets](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A typical data point comprises the loaded audio sample, usually called `audio`, and its transcription, called `text`.
+ Some additional information about the speaker's gender, accent and native language (L1) is also provided:
+
+ ```
+ {'speaker': 'EDACC-C06-A',
+ 'text': 'C ELEVEN DASH P ONE',
+ 'accent': 'Southern British English',
+ 'raw_accent': 'English',
+ 'gender': 'male',
+ 'l1': 'Southern British English',
+ 'audio': {'path': 'EDACC-C06-1.wav',
+ 'array': array([ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
+ -3.05175781e-05, -3.05175781e-05, -6.10351562e-05]),
+ 'sampling_rate': 32000}}
+ ```
+
+ ### Data Fields
+
+ - speaker: the speaker id
+ - text: the target transcription of the audio file
+ - accent: the speaker accent as annotated by a trained linguist. These accents are standardised into common categories, as opposed to `raw_accent`, which is a free-form text description of the speaker's accent
+ - raw_accent: the speaker accent as described by the speakers themselves
+ - gender: the gender of the speaker
+ - l1: the native language (L1) of the speaker, standardised by the trained linguist
+ - audio: a dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. A sketch of changing the decoding sampling rate follows this list.
+
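+ For illustration, the decoding sampling rate can be changed by casting the `audio` column with the `Audio` feature. The 16 kHz target below is only an example (e.g. for models expecting 16 kHz input), not a requirement of the dataset:
+
+ ```python
+ from datasets import Audio, load_dataset
+
+ edacc = load_dataset("edinburghcstr/edacc")
+
+ # Decode audio at 16 kHz instead of the native 32 kHz (example target rate only)
+ edacc = edacc.cast_column("audio", Audio(sampling_rate=16000))
+
+ # Query the sample index first, then the "audio" column, so that only one file is decoded
+ sample = edacc["validation"][0]["audio"]
+ print(sample["sampling_rate"])  # 16000
+ ```
+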
+ ## Dataset Creation
+
+ The data collection process for EdAcc is structured to elicit natural speech. Participants conducted relaxed conversations over Zoom,
+ accompanied by a comprehensive questionnaire to gather further metadata. This questionnaire captures detailed information
+ on participants' linguistic backgrounds, including their first and second languages, the onset of English learning,
+ language use across different life domains, residential history, the nature of their relationship with their conversation
+ partner, and self-perceptions of their English accent. Additionally, it collects data on participants' social demographics,
+ such as age, gender, ethnic background, and education level. The resulting conversations are transcribed by professional
+ transcribers, ensuring that each speaker's turn, along with any overlaps, environmental sounds, laughter, and hesitations, is
+ accurately documented, contributing to the richness and authenticity of the dataset.
+
+ ### Licensing Information
+
+ Creative Commons Attribution-ShareAlike 4.0 International Public License ([CC-BY-SA](https://creativecommons.org/licenses/by-sa/4.0/deed.en))
+
+ ### Citation Information
+
+ ```
+ @inproceedings{sanabria23edacc,
+   title={The Edinburgh International Accents of English Corpus: Towards the Democratization of English ASR},
+   author={Sanabria, Ramon and Bogoychev, Nikolay and Markl, Nina and Carmantini, Andrea and Klejch, Ondrej and Bell, Peter},
+   booktitle={ICASSP 2023},
+   year={2023},
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@sanchit-gandhi](https://huggingface.co/sanchit-gandhi) for adding this dataset.