---
configs:
- config_name: default
  data_files:
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: speaker
    dtype: string
  - name: text
    dtype: string
  - name: accent
    dtype: string
  - name: raw_accent
    dtype: string
  - name: gender
    dtype: string
  - name: l1
    dtype: string
  - name: audio
    dtype: audio
  splits:
  - name: validation
    num_bytes: 2615574877.928
    num_examples: 9848
  - name: test
    num_bytes: 4926549782.438
    num_examples: 9289
  download_size: 6951164322
  dataset_size: 7542124660.365999
task_categories:
  - automatic-speech-recognition
  - audio-classification
---

## Dataset Description

- **Homepage:** [EdAcc: The Edinburgh International Accents of English Corpus](https://groups.inf.ed.ac.uk/edacc/index.html)
- **Paper:** [The Edinburgh International Accents of English Corpus: Towards the Democratization of English ASR](https://arxiv.org/abs/2303.18110)
- **Leaderboard:** [EdAcc Leaderboard](https://groups.inf.ed.ac.uk/edacc/leaderboard.html)


# EdAcc: The Edinburgh International Accents of English Corpus

The Edinburgh International Accents of English Corpus (EdAcc) is a new automatic speech recognition (ASR) dataset 
composed of 40 hours of English dyadic conversations between speakers with a diverse set of accents. EdAcc includes a 
wide range of first- and second-language varieties of English, along with a linguistic background profile for each speaker. 
Results on the latest public and commercial models show that EdAcc highlights the shortcomings of current English ASR systems, 
which perform well on existing benchmarks but degrade significantly on speakers with different accents.

## Supported Tasks and Leaderboards

- Automatic Speech Recognition (ASR): the model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard, which can be found at https://groups.inf.ed.ac.uk/edacc/leaderboard.html and ranks models based on their WER scores on the dev and test sets.
- Audio Classification: the model is presented with an audio file and asked to predict the accent or gender of the speaker. The most common evaluation metric is the percentage accuracy.
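
To make the WER metric concrete, here is a minimal, self-contained sketch of how it is computed: a word-level edit distance between the reference transcription and the model's hypothesis, normalised by the number of reference words. In practice you would typically use an established implementation such as the `jiwer` package; this function is only illustrative.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dist[j] holds the edit distance between the first i reference
    # words and the first j hypothesis words (classic DP, one row at a time).
    dist = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev_diag, dist[0] = dist[0], i
        for j in range(1, len(hyp) + 1):
            cur = min(
                dist[j] + 1,                             # deletion
                dist[j - 1] + 1,                         # insertion
                prev_diag + (ref[i - 1] != hyp[j - 1]),  # substitution
            )
            prev_diag, dist[j] = dist[j], cur
    return dist[len(hyp)] / max(len(ref), 1)


print(wer("C ELEVEN DASH P ONE", "C SEVEN DASH P ONE"))  # 0.2 (1 error in 5 words)
```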

## How to use

The `datasets` library allows you to load and pre-process EdAcc in just two lines of code. The dataset can be 
downloaded from the Hugging Face Hub and pre-processed using the `load_dataset` function. 

For example, the following code cell loads and pre-processes the EdAcc dataset, and subsequently returns the first sample
in the validation (dev) set:

```python
from datasets import load_dataset

edacc = load_dataset("edinburghcstr/edacc")
sample = edacc["validation"][0]
```

You can also stream the dataset on the fly by passing a `streaming=True` argument to the 
`load_dataset` function call. Loading a dataset in streaming mode loads individual samples one at a time, rather 
than downloading the entire dataset to disk. The only change is that you can no longer access individual samples using 
Python indexing (i.e. `edacc["validation"][0]`). Instead, you have to iterate over the dataset, for example using a for loop:

```python
from datasets import load_dataset

edacc = load_dataset("edinburghcstr/edacc", streaming=True)
sample = next(iter(edacc["validation"]))
```
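
When you only want to inspect a handful of examples, streaming mode pairs well with the `take` method of `IterableDataset`, which yields just the first few samples without touching the rest of the dataset. A small sketch:

```python
from datasets import load_dataset

edacc = load_dataset("edinburghcstr/edacc", streaming=True)

# Materialise only the first two validation examples, without
# downloading the full dataset to disk.
head = list(edacc["validation"].take(2))
for sample in head:
    print(sample["speaker"], sample["accent"])
```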

For more information, refer to the blog post [A Complete Guide to Audio Datasets](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).

## Dataset Structure

### Data Instances

A typical data point comprises the loaded audio sample, usually called `audio`, and its transcription, called `text`. 
Some additional information about the speaker's gender, accent, and native language (L1) is also provided:

```
{'speaker': 'EDACC-C06-A',
 'text': 'C ELEVEN DASH P ONE',
 'accent': 'Southern British English',
 'raw_accent': 'English',
 'gender': 'male',
 'l1': 'Southern British English',
 'audio': {'path': 'EDACC-C06-1.wav',
  'array': array([ 0.00000000e+00,  0.00000000e+00,  0.00000000e+00, ...,
         -3.05175781e-05, -3.05175781e-05, -6.10351562e-05]),
  'sampling_rate': 32000}}
```

### Data Fields

- `speaker`: the speaker ID
- `text`: the target transcription of the audio file
- `accent`: the speaker's accent as annotated by a trained linguist. These accents are standardised into common categories, as opposed to `raw_accent`, which is a free-form description of the speaker's accent
- `raw_accent`: the speaker's accent as described by the speaker themselves
- `gender`: the gender of the speaker
- `l1`: the native language (L1) of the speaker, standardised by the trained linguist
- `audio`: a dictionary containing the audio filename, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time. It is therefore important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
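
The audio in the example above is stored at 32 kHz, whereas many ASR models expect 16 kHz input. With `datasets`, resampling can be configured once with `cast_column` and then happens on the fly each time a sample is decoded. A sketch (the 16 kHz target is an assumption based on common model requirements, not a property of EdAcc itself):

```python
from datasets import Audio, load_dataset

edacc = load_dataset("edinburghcstr/edacc")

# Re-cast the audio column so that every decoded sample is
# resampled to 16 kHz on access.
edacc = edacc.cast_column("audio", Audio(sampling_rate=16_000))

sample = edacc["validation"][0]["audio"]  # decoded and resampled lazily
```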

## Dataset Creation

The data collection process for EdAcc is structured to elicit natural speech. Participants conducted relaxed conversations over Zoom, 
accompanied by a comprehensive questionnaire to gather further metadata. This questionnaire captures detailed information 
on participants' linguistic backgrounds, including their first and second languages, the onset of English learning, 
language use across different life domains, residential history, the nature of their relationship with their conversation 
partner, and self-perceptions of their English accent. Additionally, it collects data on participants' social demographics, 
such as age, gender, ethnic background, and education level. The resulting conversations are transcribed by professional 
transcribers, ensuring that each speaker's turns, along with any overlaps, environmental sounds, laughter, and hesitations, are 
accurately documented, contributing to the richness and authenticity of the dataset.


### Licensing Information

Creative Commons Attribution-ShareAlike 4.0 International Public License ([CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.en))

### Citation Information

```
@inproceedings{sanabria23edacc,
   title="{The Edinburgh International Accents of English Corpus: Towards the Democratization of English ASR}",
   author={Sanabria, Ramon and Bogoychev, Nikolay and Markl, Nina and Carmantini, Andrea and Klejch, Ondrej and Bell, Peter},
   booktitle={ICASSP 2023},
   year={2023},
}
```

### Contributions

Thanks to [@sanchit-gandhi](https://huggingface.co/sanchit-gandhi) for adding this dataset.