# EDEN ASR Dataset

A subset of this data was used to support the development of empathetic feedback modules in [EDEN](https://arxiv.org/abs/2406.17982) and [its prior work](https://arxiv.org/abs/2404.13764).

The dataset contains audio clips of native Mandarin speakers. The speakers conversed with a chatbot hosted on an [English practice platform](https://dl.acm.org/doi/abs/10.1145/3491140.3528329?casa_token=ER-mfy0xauQAAAAA:FyDgmH0Y0ke7a6jpOnuycP1HRfeV1B5qaq5JWM5OV5dB9fLFL_vzVRUacZ4fUMRBDl71UeWMIA9Z).
After filtering, 3,081 audio clips from 613 conversations and 163 users remained. The filtering removed audio clips containing only Mandarin, duplicates, and a subset of user self-introductions. Each audio clip ranges from one second to two minutes. We did not collect demographic information, to protect user identities.

In our original work, we transcribed the speech directly with Whisper Medium. However, because the clips contain accented speech, the raw transcripts include ASR errors. We have **manually verified** the original transcripts to ensure they are high quality.
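To quantify how far a raw Whisper transcript drifts from its corrected counterpart, one common measure is word error rate (WER). Below is a minimal sketch using a plain word-level edit distance; the transcript strings are hypothetical examples, not entries from this dataset.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical pair: corrected transcript vs. raw Whisper output
print(wer("i like to play basketball", "i like to play basket ball"))  # → 0.4
```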

## Dataset Columns

* **audio_url**: The URL of the audio clip; you can download the clips to your local machine with wget or urllib (see the code snippet below).
* **emotion_label**: We manually labeled a subset of the clips as **Neutral** (neutral emotion), **Negative** (the speaker displays negative emotion), or **Pauses** (the speech contains many pauses, potentially signaling language anxiety). The labeling process is documented in [EDEN's prior work](https://arxiv.org/abs/2404.13764). When this field is empty, the clip was not labeled.
* **corrected_whisper_transcript**: The verified, high-quality transcript; verification was performed by a native English speaker.
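As an illustration of fetching a clip from its **audio_url** with Python's standard library alone, here is a small sketch; the example URL is a placeholder, not a real entry from the dataset.

```python
import urllib.request
from pathlib import Path

def local_name(audio_url: str) -> str:
    """Derive a local filename from the last path segment of the URL."""
    return audio_url.rstrip("/").rsplit("/", 1)[-1]

def download_clip(audio_url: str, out_dir: str = "clips") -> Path:
    """Download one audio clip into out_dir and return its local path."""
    Path(out_dir).mkdir(exist_ok=True)
    target = Path(out_dir) / local_name(audio_url)
    urllib.request.urlretrieve(audio_url, target)
    return target

# Placeholder URL for illustration only:
# clip_path = download_clip("https://example.com/audio/clip_0001.wav")
```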

## Intended Use

Accented ASR research!

## Code Example
```python
# Extract the emotion label
print(dataset[0]["emotion_label"])

# Check audio clips labeled as having negative emotions
negative_emotion_clips = dataset.filter(lambda example: example["emotion_label"] == "Negative")
print(len(negative_emotion_clips))
print(negative_emotion_clips[0])
```
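The `dataset.filter` call above uses the Hugging Face `datasets` API. If that library is unavailable, the same predicate can be reproduced over a plain list of rows; the toy rows below are hypothetical stand-ins that only mimic the dataset's schema.

```python
# Toy stand-in rows illustrating the dataset schema (hypothetical values)
rows = [
    {"audio_url": "https://example.com/a.wav", "emotion_label": "Negative",
     "corrected_whisper_transcript": "i was very nervous"},
    {"audio_url": "https://example.com/b.wav", "emotion_label": "Neutral",
     "corrected_whisper_transcript": "my hobby is hiking"},
    {"audio_url": "https://example.com/c.wav", "emotion_label": "",
     "corrected_whisper_transcript": "nice to meet you"},
]

# Same predicate as the dataset.filter example, over plain dicts
negative = [r for r in rows if r["emotion_label"] == "Negative"]
print(len(negative))  # → 1
```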