Merge branch 'main' of https://huggingface.co/datasets/PolyAI/evi into main
README.md CHANGED
- **Paper:** [EVI: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification](https://arxiv.org/abs/2204.13496)
- **Repository:** [Github](https://github.com/PolyAI-LDN/evi-paper)

EVI is a challenging spoken multilingual dataset with 5,506 dialogues in English, Polish, and French that can be used for benchmarking and developing knowledge-based enrolment, verification, and identification for spoken dialogue systems.

## Example

EVI can be downloaded and used as follows:

```
from datasets import load_dataset

evi = load_dataset("PolyAI/evi", "en-GB") # for British English

# to download data from all locales use:
# evi = load_dataset("PolyAI/evi", "all")

# see structure
print(evi)
```
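
Once loaded, individual turns can be inspected directly. Below is a minimal sketch, assuming the single `"test"` split and the field names documented later in this card; the choice of three turns is just for illustration:

```
from datasets import load_dataset

evi = load_dataset("PolyAI/evi", "en-GB")

# look at the first few turns of the test split
for example in evi["test"].select(range(3)):
    audio = example["audio"]  # dict with "array", "sampling_rate" and "path"
    print(example["dialogue_id"], example["turn_id"], example["asr_transcription"])
    print(len(audio["array"]), audio["sampling_rate"])
```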

## Dataset Structure

We show detailed information of the example for the `en-GB` configuration of the dataset. All other configurations have the same structure.

### Data Instances

An example of a data instance of the config `en-GB` looks as follows:

```
{
    "language": 0,
    "dialogue_id": "CA0007220161df7be23f4554704c8720f5",
    "speaker_id": "e80e9bdd33eda593f16a1b6f2fb228ff",
    "turn_id": 0,
    "target_profile_id": "en.GB.608",
    "asr_transcription": "w20 a b",
    "asr_nbest": ["w20 a b", "w20 a bee", "w20 a baby"],
    "path": "audios/en/CA0007220161df7be23f4554704c8720f5/0.wav",
    "audio": {
        "path": "/home/georgios/.cache/huggingface/datasets/downloads/extracted/0335ebc25feace53243133b49ba17ba18e26f0f97cb083ffdf4e73dd7427b443/audios/en/CA0007220161df7be23f4554704c8720f5/0.wav",
        "array": array([ 0.00024414,  0.00024414,  0.00024414, ...,  0.00024414,
                        -0.00024414,  0.00024414], dtype=float32),
        "sampling_rate": 8000,
    }
}
```

### Data Fields

The data fields are the same among all splits.

- **language** (int): ID of the language
- **dialogue_id** (str): the ID of the dialogue
- **speaker_id** (str): the ID of the speaker
- **turn_id** (int): the ID of the turn
- **target_profile_id** (str): the ID of the target profile
- **asr_transcription** (str): ASR transcription of the audio file
- **asr_nbest** (list): n-best ASR transcriptions of the audio file
- **path** (str): path to the audio file
- **audio** (dict): audio object including the loaded audio array, the sampling rate, and the path of the audio file
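
The audio in the instance above is stored at 8 kHz. If a different rate is needed, the `datasets` `Audio` feature can decode the `audio` column on the fly; a minimal sketch, where the 16 kHz target rate is only an illustrative assumption:

```
from datasets import load_dataset, Audio

evi = load_dataset("PolyAI/evi", "en-GB")

# decode the "audio" column at 16 kHz instead of the stored 8 kHz
evi = evi.cast_column("audio", Audio(sampling_rate=16000))

example = evi["test"][0]
print(example["audio"]["sampling_rate"])  # 16000
```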

### Data Splits

Every config only has the `"test"` split containing *ca.* 1,800 dialogues.
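
Since each row of the split is a single turn, the dialogue count can be checked by grouping rows on `dialogue_id`; a small sketch under that assumption:

```
from datasets import load_dataset

evi = load_dataset("PolyAI/evi", "en-GB")

test = evi["test"]
num_turns = test.num_rows                        # rows are individual turns
num_dialogues = len(set(test["dialogue_id"]))    # dialogues group several turns
print(num_turns, num_dialogues)
```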

## Dataset Creation

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).

### Citation Information

```
...
    url = {https://arxiv.org/abs/2204.13496},
    booktitle = {Findings of NAACL (publication pending)}
}
```

### Contributions

Thanks to [@polinaeterna](https://github.com/polinaeterna) for helping with adding this dataset.