Merge branch 'main' of https://huggingface.co/datasets/IVLLab/MultiDialog into main
README.md CHANGED
@@ -16,16 +16,74 @@ size_categories:
# path: test_freq/*, metadata.jsonl
---
-# Dataset Description
-This dataset includes manually annotated metadata linking audio files to transcriptions, emotions, and other attributes. The `test_freq.parquet` file contains these links and metadata.
-
-## Data Fields
-- `id`: unique identifier for each conversation.
-- 'utterance' : uterrance index.
-- `from`: who the message is from (human, gpt)
-- `value`: the text of the utterance.
-- `emotion`: the emotion of the utterance.
-- `audpath`: path to the associated audio file.
+## Dataset Description
+
+- **Homepage:** https://multidialog.github.io
+- **Repository:** https://github.com/MultiDialog/MultiDialog
+- **Paper:** https://arxiv.org/abs/2106.06909
+- **Point of Contact:** [jinny960812@kaist.ac.kr](mailto:jinny960812@kaist.ac.kr)
+- **Point of Contact:** [chaewonkim@kaist.ac.kr](mailto:chaewonkim@kaist.ac.kr)
+
+### Dataset Summary
+This dataset includes manually annotated metadata linking audio files to transcriptions, emotions, and other attributes.
+
+### Example Usage
+There are `train`, `test_freq`, `test_rare`, `valid_freq`, and `valid_rare` splits. Example usage is shown below.
+```python
+from datasets import load_dataset
+
+MultiD = load_dataset("IVLLab/MultiDialog", "valid_freq", use_auth_token=True)
+
+# inspect the dataset structure
+print(MultiD)
+
+# load an audio sample on the fly
+audio_input = MultiD["valid_freq"][0]["audio"]    # first decoded audio sample
+transcription = MultiD["valid_freq"][0]["value"]  # first transcription
+```
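+
+The split can also be read without downloading the archives up front. A minimal sketch, assuming streaming mode is supported for this repository (field names follow the schema documented below):
+```python
+from datasets import load_dataset
+
+# stream examples lazily instead of materializing the whole split on disk
+MultiD_stream = load_dataset("IVLLab/MultiDialog", "valid_freq", streaming=True)
+sample = next(iter(MultiD_stream["valid_freq"]))  # first example, fetched on demand
+print(sample["value"], sample["emotion"])
+```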
+
+### Supported Tasks
+- `multimodal dialogue generation`
+- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR); see the sketch after this list.
+- `text-to-speech`: The dataset can also be used to train a model for Text-To-Speech (TTS).
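+
+A minimal sketch of turning examples into (waveform, text) pairs for ASR training, under the schema documented below; the helper and output column names here are illustrative, not part of the dataset:
+```python
+from datasets import load_dataset
+
+MultiD = load_dataset("IVLLab/MultiDialog", "valid_freq", use_auth_token=True)
+
+def to_asr_example(example):
+    # keep only what an ASR trainer needs: waveform, rate, and target text
+    return {
+        "speech": example["audio"]["array"],
+        "sampling_rate": example["audio"]["sampling_rate"],
+        "text": example["value"],
+    }
+
+asr_data = MultiD["valid_freq"].map(to_asr_example)
+```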
+
+### Languages
+MultiDialog contains audio and transcription data in English.
+
+## Dataset Structure
+
+### Data Instances
+```python
+{
+  'conv_id': 't_ffa55df6-114d-4b36-87a1-7af6b8b63d9b',
+  'utterance_id': 0,
+  'from': 'gpt',
+  'audio': {
+    # in streaming mode 'path' will be 'xs_chunks_0000/YOU0000000315_S0000660.wav'
+    'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/9d48cf31/xs_chunks_0000/YOU0000000315_S0000660.wav',
+    'array': array([0.0005188 , 0.00085449, 0.00012207, ..., 0.00125122, 0.00076294, 0.00036621], dtype=float32),
+    'sampling_rate': 16000
+  },
+  'value': 'Are you a football fan?',
+  'emotion': 'Neutral',
+  'original_full_path': 'audio/youtube/P0004/YOU0000000315.opus'
+}
+```
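+
+Because `audio` is an `Audio` feature, it can be re-decoded at a different sampling rate by casting the column. A brief sketch reusing `MultiD` from the usage example above; the 24 kHz target is arbitrary, chosen only for illustration:
+```python
+from datasets import Audio
+
+# re-decode audio at 24 kHz on access instead of the stored 16 kHz
+MultiD["valid_freq"] = MultiD["valid_freq"].cast_column("audio", Audio(sampling_rate=24000))
+```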
+
+### Data Fields
+* conv_id (string) - unique identifier for each conversation.
+* utterance_id (float) - utterance index.
+* from (string) - who the message is from (human, gpt).
+* audio (Audio feature) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio file. In streaming mode, the path is the relative path of an audio segment inside its archive (as files are not downloaded and extracted locally).
+* value (string) - transcription of the utterance.
+* emotion (string) - the emotion of the utterance.
+* original_full_path (string) - the relative path to the original full audio sample in the original data directory.
+
+Emotion is assigned from the following labels:
+"Neutral", "Happy", "Fear", "Angry", "Disgusting", "Surprising", "Sad"