---
license: cc-by-nc-sa-4.0
task_categories:
- zero-shot-classification
- zero-shot-image-classification
language:
- ar
- el
- en
- hi
- ja
- ko
- te
- th
- uk
- zh
tags:
- multimodal
- representation learning
- multilingual
pretty_name: Symile-M3
size_categories:
- 10M<n<100M
---

# Dataset Card for Symile-M3

Symile-M3 is a multilingual dataset of (audio, image, text) samples. The dataset is specifically designed to test a model's ability to capture higher-order information among three distinct high-dimensional data types: by incorporating multiple languages, we construct a task where text and audio are both needed to predict the image, and where, importantly, neither text nor audio alone would suffice.

- Paper: https://arxiv.org/abs/2411.01053
- GitHub: https://github.com/rajesh-lab/symile
- Questions & Discussion: https://www.alphaxiv.org/abs/2411.01053v1

## Overview

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/66d8e34b27d76ef6e481c2b5/mR0kJkgVyUK5rTNUOCOFx.jpeg)

Let `w` represent the number of languages in the dataset (`w=2`, `w=5`, and `w=10` correspond to Symile-M3-2, Symile-M3-5, and Symile-M3-10, respectively). An (audio, image, text) sample is generated as follows. First, a short one-sentence audio clip is drawn from [Common Voice](https://commonvoice.mozilla.org/en/datasets), spoken in one of the `w` languages with equal probability. Next, an image is drawn from [ImageNet](https://www.image-net.org/), corresponding to one of 1,000 classes with equal probability. Finally, text containing exactly `w` words is generated based on the drawn audio and image: one of the `w` words is the drawn image's class name written in the drawn audio's language. The remaining `w-1` words are randomly chosen from the ImageNet class names and written in one of the `w` languages such that no language or class name repeats across the `w` words in the text. The words are separated by underscores, and their order is randomized.
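
The text-generation procedure above can be sketched as follows. This is a minimal illustration over a hypothetical, abbreviated translations table (the entries below are illustrative stand-ins, not values from the real `translations.json`, which covers all 1,000 ImageNet classes):

```python
import random

# Hypothetical, abbreviated translations table: class name -> {language: translation}.
# The real dataset derives this mapping from translations.json.
TRANSLATIONS = {
    "dome":  {"en": "dome",  "el": "θόλος",          "hi": "गुंबद", "ja": "ドーム", "uk": "купол"},
    "gown":  {"en": "gown",  "el": "τουαλέτα",       "hi": "गाउन", "ja": "ガウン", "uk": "сукня"},
    "tench": {"en": "tench", "el": "είδος κυπρίνου", "hi": "टेंच",  "ja": "テンチ", "uk": "линь"},
    "dam":   {"en": "dam",   "el": "φράγμα",         "hi": "बांध",  "ja": "ダム",   "uk": "дамба"},
    "drum":  {"en": "drum",  "el": "τύμπανο",        "hi": "ढोल",   "ja": "ドラム", "uk": "барабан"},
}

def make_text(image_cls: str, audio_lang: str, languages: list[str]) -> str:
    """Build the w-word text: the image class in the audio language, plus
    w-1 distractor classes, each in a distinct other language."""
    w = len(languages)
    # No class name and no language may repeat across the w words.
    distractor_classes = random.sample(
        [c for c in TRANSLATIONS if c != image_cls], w - 1)
    distractor_langs = random.sample(
        [l for l in languages if l != audio_lang], w - 1)
    words = [TRANSLATIONS[image_cls][audio_lang]] + [
        TRANSLATIONS[c][l] for c, l in zip(distractor_classes, distractor_langs)]
    random.shuffle(words)  # word order carries no information
    return "_".join(words)

text = make_text("dome", "ja", ["en", "el", "hi", "ja", "uk"])
```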

## Tasks

The dataset was designed to evaluate a model on the zero-shot retrieval task of finding an image of the appropriate class given the audio and text. The most probable image for a given query (audio, text) pair, selected from all candidate images in the test set, is the one with the highest similarity score.

The dataset was designed to ensure that neither text nor audio alone would suffice to predict the image. Therefore, success on this zero-shot retrieval task hinges on a model's ability to capture joint information among the three modalities.
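
As a sketch of this retrieval step: the Symile paper scores a triple with a multilinear inner product (the sum, over embedding dimensions, of the elementwise product of the three embeddings), and the retrieved image is the candidate with the highest score. A minimal NumPy illustration using random stand-in embeddings (in practice these come from the trained audio, image, and text encoders):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_candidates = 64, 1000

# Hypothetical embeddings standing in for encoder outputs.
audio_emb = rng.standard_normal(d)
text_emb = rng.standard_normal(d)
image_embs = rng.standard_normal((n_candidates, d))  # all candidate test images

# Multilinear inner product: sum_k a_k * t_k * v_k, for each candidate image v.
scores = image_embs @ (audio_emb * text_emb)

predicted = int(np.argmax(scores))  # index of the retrieved image
```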

### Dataset Structure

Each sample in the dataset is a dictionary containing the following fields:

```python
{
    # language code of the audio clip
    'lang': 'ja',

    # audio data
    'audio': {
        'path': 'common_voice_ja_39019065.mp3',  # Common Voice filename
        'array': array([0.00000000e+00, ..., 7.78421963e-06]),  # raw audio waveform
        'sampling_rate': 32000  # sampling rate in Hz
    },

    # image as a PIL Image object (RGB, size varies)
    'image': <PIL.JpegImageFile image mode=RGB size=500x375>,

    # text containing w words (one per language) separated by underscores
    'text': 'σπιτάκι πουλιών_ドーム_प्रयोगशाला कोट_мавпа-павук_gown',

    # target word class name in English (key in translations.json)
    'cls': 'dome',

    # class ID from translations.json (0 to 999)
    'cls_id': 538,

    # target word (class name in the language of the audio)
    'target_text': 'ドーム'
}
```

The dataset includes a `translations.json` file that maps ImageNet class names across all supported languages. Each entry contains:
- The English class name as the key
- Translations for all supported languages (`ar`, `el`, `en`, `hi`, `ja`, `ko`, `te`, `th`, `uk`, `zh-CN`)
- The ImageNet synset ID
- A unique class ID (0-999)

Example structure:
```json
{
    "tench": {
        "synset_id": "n01440764",
        "cls_id": 0,
        "ar": "سمك البنش",
        "el": "είδος κυπρίνου",
        "en": "tench",
        "hi": "टेंच",
        "ja": "テンチ",
        "ko": "텐치",
        "te": "టెంచ్",
        "th": "ปลาเทนช์",
        "uk": "линь",
        "zh-CN": "丁鱥"
    }
}
```
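
A small sketch of reading this mapping, e.g. to recover a sample's `target_text` from its `cls` and `lang` fields (shown here with an inline single-entry JSON string as a stand-in for the actual `translations.json` file, which you would read with `json.load`):

```python
import json

# Inline stand-in for translations.json, abbreviated to one entry and three languages.
translations = json.loads("""
{
  "tench": {
    "synset_id": "n01440764",
    "cls_id": 0,
    "en": "tench",
    "ja": "テンチ",
    "uk": "линь"
  }
}
""")

def target_word(cls: str, lang: str) -> str:
    """Look up the class name translated into the audio language."""
    return translations[cls][lang]
```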

## Dataset Variants

We release three variants of the dataset:
- Symile-M3-2 with 2 languages: English (`en`) and Greek (`el`).
- Symile-M3-5 with 5 languages: English (`en`), Greek (`el`), Hindi (`hi`), Japanese (`ja`), and Ukrainian (`uk`).
- Symile-M3-10 with 10 languages: Arabic (`ar`), Greek (`el`), English (`en`), Hindi (`hi`), Japanese (`ja`), Korean (`ko`), Telugu (`te`), Thai (`th`), Ukrainian (`uk`), and Chinese (`zh-CN`).

Each variant is available in four sizes:
- Large (`l`): 10M training samples, 500K validation samples, 500K test samples
- Medium (`m`): 5M training samples, 250K validation samples, 250K test samples
- Small (`s`): 1M training samples, 50K validation samples, 50K test samples
- Extra Small (`xs`): 500K training samples, 25K validation samples, 25K test samples

## Usage

Before using the dataset, ensure you have the required audio and image processing libraries installed:

```bash
pip install librosa soundfile pillow
```

To load a specific version of Symile-M3, use a configuration name following the pattern `symile-m3-{num_langs}-{size}`, where:
- `num_langs` is `2`, `5`, or `10`
- `size` is `xs`, `s`, `m`, or `l`

For example, to load the `xs` version of Symile-M3-5:

```python
from datasets import load_dataset

dataset = load_dataset("arsaporta/symile-m3", "symile-m3-5-xs")

print(dataset['train'][0])    # access first train sample
print(len(dataset['train']))  # get number of train samples
```

To process the dataset without loading it entirely into memory, use streaming mode to load samples one at a time:

```python
from datasets import load_dataset

dataset = load_dataset("arsaporta/symile-m3", "symile-m3-5-xs", streaming=True)

print(next(iter(dataset['train'])))
```

To download the dataset for offline use:

```python
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="arsaporta/symile-m3",
    repo_type="dataset",
    local_dir="./symile_data",            # where to save
    allow_patterns="symile-m3-5-xs/*",    # which configuration to download
)
```

## Citation

```
@inproceedings{saporta2024symile,
  title = {Contrasting with Symile: Simple Model-Agnostic Representation Learning for Unlimited Modalities},
  author = {Saporta, Adriel and Puli, Aahlad and Goldstein, Mark and Ranganath, Rajesh},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2024}
}
```