---
license: cc-by-4.0
task_categories:
- image-classification
language:
- en
tags:
- vlol
- v-lol
- ILP
- visual logical learning
- visual reasoning
- logic
- Inductive logic programming
- Symbolic AI
pretty_name: V-LoL
size_categories:
- 10K<n<100K
---
# Dataset Card for V-LoL

## Dataset Description

- **Homepage:** https://sites.google.com/view/v-lol/home
- **Repository:** https://github.com/ml-research/vlol-dataset-gen
- **Paper:** https://arxiv.org/abs/2306.07743
- **Point of Contact:** lukas_henrik.helff@tu-darmstadt.de

### Dataset Summary

This diagnostic dataset ([website](https://sites.google.com/view/v-lol), [paper](https://doi.org/10.48550/arXiv.2306.07743)) is specifically designed to evaluate the visual logical learning capabilities of machine learning models.
It offers a seamless integration of visual and logical challenges, providing 2D images of complex visual trains,
where the classification is derived from rule-based logic.
The fundamental idea of V-LoL is to integrate the explicit logical learning tasks of classic symbolic AI benchmarks into visually complex scenes,
creating a unique visual input that retains the challenges and versatility of explicit logic.
In doing so, V-LoL bridges the gap between symbolic AI challenges and contemporary deep learning datasets, offering various visual logical learning tasks
that pose challenges for AI models across a wide spectrum of AI research, from symbolic to neural and neuro-symbolic AI.
Moreover, we provide a flexible dataset generator ([GitHub](https://github.com/ml-research/vlol-dataset-gen)) that
empowers researchers to easily exchange or modify the logical rules, enabling the creation of new datasets incorporating novel logical learning challenges.
By combining visual input with logical reasoning, this dataset serves as a comprehensive benchmark for assessing the ability
of machine learning models to learn and apply logical reasoning within a visual context.

### Supported Tasks and Leaderboards

We offer a diverse set of datasets that present challenging AI tasks targeting various reasoning abilities.
The following provides an overview of the available V-LoL challenges and corresponding dataset splits.

| V-LoL Challenges            | Train set | Validation set  |  # of train samples |  # of validation samples |
| ---         | ---   | -----------         | ---        |  -----------             |
| V-LoL-Trains-TheoryX | V-LoL-Trains-TheoryX | V-LoL-Trains-TheoryX |  10000 | 2000  |
| V-LoL-Trains-Numerical | V-LoL-Trains-Numerical | V-LoL-Trains-Numerical |  10000 | 2000  |
| V-LoL-Trains-Complex   | V-LoL-Trains-Complex | V-LoL-Trains-Complex |  10000 | 2000  |
| V-LoL-Blocks-TheoryX   | V-LoL-Blocks-TheoryX | V-LoL-Blocks-TheoryX |  10000 | 2000  |
| V-LoL-Blocks-Numerical | V-LoL-Blocks-Numerical | V-LoL-Blocks-Numerical |  10000 | 2000  |
| V-LoL-Blocks-Complex   | V-LoL-Blocks-Complex | V-LoL-Blocks-Complex |  10000 | 2000  |
| V-LoL-Trains-TheoryX-len7   | V-LoL-Trains-TheoryX | V-LoL-Trains-TheoryX-len7 |  12000 | 2000  |
| V-LoL-Trains-Numerical-len7 | V-LoL-Trains-Numerical | V-LoL-Trains-Numerical-len7 |  12000 | 2000  |
| V-LoL-Trains-Complex-len7   | V-LoL-Trains-Complex | V-LoL-Trains-Complex-len7 |  12000 | 2000  |
| V-LoL-Random-Trains-TheoryX | V-LoL-Trains-TheoryX | V-LoL-Random-Trains-TheoryX |  12000 | 12000  |
| V-LoL-Random-Blocks-TheoryX | V-LoL-Blocks-TheoryX | V-LoL-Random-Blocks-TheoryX |  12000 | 12000  |

The following gives more detailed explanations of the different V-LoL challenges:

Logical complexity:

- Theory X (marked 'TheoryX'): The train has either a short, closed car or a car with a barrel load is somewhere behind a car with a golden vase load. This rule was originally introduced as "Theory X" in the new East-West Challenge.

- Numerical rule (marked 'Numerical'): The train has a car whose position equals both its number of payloads and its number of wheel axles.

- Complex rule (marked 'Complex'): Either there is a car whose position number is smaller than both its number of wheel axles and its number of loads, or there is a short and a long car of the same colour where the position number of the short car is smaller than the number of wheel axles of the long car, or the train has three differently coloured cars. We refer to Tab. 3 in the supp. for more insights on the required reasoning properties for each rule.
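
As a rough illustration, the numerical rule above could be checked on a symbolic train representation along the following lines. This is a minimal sketch: the dict field names (`position`, `num_payloads`, `num_wheel_axles`) are assumptions for illustration, not the dataset's actual schema.

```python
# Hedged sketch of the 'Numerical' rule: a train is classified positive
# if some car's position equals both its payload count and its wheel-axle
# count. Field names are illustrative, not the dataset's actual schema.
def numerical_rule(train):
    return any(
        car["position"] == car["num_payloads"] == car["num_wheel_axles"]
        for car in train
    )

positive = [
    {"position": 1, "num_payloads": 3, "num_wheel_axles": 2},
    {"position": 2, "num_payloads": 2, "num_wheel_axles": 2},  # 2 == 2 == 2
]
negative = [
    {"position": 1, "num_payloads": 2, "num_wheel_axles": 3},
]
```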

Visual complexity:

- Realistic train representations. (marked 'Trains')
- Block representations. (marked 'Blocks')

OOD Trains:

- A train carrying 2-4 cars. (default)
- A train carrying 7 cars.  (marked 'len7')

Train attribute distributions:

- Michalski attribute distribution.  (default)
- Random attribute distribution.  (marked 'Random')

### Languages

English

## Dataset Structure

### Data Instances

```
{
  'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=480x270 at 0x1351D0EE0>,
  'label': 1
}
```


### Data Fields

The data instances have the following fields:

- image: a `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is decoded automatically.
    Decoding a large number of image files can take a significant amount of time.
    It is therefore important to query the sample index before the `"image"` column, i.e. `dataset[0]["image"]` should always be preferred over `dataset["image"][0]`.
- label: an `int` classification label.

Class labels mapping:
| ID | Class |
| --- | ----------- |
| 0 | Westbound |
| 1 | Eastbound |
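
The index-first access pattern recommended above can be sketched as follows. To stay self-contained, a plain list of records stands in for a 🤗 `datasets.Dataset` (in practice you would call `datasets.load_dataset` on this repository); the file names are made up for illustration.

```python
# Minimal sketch: select the sample first, then the column.
# With a real datasets.Dataset, dataset["image"][0] would decode every
# image in the column before indexing, while dataset[0]["image"]
# decodes only the one image that is needed.
dataset = [{"image": f"train_{i}.png", "label": i % 2} for i in range(4)]

id2label = {0: "Westbound", 1: "Eastbound"}

sample = dataset[0]                               # index the sample first ...
image, label = sample["image"], sample["label"]   # ... then pick the columns
direction = id2label[label]
```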


### Data Splits


See the split overview in [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards).



## Dataset Creation

### Curation Rationale

Despite the successes of recent developments in visual AI, several shortcomings remain,
from missing exact logical reasoning, to limited abstract generalization abilities, to difficulties in understanding complex and noisy scenes.
Unfortunately, existing benchmarks were not designed to capture more than a few of these aspects.
Whereas deep learning datasets focus on visually complex data but simple visual reasoning tasks,
inductive logic datasets involve complex logical learning tasks but lack the visual component.
To address this, we propose the visual logical learning dataset, V-LoL, which seamlessly combines visual and logical challenges.
Notably, we introduce the first instantiation of V-LoL, V-LoL-Train, a visual rendition of a classic benchmark in symbolic AI, the Michalski train problem.
By incorporating intricate visual scenes and flexible logical reasoning tasks within a versatile framework,
V-LoL-Train provides a platform for investigating a wide range of visual logical learning challenges.
To create new V-LoL challenges, we provide a comprehensive guide and resources in our [GitHub repository](https://github.com/ml-research/vlol-dataset-gen).
The repository offers a collection of tools and code that enable researchers and practitioners to easily generate new V-LoL challenges based on their specific requirements. By referring to our GitHub repository, users can access the necessary documentation, code samples, and instructions to create and customize their own V-LoL challenges. 

### Source Data

#### Initial Data Collection and Normalization

The individual datasets are generated using the V-LoL-Train generator. See [GitHub repository](https://github.com/ml-research/vlol-dataset-gen).


#### Who are the source language producers?

See [GitHub repository](https://github.com/ml-research/vlol-dataset-gen).

### Annotations

#### Annotation process

The images are generated in two steps: first sampling a valid symbolic representation of a train and then visualizing it within a 3D scene.
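
The two-step structure described above could be sketched roughly as follows. All function and attribute names here are illustrative assumptions, not the actual generator's API; in the real pipeline, step 1 is validated against the logical rule in Prolog and step 2 is rendered with Blender.

```python
import random

# Hedged sketch of the two-step generation: (1) sample a symbolic train,
# (2) hand it to a renderer. Names and attributes are illustrative only.
def sample_symbolic_train(num_cars, seed=0):
    """Step 1: sample a symbolic train as a list of car-attribute dicts."""
    rng = random.Random(seed)
    return [
        {
            "position": i + 1,
            "length": rng.choice(["short", "long"]),
            "num_payloads": rng.randint(0, 3),
            "num_wheel_axles": rng.choice([2, 3]),
        }
        for i in range(num_cars)
    ]

def render_scene(train):
    """Step 2 stand-in: the real pipeline renders the train in Blender."""
    return f"3D scene with {len(train)} cars"

train = sample_symbolic_train(3)
scene = render_scene(train)
```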

#### Who are the annotators?

Annotations are automatically derived using a Python, Prolog, and Blender pipeline. See the [GitHub repository](https://github.com/ml-research/vlol-dataset-gen).

### Personal and Sensitive Information

The dataset does not contain personal or sensitive information.

## Considerations for Using the Data

### Social Impact of Dataset

The dataset has no social impact.

### Discussion of Biases

Please refer to our paper.

### Other Known Limitations

Please refer to our paper.

## Additional Information

### Dataset Curators

Lukas Helff

### Licensing Information

MIT License

### Citation Information

```bibtex
@misc{helff2023vlol,
      title={V-LoL: A Diagnostic Dataset for Visual Logical Learning},
      author={Lukas Helff and Wolfgang Stammer and Hikaru Shindo and Devendra Singh Dhami and Kristian Kersting},
      journal={Dataset available from https://sites.google.com/view/v-lol},
      year={2023},
      eprint={2306.07743},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```

### Contributions

Lukas Helff, Wolfgang Stammer, Hikaru Shindo, Devendra Singh Dhami, Kristian Kersting