---
language:
- en
license:
- cc-by-nc-4.0
tags:
- object-centric learning
size_categories:
- 10K<n<100K
task_categories:
- image-segmentation
paperswithcode_id: octscenes
dataset_info:
  features:
    - name: scene_id
      dtype: string
    - name: frame_id
      dtype: string
    - name: resolution
      dtype: string
    - name: image
      dtype: image
    - name: depth
      dtype: image
    - name: segment
      dtype: image
    - name: intrinsic_matrix
      dtype: array
    - name: camera_pose
      dtype: array
  configs:
    - config_name: OCTScenes-A
      splits:
        - name: train
          num_examples: 3000
        - name: validation
          num_examples: 100
        - name: test
          num_examples: 100
    - config_name: OCTScenes-B
      splits:
        - name: train
          num_examples: 4800
        - name: validation
          num_examples: 100
        - name: test
          num_examples: 100
    
viewer: false
---

# OCTScenes: A Versatile Real-World Dataset of Tabletop Scenes for Object-Centric Learning

## Dataset Description

- **Paper:** [OCTScenes: A Versatile Real-World Dataset of Tabletop Scenes for Object-Centric Learning](https://arxiv.org/abs/2306.09682)
- **Team:** [FudanVI](https://github.com/FudanVI)
- **Point of Contact:** [Yinxuan Huang](mailto:yxhuang22@m.fudan.edu.cn)

### Dataset Summary

The OCTScenes dataset is a versatile real-world dataset of tabletop scenes for object-centric learning. It contains 5000 tabletop scenes featuring a total of 15 object types, and each scene is captured in 60 frames covering a 360-degree perspective. The dataset supports the evaluation of object-centric learning methods based on single images, videos, and multiple views.

The 15 distinct types of objects are shown in Figure 1, and some examples of data are shown in Figure 2.

![Figure 1](assets/objects.png)

<p align="center">Figure 1: Objects of the dataset.</p>

![Figure 2](assets/datasets.png)

<p align="center">Figure 2: Examples of images, depth maps, and segmentation maps of the dataset.</p>

### Supported Tasks and Leaderboards

- `object-centric learning`: The dataset can be used to train models for [object-centric learning](https://arxiv.org/abs/2202.07135), which aims to learn compositional scene representations in an unsupervised manner. Segmentation performance is measured by Adjusted Mutual Information (AMI), Adjusted Rand Index (ARI), and mean Intersection over Union (mIoU). Two variants of AMI and ARI are used to evaluate segmentation more thoroughly: AMI-A and ARI-A are computed over all pixels in the image and measure how accurately the different layers of visual concepts (both objects and the background) are separated, whereas AMI-O and ARI-O are computed only over pixels in object regions and focus on how accurately the different objects are separated. Reconstruction performance is measured by Mean Squared Error (MSE) and Learned Perceptual Image Patch Similarity (LPIPS). Success on this task is typically measured by achieving high AMI, ARI, and mIoU together with low MSE and LPIPS.
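
As an illustration of how these metrics can be computed, below is a minimal sketch (not the authors' evaluation code) using scikit-learn and NumPy. The -A and -O variants differ only in whether background pixels are masked out; LPIPS would additionally require a perceptual-similarity package such as `lpips`, and the exact matching rule used for mIoU may differ from the simple greedy variant shown here.

```python
# Minimal metric sketch (not the official evaluation code).
# Assumes `true_seg` and `pred_seg` are 2-D integer label maps of the same
# shape, with 0 denoting the background, and that images are float arrays in [0, 1].
import numpy as np
from sklearn.metrics import adjusted_mutual_info_score, adjusted_rand_score


def segmentation_metrics(true_seg: np.ndarray, pred_seg: np.ndarray) -> dict:
    t, p = true_seg.ravel(), pred_seg.ravel()
    fg = t > 0  # object pixels only, for the -O variants
    return {
        "AMI-A": adjusted_mutual_info_score(t, p),
        "ARI-A": adjusted_rand_score(t, p),
        "AMI-O": adjusted_mutual_info_score(t[fg], p[fg]),
        "ARI-O": adjusted_rand_score(t[fg], p[fg]),
    }


def mean_iou(true_seg: np.ndarray, pred_seg: np.ndarray) -> float:
    # Mean IoU over ground-truth labels, matching each label to its
    # best-overlapping predicted label (a simple greedy variant).
    ious = []
    for label in np.unique(true_seg):
        gt_mask = true_seg == label
        best = 0.0
        for pred_label in np.unique(pred_seg):
            pr_mask = pred_seg == pred_label
            inter = np.logical_and(gt_mask, pr_mask).sum()
            union = np.logical_or(gt_mask, pr_mask).sum()
            best = max(best, inter / union if union else 0.0)
        ious.append(best)
    return float(np.mean(ious))


def mse(image: np.ndarray, reconstruction: np.ndarray) -> float:
    return float(np.mean((image - reconstruction) ** 2))
```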

### Languages

English.

## Dataset Structure

We provide images of three different resolutions for each scene: 640x480, 256x256, and 128x128. The name of each image is in the form `[scene_id]_[frame_id].png`. They are available in `./640x480`, `./256x256`, and `./128x128`, respectively.

The images are packed into compressed `tar` archives that are split into multiple parts, with file names starting with the resolution, e.g. `image_128x128_`. Please download all parts of an archive, merge them, and then use the `tar` command to decompress the result.

For example, for the 128x128 resolution images, please download all the scene files starting with `image_128x128_*` and then merge the files into `image_128x128.tar.gz`:

```
cat image_128x128_* > image_128x128.tar.gz
```

And then decompress the file:

```
tar xvzf image_128x128.tar.gz
```
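
After decompression, frames can be enumerated directly from the folder layout. The helper below is a hypothetical sketch, assuming images were extracted under `./128x128` and follow the `[scene_id]_[frame_id].png` naming convention described above.

```python
# Hypothetical helper for enumerating decompressed frames; assumes the
# ./128x128 layout and the [scene_id]_[frame_id].png naming described above.
from pathlib import Path


def list_frames(root: str = "./128x128"):
    frames = []
    for path in sorted(Path(root).rglob("*.png")):
        scene_id, frame_id = path.stem.split("_", 1)
        frames.append({"scene_id": scene_id, "frame_id": frame_id, "path": path})
    return frames


frames = list_frames()
print(f"{len(frames)} frames found")
```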

### Data Instances

Each data instance contains an RGB image, its depth map, its camera intrinsic matrix, its camera pose, and its segmentation map. The segmentation map is `None` in the training and validation sets.

### Data Fields

- `scene_id`: a string scene identifier for each example
- `frame_id`: a string frame identifier for each example
- `resolution`: a string for the image resolution of each example (e.g. 640x480, 256x256, 128x128)
- `image`: a `PIL.Image.Image` object containing the image
- `depth`: a `PIL.Image.Image` object containing the depth map
- `segment`: a `PIL.Image.Image` object containing the segmentation map, where the integer value of each pixel represents the index of an object (ranging from 1 to 10), with 0 representing the background (see the usage sketch after this list)
- `intrinsic_matrix`: a `numpy.ndarray` for the camera intrinsic matrix of each image
- `camera_pose`: a `numpy.ndarray` for the camera pose of each image
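
As a small illustration of how these fields might be used, the sketch below assumes an example is already loaded as a Python dict with the fields above and converts the segmentation map into per-object binary masks.

```python
# Illustrative sketch only; assumes `example` is a dict holding the fields
# documented above, with PIL images for `image`, `depth`, and `segment`.
import numpy as np


def object_masks(example: dict) -> dict:
    seg = np.array(example["segment"])  # integer label map, 0 = background
    labels = [int(label) for label in np.unique(seg) if label != 0]
    return {label: seg == label for label in labels}  # boolean mask per object


# Example usage (hypothetical):
# masks = object_masks(example)
# K = np.asarray(example["intrinsic_matrix"])  # camera intrinsics
# pose = np.asarray(example["camera_pose"])    # camera pose
```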

### Data Splits

The data is split into two subsets to create datasets with different levels of difficulty. Both subsets are randomly divided into training, validation, and testing sets. The validation and testing sets each consist of 100 scenes, while the remaining scenes form the training set. Only the data in the testing sets contain segmentation annotations for evaluation.

OCTScenes-A contains 3200 scenes (`scene_id` from 0000 to 3199) and includes only the first 11 object types, with scenes consisting of 1 to 6 objects, making it comparatively smaller and less complex. Images with `scene_id` ranging from 0000 to 2999 are used for training, images with `scene_id` ranging from 3000 to 3099 are for validation, and images with `scene_id` ranging from 3100 to 3199 are for testing.

OCTScenes-B contains 5000 scenes (`scene_id` from 0000 to 4999) and includes all 15 object types, with scenes consisting of 1 to 10 objects, resulting in a larger and more complex dataset. Images with `scene_id` ranging from 0000 to 4799 are used for training, images with `scene_id` ranging from 4800 to 4899 are for validation, and images with `scene_id` ranging from 4900 to 4999 are for testing.
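
The scene-id ranges above can be turned into a simple lookup. The helper below is a hypothetical sketch (not part of the released dataset code) that maps a `scene_id` to its split for either subset, following the ranges described in this section.

```python
# Hypothetical helper mapping a scene_id to its split, following the
# scene-id ranges described above.
def split_of(scene_id: str, subset: str = "OCTScenes-B") -> str:
    sid = int(scene_id)
    if subset == "OCTScenes-A":
        train_end, val_end, test_end = 2999, 3099, 3199
    else:  # OCTScenes-B
        train_end, val_end, test_end = 4799, 4899, 4999
    if sid <= train_end:
        return "train"
    if sid <= val_end:
        return "validation"
    if sid <= test_end:
        return "test"
    raise ValueError(f"scene_id {scene_id} is outside {subset}")


print(split_of("3050", "OCTScenes-A"))  # "validation"
```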

<table align="center">
  <tr>
    <th style="text-align: center;">Dataset</th>
    <th colspan="3" style="text-align: center;">OCTScenes-A</th>
    <th colspan="3" style="text-align: center;">OCTScenes-B</th>
  </tr>
  <tr>
    <th style="text-align: center;">Resolution</th>
    <td colspan="3" align="center">640x480, 256x256, 128x128</td>
    <td colspan="3" align="center">640x480, 256x256, 128x128</td>
  </tr>
  <tr>
    <th style="text-align: center;">Split</th>
    <td align="center">train</td>
    <td align="center">validation</td>
    <td align="center">test</td>
    <td align="center">train</td>
    <td align="center">validation</td>
    <td align="center">test</td>
  </tr>
  <tr>
    <th style="text-align: center;">Number of scenes</th>
    <td align="center">3000</td>
    <td align="center">100</td>
    <td align="center">100</td>
    <td align="center">4800</td>
    <td align="center">100</td>
    <td align="center">100</td>
  </tr>
  <tr>
    <th style="text-align: center;">Number of object categories</th>
    <td colspan="3" align="center">11</td>
    <td colspan="3" align="center">15</td>
  </tr>
  <tr>
    <th style="text-align: center;">Number of objects in a scene</th>
    <td colspan="3" align="center">1~6</td>
    <td colspan="3" align="center">1~10</td>
  </tr>
  <tr>
    <th style="text-align: center;">Number of views in a scene</th>
    <td colspan="3" align="center">60</td>
    <td colspan="3" align="center">60</td>
  </tr>
</table>


## Dataset Creation

### Curation Rationale

OCTScenes was designed as a novel benchmark for unsupervised object-centric learning. It is a versatile real-world dataset that aims to address the scarcity of real-world datasets specifically tailored to this field.

### Source Data

#### Initial Data Collection and Normalization

A three-wheeled omnidirectional robot equipped with an Orbbec Astra 3D camera was employed for data collection. Collection took place in a school conference room, where a small wooden table was positioned on the floor and surrounded by baffles. Between 1 and 10 randomly selected objects were manually placed on the table without any stacking, and the data was collected directly from these visual scenes.

### Annotations

#### Annotation process

- Segmentation Annotation: We use [EISeg](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.8/EISeg), a high-performance interactive annotation tool for image segmentation, to label the segmentation maps. We manually labeled 6 images of each scene and used them to train a supervised real-time semantic segmentation model, PP-LiteSeg, with the [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg) framework to annotate the rest of the data. The manually annotated images are split into 90% for training and 10% for validation, and the trained model achieves a mean Intersection over Union (mIoU) of 0.92 on the validation set.
- Intrinsic Matrix: We obtained the intrinsic matrix of the camera through camera calibration.
- Camera Pose: We obtained the camera pose of each image through 3D reconstruction using [COLMAP](https://github.com/colmap/colmap), which is commonly used to create real-world NeRF datasets.

#### Who are the annotators?

Some annotations were manually labeled by the authors, while the rest were generated by the trained segmentation model.

### Personal and Sensitive Information

N/A

## Considerations for Using the Data

### Social Impact of Dataset

N/A

### Discussion of Biases

N/A

### Other Known Limitations

The main limitation of the dataset is its simplicity: it has a single background type and uncomplicated object shapes, most of which are symmetrical and lack the variation in orientation that occurs when viewed from different perspectives. As a result, the object representations to be learned are relatively simple, and simpler modeling methods may produce better segmentation results than more complex ones.

To address this limitation and enhance the dataset further, we have devised a plan for the next version of OCTScenes. In future work, we will introduce a wider range of diverse and complex backgrounds, including tables of different types, patterns, and materials, as well as a greater variety of objects, particularly objects with asymmetric shapes, complex textures, and mixed colors, which will increase the complexity and diversity of the dataset.

## Additional Information

### Dataset Curators

The dataset was created by Yinxuan Huang, Tonglin Chen, Zhimeng Shen, Jinghao Huang, Bin Li, and Xiangyang Xue as members of the [Visual Intelligence Lab at Fudan University](https://github.com/FudanVI).

### Licensing Information

The dataset is available under [CC-BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/).

### Citation Information

```
@article{huang2023octscenes,
  title={OCTScenes: A Versatile Real-World Dataset of Tabletop Scenes for Object-Centric Learning},
  author={Huang, Yinxuan and Chen, Tonglin and Shen, Zhimeng and Huang, Jinghao and Li, Bin and Xue, Xiangyang},
  journal={arXiv preprint arXiv:2306.09682},
  year={2023}
}
```