---
language:
- en
license: gpl-3.0
tags:
- vision
- image segmentation
- instance segmentation
- object detection
- synthetic
- sim-to-real
annotations_creators:
- machine-generated
pretty_name: OE Dataset
size_categories:
- 1K<n<10K
task_categories:
- object-detection
- image-segmentation
- robotics
task_ids:
- instance-segmentation
- semantic-segmentation
---

# The OE Dataset!

![OE demo](https://huggingface.co/datasets/ABC-iRobotics/oe_dataset/resolve/main/OE_demo.gif "OE demo")

A dataset of synthetic and real images annotated with instance segmentation masks, intended for testing sim-to-real model performance in robotic manipulation.

### Dataset Summary

The OE Dataset is a collection of synthetic and real images of 3D-printed OE logos. Each image is annotated with instance segmentation masks. Synthetic samples are explicitly marked according to their creation method (photorealistic rendering or domain randomization) to facilitate sim-to-real performance tests on the different synthetic collections.

### Supported Tasks and Leaderboards

The dataset supports tasks such as semantic segmentation, instance segmentation, object detection, image classification, and testing sim-to-real transfer.

## Dataset Structure

### Data Instances

The instances of the dataset are 1920x1080x3 images in PNG format. The annotations are 1920x1080x4 (RGBA) PNG images representing the instance segmentation masks, where each instance is associated with a unique color.
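
Since every instance is encoded as a unique color in the mask, per-instance binary masks can be recovered by grouping mask pixels by color. Below is a minimal sketch of this idea using NumPy and Pillow; the file path is a placeholder, and treating fully transparent pixels as background is our assumption rather than part of the dataset specification.

```python
import numpy as np
from PIL import Image

# Minimal sketch: split a color-coded RGBA mask into per-instance binary masks.
# "example_mask.png" is a placeholder path, not a file shipped with the dataset.
mask = np.array(Image.open("example_mask.png").convert("RGBA"))  # (1080, 1920, 4)

# Assumption: fully transparent pixels are background.
foreground = mask[..., 3] > 0

# Each remaining unique RGBA value is assumed to identify one instance.
colors = np.unique(mask[foreground].reshape(-1, 4), axis=0)
instance_masks = {tuple(color): np.all(mask == color, axis=-1) for color in colors}

print(f"Found {len(instance_masks)} instances")
```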

### Data Fields

The data fields are:

1) `image`: 1920x1080x3 PNG image
2) `mask`: 1920x1080x4 PNG image
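
If the dataset is loaded with the Hugging Face `datasets` library, these fields can be accessed per sample. The snippet below is a hedged sketch: the repository id is inferred from the demo image URL above, a configuration name may be required depending on how the collections are organized, and whether `image` and `mask` are returned as PIL images depends on the dataset's loading setup.

```python
from datasets import load_dataset
import numpy as np

# Minimal sketch; the repository id is inferred from the demo image URL above.
# A configuration name may need to be passed as a second argument.
ds = load_dataset("ABC-iRobotics/oe_dataset", split="train")

sample = ds[0]
image = np.array(sample["image"])  # expected shape: (1080, 1920, 3)
mask = np.array(sample["mask"])    # expected shape: (1080, 1920, 4)
print(image.shape, mask.shape)
```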

### Data Splits

The dataset contains training and validation splits for all image collections (real images, photorealistic synthetic images, domain randomized synthetic images) to facilitate cross-domain testing.
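
A typical cross-domain (sim-to-real) experiment trains on one of the synthetic collections and evaluates on the real validation split. The sketch below illustrates that pattern; the configuration names used here are hypothetical placeholders, so check the dataset page for the actual configuration names before use.

```python
from datasets import load_dataset

# Hypothetical configuration names -- replace them with the actual ones
# listed on the dataset page.
SYNTHETIC_CONFIG = "photorealistic"
REAL_CONFIG = "real"

train_ds = load_dataset("ABC-iRobotics/oe_dataset", SYNTHETIC_CONFIG, split="train")
eval_ds = load_dataset("ABC-iRobotics/oe_dataset", REAL_CONFIG, split="validation")

print(len(train_ds), len(eval_ds))
```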

## Dataset Creation

### Curation Rationale

The dataset was created to provide a testbed for examining the effects of fine-tuning instance segmentation models on synthetic data (using various sim-to-real approaches).

### Source Data

The data is generated using two methods:

- Real images are recorded using a robotic setup and automatically annotated using the method proposed in [[1]](https://ieeexplore.ieee.org/abstract/document/9922852)
- Synthetic samples are generated using Blender and annotated using the [Blender Annotation Tool (BAT)](https://github.com/ABC-iRobotics/blender_annotation_tool)

### Citation Information

OE Dataset:
```bibtex
@ARTICLE{10145828,
  author={Károly, Artúr István and Tirczka, Sebestyén and Gao, Huijun and Rudas, Imre J. and Galambos, Péter},
  journal={IEEE Transactions on Cybernetics},
  title={Increasing the Robustness of Deep Learning Models for Object Segmentation: A Framework for Blending Automatically Annotated Real and Synthetic Data},
  year={2023},
  volume={},
  number={},
  pages={1-14},
  doi={10.1109/TCYB.2023.3276485}}
```

Automatically annotating real images with instance segmentation masks using a robotic arm:
```bibtex
@INPROCEEDINGS{9922852,
  author={Károly, Artúr I. and Károly, Ármin and Galambos, Péter},
  booktitle={2022 IEEE 10th Jubilee International Conference on Computational Cybernetics and Cyber-Medical Systems (ICCC)},
  title={Automatic Generation and Annotation of Object Segmentation Datasets Using Robotic Arm},
  year={2022},
  volume={},
  number={},
  pages={000063-000068},
  doi={10.1109/ICCC202255925.2022.9922852}}
```

Synthetic dataset generation and annotation method:
```bibtex
@INPROCEEDINGS{9780790,
  author={Károly, Artúr I. and Galambos, Péter},
  booktitle={2022 IEEE 20th Jubilee World Symposium on Applied Machine Intelligence and Informatics (SAMI)},
  title={Automated Dataset Generation with Blender for Deep Learning-based Object Segmentation},
  year={2022},
  volume={},
  number={},
  pages={000329-000334},
  doi={10.1109/SAMI54271.2022.9780790}}
```

Other related publications:
```bibtex
@INPROCEEDINGS{10029564,
  author={Károly, Artúr I. and Tirczka, Sebestyén and Piricz, Tamás and Galambos, Péter},
  booktitle={2022 IEEE 22nd International Symposium on Computational Intelligence and Informatics and 8th IEEE International Conference on Recent Achievements in Mechatronics, Automation, Computer Science and Robotics (CINTI-MACRo)},
  title={Robotic Manipulation of Pathological Slides Powered by Deep Learning and Classical Image Processing},
  year={2022},
  volume={},
  number={},
  pages={000387-000392},
  doi={10.1109/CINTI-MACRo57952.2022.10029564}}
```

```bibtex
@Article{app13010525,
  AUTHOR = {Károly, Artúr István and Galambos, Péter},
  TITLE = {Task-Specific Grasp Planning for Robotic Assembly by Fine-Tuning GQCNNs on Automatically Generated Synthetic Data},
  JOURNAL = {Applied Sciences},
  VOLUME = {13},
  YEAR = {2023},
  NUMBER = {1},
  ARTICLE-NUMBER = {525},
  URL = {https://www.mdpi.com/2076-3417/13/1/525},
  ISSN = {2076-3417},
  ABSTRACT = {In modern robot applications, there is often a need to manipulate previously unknown objects in an unstructured environment. The field of grasp-planning deals with the task of finding grasps for a given object that can be successfully executed with a robot. The predicted grasps can be evaluated according to certain criteria, such as analytical metrics, similarity to human-provided grasps, or the success rate of physical trials. The quality of a grasp also depends on the task which will be carried out after the grasping is completed. Current task-specific grasp planning approaches mostly use probabilistic methods, which utilize categorical task encoding. We argue that categorical task encoding may not be suitable for complex assembly tasks. This paper proposes a transfer-learning-based approach for task-specific grasp planning for robotic assembly. The proposed method is based on an automated pipeline that quickly and automatically generates a small-scale task-specific synthetic grasp dataset using Graspit! and Blender. This dataset is utilized to fine-tune pre-trained grasp quality convolutional neural networks (GQCNNs). The aim is to train GQCNNs that can predict grasps which do not result in a collision when placing the objects. Consequently, this paper focuses on the geometric feasibility of the predicted grasps and does not consider the dynamic effects. The fine-tuned GQCNNs are evaluated using the Moveit! Task Constructor motion planning framework, which enables the automated inspection of whether the motion planning for a task is feasible given a predicted grasp and, if not, which part of the task is responsible for the failure. Our results suggest that fine-tuning GQCNN models can result in superior grasp-planning performance (0.9 success rate compared to 0.65) in the context of an assembly task. Our method can be used to rapidly attain new task-specific grasp policies for flexible robotic assembly applications.},
  DOI = {10.3390/app13010525}
}
```