Update README.md
README.md CHANGED
@@ -78,14 +78,6 @@ dataset_info:
   - name: ground_truth
     dtype: string
 configs:
-- config_name: object_detection_single
-  data_files:
-  - split: val
-    path: single/object_detection_val.parquet
-- config_name: object_detection_pairs
-  data_files:
-  - split: val
-    path: pairs/object_detection_val.parquet
 - config_name: object_recognition_single
   data_files:
   - split: val
@@ -94,6 +86,14 @@ configs:
   data_files:
   - split: val
     path: pairs/recognition_val.parquet
+- config_name: visual_prompting_single
+  data_files:
+  - split: val
+    path: single/visual_prompting_val.parquet
+- config_name: visual_prompting_pairs
+  data_files:
+  - split: val
+    path: pairs/visual_prompting_val.parquet
 - config_name: spatial_reasoning_lrtb_single
   data_files:
   - split: val
@@ -102,14 +102,14 @@ configs:
   data_files:
   - split: val
     path: pairs/spatial_reasoning_val.parquet
-- config_name:
+- config_name: object_detection_single
   data_files:
   - split: val
-    path: single/
+    path: single/object_detection_val.parquet
-- config_name:
+- config_name: object_detection_pairs
   data_files:
   - split: val
-    path: pairs/
+    path: pairs/object_detection_val.parquet
 ---
 
 A key question for understanding multimodal performance is analyzing the ability for a model to have basic
@@ -135,29 +135,30 @@ sets of object classes (either 20 single objects or 20 pairs of objects), with f
 backgrounds classes, and we sample 4 instances of object and background. This results in 1280 images per
 condition and sub-task.
 
-__Object
+__Object Recognition__
 
 Answer type: Open-ended
 
-Example for "single"
+Example for "single"
 
-{"id": "0", "image": "val/book/left/burial_chamber/0000083_0000010.jpg", "prompt": "
+{"id": "0", "image": "val/book/left/burial_chamber/0000083_0000010.jpg", "prompt": "What objects are in this image?", "ground_truth": "book"}
 
 Example for "pairs":
 
-{"id": "0", "image": "val/keyboard_surfboard/left/auto_showroom/0000023_0000044_0000030.jpg", "prompt": "
+{"id": "0", "image": "val/keyboard_surfboard/left/auto_showroom/0000023_0000044_0000030.jpg", "prompt": "What objects are in this image?", "ground_truth": "['keyboard', 'surfboard']"}
 
-
+__Visual Prompting__
 
-
+Answer type: Open-ended
 
 Example for "single"
 
-{"id": "0", "image": "val/book/left/burial_chamber/0000083_0000010.jpg", "prompt": "What
+{"id": "0", "image": "val/book/left/burial_chamber/0000083_0000010.jpg", "prompt": "What object is in the red box in this image?", "ground_truth": "book"}
 
 Example for "pairs":
 
-{"id": "0", "image": "val/keyboard_surfboard/left/auto_showroom/0000023_0000044_0000030.jpg", "prompt": "What objects are in this image?", "ground_truth": "['keyboard', 'surfboard']"}
+{"id": "0", "image": "val/keyboard_surfboard/left/auto_showroom/0000023_0000044_0000030.jpg", "prompt": "What objects are in the red and yellow box in this image?", "ground_truth": "['keyboard', 'surfboard']"}
+
 
 __Spatial Reasoning__
 
@@ -178,14 +179,14 @@ __Spatial Reasoning__
 "single": (left, right, top, bottom)
 "pairs": (left, right, above, below)
 
-
+__Object Detection__
 
-Answer type: Open-ended
+Answer type: Open-ended
 
-Example for "single"
+Example for "single":
 
-{"id": "0", "image": "val/book/left/burial_chamber/0000083_0000010.jpg", "prompt": "
+{"id": "0", "image": "val/book/left/burial_chamber/0000083_0000010.jpg", "prompt": "You are an object detection model that aims to detect all the objects in the image.\n\nDefinition of Bounding Box Coordinates:\n\nThe bounding box coordinates (a, b, c, d) represent the normalized positions of the object within the image:\n\na: The x-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's left boundary. The a ranges from 0.00 to 1.00 with precision of 0.01.\nb: The y-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's top boundary. The b ranges from 0.00 to 1.00 with precision of 0.01.\nc: The x-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's right boundary. The c ranges from 0.00 to 1.00 with precision of 0.01.\nd: The y-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's bottom boundary. The d ranges from 0.00 to 1.00 with precision of 0.01.\n\nThe top-left of the image has coordinates (0.00, 0.00). The bottom-right of the image has coordinates (1.00, 1.00).\n\nInstructions:\n1. Specify any particular regions of interest within the image that should be prioritized during object detection.\n2. For all the specified regions that contain the objects, generate the object's category type, bounding box coordinates, and your confidence for the prediction. The bounding box coordinates (a, b, c, d) should be as precise as possible. Do not only output rough coordinates such as (0.1, 0.2, 0.3, 0.4).\n3. If there are more than one object of the same category, output all of them.\n4. Please ensure that the bounding box coordinates are not examples. They should really reflect the position of the objects in the image.\n5.\nReport your results in this output format:\n(a, b, c, d) - category for object 1 - confidence\n(a, b, c, d) - category for object 2 - confidence\n...\n(a, b, c, d) - category for object n - confidence."}
 
 Example for "pairs":
 
-{"id": "0", "image": "val/keyboard_surfboard/left/auto_showroom/0000023_0000044_0000030.jpg", "prompt": "
+{"id": "0", "image": "val/keyboard_surfboard/left/auto_showroom/0000023_0000044_0000030.jpg", "prompt": "You are an object detection model that aims to detect all the objects in the image.\n\nDefinition of Bounding Box Coordinates:\n\nThe bounding box coordinates (a, b, c, d) represent the normalized positions of the object within the image:\n\na: The x-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's left boundary. The a ranges from 0.00 to 1.00 with precision of 0.01.\nb: The y-coordinate of the top-left corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's top boundary. The b ranges from 0.00 to 1.00 with precision of 0.01.\nc: The x-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image width. It indicates the position from the left side of the image to the object's right boundary. The c ranges from 0.00 to 1.00 with precision of 0.01.\nd: The y-coordinate of the bottom-right corner of the bounding box, expressed as a percentage of the image height. It indicates the position from the top of the image to the object's bottom boundary. The d ranges from 0.00 to 1.00 with precision of 0.01.\n\nThe top-left of the image has coordinates (0.00, 0.00). The bottom-right of the image has coordinates (1.00, 1.00).\n\nInstructions:\n1. Specify any particular regions of interest within the image that should be prioritized during object detection.\n2. For all the specified regions that contain the objects, generate the object's category type, bounding box coordinates, and your confidence for the prediction. The bounding box coordinates (a, b, c, d) should be as precise as possible. Do not only output rough coordinates such as (0.1, 0.2, 0.3, 0.4).\n3. If there are more than one object of the same category, output all of them.\n4. Please ensure that the bounding box coordinates are not examples. They should really reflect the position of the objects in the image.\n5.\nReport your results in this output format:\n(a, b, c, d) - category for object 1 - confidence\n(a, b, c, d) - category for object 2 - confidence\n...\n(a, b, c, d) - category for object n - confidence."}
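For reference, each `config_name` in the YAML front matter above maps to a parquet file that can be loaded by name with the `datasets` library. A minimal sketch, assuming a placeholder repository ID (`<user>/<dataset>` is not the real ID and must be replaced):

```python
# Minimal sketch of loading one of the configs listed in the YAML front matter.
# "<user>/<dataset>" is a placeholder, not the actual repository ID.
from datasets import load_dataset

ds = load_dataset("<user>/<dataset>", "visual_prompting_single", split="val")

example = ds[0]
print(example["prompt"])        # e.g. "What object is in the red box in this image?"
print(example["ground_truth"])  # e.g. "book"
```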
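Note that `ground_truth` in the examples above is a plain label for the "single" configs ("book") and a stringified Python list for the "pairs" configs ("['keyboard', 'surfboard']"). One way to normalize both forms is sketched below; the helper is illustrative, not part of the dataset:

```python
import ast

def parse_ground_truth(raw: str) -> list[str]:
    """Return the ground-truth labels as a list, for both "single" and "pairs" configs."""
    raw = raw.strip()
    if raw.startswith("["):
        # "pairs" configs store a stringified list, e.g. "['keyboard', 'surfboard']".
        return [str(label) for label in ast.literal_eval(raw)]
    # "single" configs store a single label, e.g. "book".
    return [raw]

assert parse_ground_truth("book") == ["book"]
assert parse_ground_truth("['keyboard', 'surfboard']") == ["keyboard", "surfboard"]
```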
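The object detection prompt above asks the model to reply with one line per object in the form `(a, b, c, d) - category - confidence`, with coordinates normalized to [0, 1]. A sketch of parsing such a reply follows; the regex and helper are illustrative and assume the model follows the requested format:

```python
import re

# One detection per line: "(a, b, c, d) - category - confidence".
DETECTION_LINE = re.compile(
    r"\(\s*([\d.]+)\s*,\s*([\d.]+)\s*,\s*([\d.]+)\s*,\s*([\d.]+)\s*\)"  # (a, b, c, d)
    r"\s*-\s*(.+?)\s*-\s*([\d.]+)\s*$"                                   # category - confidence
)

def parse_detections(reply: str) -> list[dict]:
    detections = []
    for line in reply.splitlines():
        match = DETECTION_LINE.search(line)
        if match is None:
            continue  # skip lines that do not follow the requested format
        a, b, c, d = (float(match.group(i)) for i in range(1, 5))
        detections.append({
            "box": (a, b, c, d),        # normalized (x1, y1, x2, y2)
            "category": match.group(5),
            "confidence": float(match.group(6)),
        })
    return detections

print(parse_detections("(0.12, 0.34, 0.56, 0.78) - book - 0.91"))
```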