---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license: gpl-3.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
task_ids:
- multiple-choice-qa
- visual-question-answering
- multi-class-classification
tags:
- multi-modal-qa
- figure-qa
- vqa
- scientific-figure
- geometry-diagram
- chart
- chemistry
dataset_info:
features:
- name: image_path
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: prompt_reasoning
dtype: string
- name: prompt_no_reasoning
dtype: string
- name: image_category
dtype: string
- name: task_category
dtype: string
- name: question_type
dtype: string
- name: response_options
sequence: string
- name: source
dtype: string
- name: id
dtype: string
- name: decoded_image
dtype: image
splits:
- name: geometry__triangle
num_bytes: 242889.0
num_examples: 50
- name: geometry__quadrilateral
num_bytes: 210787.0
num_examples: 50
- name: geometry__length
num_bytes: 271748.0
num_examples: 50
- name: geometry__angle
num_bytes: 255692.0
num_examples: 50
- name: geometry__area
num_bytes: 255062.0
num_examples: 50
- name: geometry__diameter_radius
num_bytes: 269208.0
num_examples: 50
- name: chemistry__shape_single
num_bytes: 1198593.0
num_examples: 50
- name: chemistry__shape_multi
num_bytes: 1855862.0
num_examples: 50
- name: charts__extraction
num_bytes: 3735234.0
num_examples: 50
- name: charts__intersection
num_bytes: 2896121.0
num_examples: 50
download_size: 8276769
dataset_size: 11191196.0
configs:
- config_name: default
data_files:
- split: geometry__triangle
path: data/geometry__triangle-*
- split: geometry__quadrilateral
path: data/geometry__quadrilateral-*
- split: geometry__length
path: data/geometry__length-*
- split: geometry__angle
path: data/geometry__angle-*
- split: geometry__area
path: data/geometry__area-*
- split: geometry__diameter_radius
path: data/geometry__diameter_radius-*
- split: chemistry__shape_single
path: data/chemistry__shape_single-*
- split: chemistry__shape_multi
path: data/chemistry__shape_multi-*
- split: charts__extraction
path: data/charts__extraction-*
- split: charts__intersection
path: data/charts__intersection-*
---
# VisOnlyQA
This repository contains the code and data for the paper "[VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information](https://arxiv.org/abs/2412.00947)".
VisOnlyQA is designed to evaluate the visual perception capability of large vision language models (LVLMs) on geometric information in scientific figures. The evaluation set includes 1,200 multiple-choice questions across 12 visual perception tasks on 4 categories of scientific figures. We also provide a training dataset consisting of 70k instances.
* Datasets:
* VisOnlyQA is available at [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) 🔥🔥🔥
* VisOnlyQA in VLMEvalKit is different from the original one. Refer to [this section](#vlmevalkit) for details.
* Hugging Face
* Eval-Real: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real)
* Eval-Synthetic: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic)
* Train: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train)
* Code: [https://github.com/psunlpgroup/VisOnlyQA](https://github.com/psunlpgroup/VisOnlyQA)
<p align="center">
<img src="readme_figures/accuracy_radar_chart.png" width="500">
</p>
```bibtex
@article{kamoi2024visonlyqa,
  title={VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information},
  author={Ryo Kamoi and Yusen Zhang and Sarkar Snigdha Sarathi Das and Ranran Haoran Zhang and Rui Zhang},
  journal={arXiv preprint arXiv:2412.00947},
  year={2024}
}
```
## Dataset
VisOnlyQA is provided in two formats: VLMEvalKit and Hugging Face Dataset. You can use either of them to evaluate your models and report the results in your papers. However, please explicitly mention which version of the dataset you used, because the two versions are different.
### Examples
<p align="center">
<img src="readme_figures/examples.png" width="800">
</p>
### VLMEvalKit
[VLMEvalKit](https://github.com/open-compass/VLMEvalKit) provides one-command evaluation. However, VLMEvalKit is not designed to reproduce the results in the paper. You are welcome to use it to report results on VisOnlyQA in your papers, but please explicitly mention that you used VLMEvalKit.
The major differences are:
* VisOnlyQA on VLMEvalKit does not include the `chemistry__shape_multi` split.
* VLMEvalKit uses different prompts and postprocessing.
Refer to [this document](https://github.com/open-compass/VLMEvalKit/blob/main/docs/en/Quickstart.md) for the installation and setup of VLMEvalKit. After setting up the environment, you can evaluate any supported model on VisOnlyQA with the following command (this example is for InternVL2-26B).
```bash
python run.py --data VisOnlyQA-VLMEvalKit --model InternVL2-26B
```
### Hugging Face Dataset
The original VisOnlyQA dataset is provided as a Hugging Face Dataset. If you want to reproduce the results in our paper, please use this version together with the code in the GitHub repository.
* Eval-Real: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Real)
* 500 instances for questions on figures in existing datasets (e.g., MathVista, MMMU, and CharXiv)
* Eval-Synthetic: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Eval_Synthetic)
* 700 instances for questions on synthetic figures
* Train: [https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train](https://huggingface.co/datasets/ryokamoi/VisOnlyQA_Train)
* 70,000 instances for training (synthetic figures)
The [dataset](https://github.com/psunlpgroup/VisOnlyQA/tree/main/dataset) folder of the GitHub repository includes identical datasets, except for the training data.
```python
from datasets import load_dataset

real_eval = load_dataset("ryokamoi/VisOnlyQA_Eval_Real")
synthetic_eval = load_dataset("ryokamoi/VisOnlyQA_Eval_Synthetic")

# Splits
print(real_eval.keys())
# dict_keys(['geometry__triangle', 'geometry__quadrilateral', 'geometry__length', 'geometry__angle', 'geometry__area', 'geometry__diameter_radius', 'chemistry__shape_single', 'chemistry__shape_multi', 'charts__extraction', 'charts__intersection'])
print(synthetic_eval.keys())
# dict_keys(['syntheticgeometry__triangle', 'syntheticgeometry__quadrilateral', 'syntheticgeometry__length', 'syntheticgeometry__angle', 'syntheticgeometry__area', '3d__size', '3d__angle'])

# Prompt
print(real_eval['geometry__triangle'][0]['prompt_no_reasoning'])
# There is no triangle ADP in the figure. True or False?
# A triangle is a polygon with three edges and three vertices, which are explicitly connected in the figure.
# Your response should only include the final answer (True, False). Do not include any reasoning or explanation in your response.

# Image
print(real_eval['geometry__triangle'][0]['decoded_image'])
# <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=103x165 at 0x7FB4F83236A0>

# Answer
print(real_eval['geometry__triangle'][0]['answer'])
# False
```
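The `decoded_image` field is a standard `PIL.Image` object, so the usual PIL methods apply. A minimal sketch for inspecting an instance (the output filename is arbitrary):
```python
# Save the first image of a split to disk for manual inspection.
example = real_eval['geometry__triangle'][0]
example['decoded_image'].save('triangle_example.jpg')

# Each instance also carries its multiple-choice options, which is
# useful when parsing model outputs.
print(example['response_options'])  # ['True', 'False']
```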
### Data Format
Each instance of VisOnlyQA dataset has the following attributes:
#### Features
* `decoded_image`: [PIL.Image] Input image
* `question`: [string] Question (without instruction)
* `prompt_reasoning`: [string] Prompt with instruction to use chain-of-thought
* `prompt_no_reasoning`: [string] Prompt with instruction **not** to use chain-of-thought
* `answer`: [string] Correct answer (e.g., `True`, `a`)
#### Metadata
* `image_path`: [string] Path to the image file
* `image_category`: [string] Category of the image (e.g., `geometry`, `chemistry`)
* `question_type`: [string] `single_answer` or `multiple_answers`
* `task_category`: [string] Category of the task (e.g., `triangle`)
* `response_options`: [List[string]] Multiple choice options (e.g., `['True', 'False']`, `['a', 'b', 'c', 'd', 'e']`)
* `source`: [string] Source dataset
* `id`: [string] Unique ID
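Putting these features together, a minimal zero-shot evaluation loop could look like the sketch below. `query_model` is a hypothetical stand-in for your LVLM interface, and matching by exact string comparison is a simplification of the postprocessing in the official evaluation code.
```python
from datasets import load_dataset

def query_model(image, prompt: str) -> str:
    """Hypothetical stand-in: send the image and prompt to your LVLM
    and return its raw text response."""
    raise NotImplementedError

real_eval = load_dataset("ryokamoi/VisOnlyQA_Eval_Real")

for split_name, split in real_eval.items():
    correct = 0
    for instance in split:
        response = query_model(instance['decoded_image'],
                               instance['prompt_no_reasoning'])
        # Simplification: exact match against the gold answer; the official
        # code applies additional answer postprocessing.
        if response.strip() == instance['answer']:
            correct += 1
    print(f"{split_name}: accuracy = {correct / len(split):.3f}")
```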
### Statistics
<p align="center">
<img src="readme_figures/stats.png" width="800">
</p>
## License
Please refer to [LICENSE.md](./LICENSE.md).
## Contact
If you have any questions, feel free to open an issue or reach out directly to [Ryo Kamoi](https://ryokamoi.github.io/) (ryokamoi@psu.edu).