---
language:
- en
size_categories:
- n<1K
task_categories:
- image-to-text
- visual-question-answering
dataset_info:
  features:
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: answer
    dtype: string
  - name: category
    dtype: string
  - name: reasoning
    dtype: string
  - name: id
    dtype: int64
  splits:
  - name: train
    num_bytes: 50339
    num_examples: 4
  - name: test
    num_bytes: 24579079
    num_examples: 1000
  download_size: 24495650
  dataset_size: 24629418
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: cc-by-nc-sa-4.0
---
# IllusionVQA: Optical Illusion Dataset
[Project Page](https://illusionvqa.github.io/) |
[Paper](https://arxiv.org/abs/2403.15952) |
[Github](https://github.com/csebuetnlp/IllusionVQA/)
## TL;DR
IllusionVQA is a dataset of optical illusions and hard-to-interpret scenes designed to test the comprehension and soft-localization abilities of Vision-Language Models. GPT-4V achieved 62.99% accuracy on the comprehension task and 49.7% on localization, while humans achieved 91.03% and 100%, respectively.
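Each example pairs an illusion image with a multiple-choice question. A minimal sketch of inspecting one test example with the Hugging Face `datasets` library (field names follow `dataset_info` above):
```python
from datasets import load_dataset

# Load both splits: 4 few-shot train examples and 1,000 test examples.
dataset = load_dataset("csebuetnlp/illusionVQA-Comprehension")

sample = dataset["test"][0]
print(sample["question"])   # question about the illusion
print(sample["options"])    # list of answer choices
print(sample["answer"])     # correct choice as a string
print(sample["category"])   # illusion category
sample["image"].show()      # PIL image; opens in the default image viewer
```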
## Usage
```python
from datasets import load_dataset
import base64
import os

from openai import OpenAI

os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"


def encode_image(pil_image):
    """Encode a PIL image as a base64 JPEG string."""
    temp_name = "temp.jpg"
    pil_image.save(temp_name)
    with open(temp_name, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")


def construct_mcq(options, correct_option):
    """Format the options as a lettered list and return it together with
    the letter of the correct option."""
    correct_option_letter = None
    i = "a"
    mcq = ""
    for option in options:
        if option == correct_option:
            correct_option_letter = i
        mcq += f"{i}. {option}\n"
        i = chr(ord(i) + 1)
    mcq = mcq[:-1]  # drop the trailing newline
    return mcq, correct_option_letter


def add_row(content, data, i, with_answer=False):
    """Append one question, its image, and (optionally) its answer to the
    message content list."""
    mcq, correct_option_letter = construct_mcq(data["options"], data["answer"])
    content.append({
        "type": "text",
        "text": "Image " + str(i) + ": " + data["question"] + "\n" + mcq,
    })
    content.append({
        "type": "image_url",
        "image_url": {
            "url": f"data:image/jpeg;base64,{encode_image(data['image'])}",
            "detail": "low",
        },
    })
    if with_answer:
        content.append({"type": "text", "text": "Answer {}: ".format(i) + correct_option_letter})
    else:
        content.append({"type": "text", "text": "Answer {}: ".format(i)})
    return content


dataset = load_dataset("csebuetnlp/illusionVQA-Comprehension")
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

content = [{
    "type": "text",
    "text": "You'll be given an image, an instruction and some choices. You have to select the correct one. Do not explain your reasoning. Answer with the option's letter from the given choices directly. Here are a few examples:",
}]

### Add the few-shot examples from the train split
for i, data in enumerate(dataset["train"], 1):
    content = add_row(content, data, i, with_answer=True)

content.append({"type": "text", "text": "Now you try it!"})
next_idx = i + 1

### Add one test example
test_data = dataset["test"][0]
content_t = add_row(content.copy(), test_data, next_idx, with_answer=False)

### Get the answer from GPT-4V
response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{"role": "user", "content": content_t}],
    max_tokens=5,
)
gpt4_answer = response.choices[0].message.content
print(gpt4_answer)
```
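The snippet above scores a single test example. To estimate accuracy over the full test split, the same few-shot prefix can be reused for every example. The loop below is a rough sketch, not the paper's evaluation script; matching the model's reply against the correct option letter is a simplifying assumption, and the loop issues one API call per test example.
```python
correct = 0
for data in dataset["test"]:
    content_i = add_row(content.copy(), data, next_idx, with_answer=False)
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[{"role": "user", "content": content_i}],
        max_tokens=5,
    )
    prediction = response.choices[0].message.content.strip().lower()
    _, correct_letter = construct_mcq(data["options"], data["answer"])
    # Count a hit when the reply starts with the correct option letter,
    # e.g. "a" or "a." -- a heuristic, not the official scoring.
    if prediction.startswith(correct_letter):
        correct += 1

print(f"Accuracy: {correct / len(dataset['test']):.2%}")
```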
## License
This dataset is made available for non-commercial research purposes only under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). The dataset may not be used for training models. The dataset contains images collected from the internet. While permission has been obtained from some of the images' creators, permission has not yet been received from all creators. If you believe any image in this dataset is used without proper permission and you are the copyright holder, please email <a href="mailto:sameen2080@gmail.com">Haz Sameen Shahgir</a> to request the removal of the image from the dataset.
The dataset creator makes no representations or warranties regarding the copyright status of the images in the dataset. The dataset creator shall not be held liable for any unauthorized use of copyrighted material that may be contained in the dataset.
By downloading or using this dataset, you agree to the terms and conditions specified in this license. If you do not agree with these terms, do not download or use the dataset.
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a>
### Citation
```
@article{shahgir2024illusionvqa,
title={IllusionVQA: A Challenging Optical Illusion Dataset for Vision Language Models},
author={Haz Sameen Shahgir and Khondker Salman Sayeed and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yue Dong and Rifat Shahriyar},
year={2024},
url={https://arxiv.org/abs/2403.15952},
}
```