---
license: apache-2.0
paperswithcode_id: marvel
pretty_name: MARVEL (Multidimensional Abstraction and Reasoning through Visual Evaluation and Learning)
task_categories:
- visual-question-answering
language:
- en
size_categories:
- n<1K
---

## Dataset Details

### Dataset Description

MARVEL is a comprehensive benchmark that evaluates the abstract visual reasoning (AVR) abilities of multi-modal large language models (MLLMs) across six patterns and five task configurations, revealing significant performance gaps between humans and state-of-the-art MLLMs.

![image](./marvel_illustration.jpeg)

### Dataset Sources

- **Repository:** https://github.com/1171-jpg/MARVEL_AVR
- **Paper:** https://arxiv.org/abs/2404.13591
- **Demo:** https://marvel770.github.io/

## Uses

Evaluation of the abstract reasoning abilities of multi-modal large language models.

## Dataset Structure

The **images** directory contains all puzzle images, and the file **marvel_labels.jsonl** provides annotations and explanations for all questions (a minimal loading sketch appears at the end of this card).

### Fields

- **id**: the ID of the question
- **pattern**: the high-level pattern category of the question
- **task_configuration**: the task configuration of the question
- **avr_question**: the text of the AVR question
- **answer**: the answer to the AVR question
- **explanation**: the textual reasoning process used to answer the question
- **f_perception_question**: the fine-grained perception question
- **f_perception_answer**: the answer to the fine-grained perception question
- **f_perception_distractor**: the distractor for the fine-grained perception question
- **c_perception_question_tuple**: a list of coarse-grained perception questions
- **c_perception_answer_tuple**: a list of answers to the coarse-grained perception questions
- **file**: the path to the image of the question

## Citation

**BibTeX:**
```
@article{jiang2024marvel,
  title={MARVEL: Multidimensional Abstraction and Reasoning through Visual Evaluation and Learning},
  author={Jiang, Yifan and Zhang, Jiarui and Sun, Kexuan and Sourati, Zhivar and Ahrabian, Kian and Ma, Kaixin and Ilievski, Filip and Pujara, Jay},
  journal={arXiv preprint arXiv:2404.13591},
  year={2024}
}
```
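
## Loading Example

As a convenience, here is a minimal Python sketch for reading the annotations and locating the matching image. It assumes a local copy of this dataset and that the **file** field is a path relative to the dataset root; the `data_dir` variable and that path assumption are illustrative rather than documented behavior, so adjust them to your setup.

```python
import json
from pathlib import Path

# Hypothetical location of a local copy of this dataset; adjust as needed.
data_dir = Path(".")

# marvel_labels.jsonl holds one JSON object per line, one record per question.
records = []
with open(data_dir / "marvel_labels.jsonl", encoding="utf-8") as f:
    for line in f:
        records.append(json.loads(line))

# Inspect one record using the fields documented above.
example = records[0]
print(example["id"], example["pattern"], example["task_configuration"])
print("AVR question:", example["avr_question"])
print("Answer:", example["answer"])

# Assumption: `file` is a path relative to the dataset root (e.g. under images/).
image_path = data_dir / example["file"]
print("Image found:", image_path.exists())
```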