---
dataset_info:
  features:
  - name: question_id
    dtype: string
  - name: question
    dtype: string
  - name: choices
    dtype: string
  - name: answer
    dtype: string
  - name: query_image
    dtype: image
  - name: choice_image_0
    dtype: image
  - name: choice_image_1
    dtype: image
  - name: ques_type
    dtype: string
  - name: label
    dtype: string
  - name: grade
    dtype: string
  - name: skills
    dtype: string
  splits:
  - name: val
    num_bytes: 329185883.464
    num_examples: 21488
  - name: test
    num_bytes: 333201645.625
    num_examples: 21489
  download_size: 667286379
  dataset_size: 662387529.089
configs:
- config_name: default
  data_files:
  - split: val
    path: data/val-*
  - split: test
    path: data/test-*
---
|
|
|
|
|
<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>
|
|
|
# Large-scale Multi-modality Models Evaluation Suite
|
|
|
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
|
|
|
🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)
|
|
|
# This Dataset
|
|
|
This is a formatted version of [IconQA](https://iconqa.github.io/). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
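
For quick inspection outside the `lmms-eval` pipeline, the schema in `dataset_info` above maps directly onto records loadable with the 🤗 `datasets` library. Below is a minimal sketch; the `lmms-lab/ICON-QA` repository id is an assumption, so substitute this dataset's actual Hub path:

```python
from datasets import load_dataset

# Repository id is an assumption -- replace with this dataset's actual Hub path.
ds = load_dataset("lmms-lab/ICON-QA", split="val")

sample = ds[0]                  # one record with the fields listed in dataset_info
print(sample["question_id"], sample["ques_type"])
print(sample["question"])
print(sample["choices"])        # choices are serialized as a single string

img = sample["query_image"]     # `image` features decode to PIL.Image objects
if img is not None:             # image fields may be empty for some question types
    print(img.size)
```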
|
|
|
```
@inproceedings{lu2021iconqa,
  title     = {IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning},
  author    = {Lu, Pan and Qiu, Liang and Chen, Jiaqi and Xia, Tony and Zhao, Yizhou and Zhang, Wei and Yu, Zhou and Liang, Xiaodan and Zhu, Song-Chun},
  booktitle = {The 35th Conference on Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks},
  year      = {2021}
}
```