๐Ÿ–ผ๏ธ MULTI-Benchmark: Multimodal Understanding Leaderboard with Text and Images


๐ŸŒ Website | ๐Ÿ“ƒ Paper | ๐Ÿค— Dataset | ๐Ÿ“ฎ Submit

简体中文 | English

🔥 News

  • [2024.3.4] We have released the evaluation page.
  • [2024.2.19] We have released the HuggingFace Page.
  • [2024.2.6] We have published our paper on arXiv.
  • [2023.12.7] We have released the code of our benchmark evaluation.
  • [2023.12.5] We have released the GitHub Page.

📖 Overview

Rapid progress in multimodal large language models (MLLMs) highlights the need to introduce challenging yet realistic benchmarks to the academic community, whereas existing benchmarks primarily focus on understanding simple natural images and short context. In this paper, we present MULTI, a cutting-edge benchmark for evaluating MLLMs on understanding complex tables and images and reasoning with long context. MULTI provides multimodal inputs and requires responses that are either precise or open-ended, reflecting real-life examination styles. MULTI includes over 18,000 questions and challenges MLLMs with a variety of tasks, ranging from formula derivation to image detail analysis and cross-modality reasoning. We also introduce MULTI-Elite, a carefully selected hard subset of 500 questions, and MULTI-Extend, with more than 4,500 external knowledge context pieces. Our evaluation indicates significant potential for MLLM advancement, with GPT-4V achieving a 63.7% accuracy rate on MULTI, in contrast to other MLLMs scoring between 28.5% and 55.3%. MULTI serves not only as a robust evaluation platform but also paves the way for the development of expert-level AI.

๐Ÿ† Leaderboard

Modality Model Version Overall MULTI-Elite
🖼️ GPT-4V gpt-4-vision-preview 63.7 14.0
🖼️ Yi-VL Yi-34B-Chat 55.3 26.2
🖼️ Gemini Vision gemini-pro-vision 53.7 12.4
📃 Gemini gemini-pro 52.2 10.5
📃 GPT-4 gpt-4-1106-preview 50.2 5.8
📃 DFM-2.0 dfm-2.0-70b-preview 49.7 18.0
🖼️ InternVL InternVL-Chat-Chinese-V1.1 44.9 20.7
🖼️ Qwen-VL Qwen-VL-Chat 39.0 10.5
📃 ChatGPT gpt-3.5-turbo-1106 35.9 4.7
🖼️ VisCPM VisCPM-Chat 33.4 13.0
📃 MOSS moss-moon-003-sft 32.6 13.1
🖼️ VisualGLM visualglm-6b 31.1 12.8
🖼️ Chinese-LLaVA Chinese-LLaVA-Cllama2 28.5 12.3

โฌ Download

You can simply download the data using the following commands:

cd eval
python download_data.py

The structure of ./data should be something like:

./data
├── images                                       # folder containing images
├── problem_v1.2.2_20240212_release.json         # MULTI
├── knowledge_v1.2.2_20240212_release.json       # MULTI-Extend
├── hard_list_v1.2.1_20240206.json               # MULTI-Elite
└── captions_v1.2.0_20231217.csv                 # image captions generated by BLIP-6.7b
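
As a quick sanity check after downloading, you can inspect the released files with standard JSON tooling. The snippet below is a minimal sketch that only assumes the file names shown above; it makes no assumption about the internal question schema:

import json

# Load the full MULTI question set, the MULTI-Elite id list,
# and the MULTI-Extend knowledge pieces (all plain JSON files).
with open("./data/problem_v1.2.2_20240212_release.json", encoding="utf-8") as f:
    problems = json.load(f)
with open("./data/hard_list_v1.2.1_20240206.json", encoding="utf-8") as f:
    elite = json.load(f)
with open("./data/knowledge_v1.2.2_20240212_release.json", encoding="utf-8") as f:
    knowledge = json.load(f)

# len() works whether the top-level object is a list or a dict.
print(len(problems), "questions |", len(elite), "MULTI-Elite ids |", len(knowledge), "knowledge pieces")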

๐Ÿ“ How to Evaluate

We provide a unified evaluation framework in eval. Each file in eval/models contains an evaluator for one specific (M)LLM and implements a generate_answer method that takes a question as input and returns the model's answer.

cd eval
python eval.py -h # to list all supported arguments
python eval.py -l # to list all supported models

Environment Preparation Before Usage

Each evaluator requires its own environment setup, and a single universal environment may not work for all of them. Just follow each model's official installation guide: if the model itself runs well, it should also work within our framework.

You only need to install two additional packages to run the evaluation code:

pip install tiktoken tqdm

If you only want to generate data for a specific setting (using the --debug argument), the line above is all you need.

Running Evaluation

For a quick start, see these examples:

Test the GPT-4V model on the whole MULTI benchmark with multimodal input, using MULTI-Extend as external knowledge:

python eval.py \
  --problem_file ../data/problem_v1.2.2_20240212_release.json \
  --knowledge_file ../data/knowledge_v1.2.2_20240212_release.json \
  --questions_type 0,1,2,3 \
  --image_type 0,1,2 \
  --input_type 2 \
  --model gpt-4v \
  --model_version gpt-4-vision-preview \
  --api_key sk-************************************************

Test the Qwen-VL model on MULTI-Elite with image-caption input, skipping all questions that do not contain images, evaluating only multiple-choice questions, and setting the CUDA device automatically:

python eval.py \
  --problem_file ../data/problem_v1.2.2_20240212_release.json \
  --subset ../data/hard_list_v1.2.1_20240206.json \
  --caption_file ../data/captions_v1.2.0_20231217.csv \
  --questions_type 0,1 \
  --image_type 1,2 \
  --input_type 1 \
  --model qwen-vl \
  --model_dir ../models/Qwen-VL-Chat

The evaluation script will create a folder named results under the root directory, and the results will be saved in ../results/EXPERIMENT_NAME. During the evaluation, the script saves checkpoints in ../results/EXPERIMENT_NAME/checkpoints; you can delete them once the evaluation is done. If the evaluation is interrupted, you can resume from the last checkpoint:

python eval.py \
  --checkpoint_dir ../results/EXPERIMENT_NAME

Most of the arguments are saved in ../results/EXPERIMENT_NAME/args.json, so you can resume the evaluation without specifying all of them again. Please note that --api_key is not saved in args.json for security reasons, so you need to specify it again:

python eval.py \
  --checkpoint_dir ../results/EXPERIMENT_NAME \
  --api_key sk-************************************************

For more details on the arguments, run python eval.py -h and refer to args.py and eval.py.

Add Support for Your Models

It is recommended to read the code of the existing evaluators in eval/models before writing your own.

Create a class YourModelEvaluator and implement generate_answer(self, question: dict) to match the interface expected by eval.py and eval.sh, which should greatly ease the coding process; a sketch is given below.
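
For illustration, here is a minimal sketch of such an evaluator. The class and method names follow the description above, but the constructor signature, the question fields, and the inference call are assumptions, so adapt them to the actual conventions used in eval/models:

# eval/models/your_model.py -- illustrative sketch, not the official template
class YourModelEvaluator:
    def __init__(self, model_dir: str):
        # Load your model/tokenizer here; the argument name is an assumption.
        self.model = load_your_model(model_dir)  # hypothetical helper

    def generate_answer(self, question: dict) -> str:
        # The field names below are placeholders; check how eval.py builds the question dict.
        prompt = question.get("prompted_content", "")
        images = question.get("image_list", [])
        return self.model.chat(prompt, images)  # hypothetical inference call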

Do not forget to register your model in args.py so it can be selected from the command line.

You can run model_tester.py in the eval folder to check the correctness of your implementation. Various problems, including implementation errors, small bugs in the code, and even wrong environment settings, may cause the evaluation to fail. The examples provided in the file cover most kinds of cases present in our benchmark. Feel free to modify its code while debugging your own 😊

python model_tester.py <args> # args are similar to the default settings above

Create Captions and OCR Data for Images

Generate captions or OCR data for the images and save them in a CSV file with the format below:

../data/images/czls/502_1.png,a cartoon drawing of a man standing in front of a large block
../data/images/czls/525_1.png,a chinese newspaper with the headline, china's new year
...

We provide two example scripts to generate captions (image_caption.py) and OCR data (image_ocr.py) for images.
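
If you want to roll your own, the sketch below shows one way to produce such a CSV with the Hugging Face transformers image-to-text pipeline. The checkpoint id and output path are assumptions, and the provided image_caption.py may differ; note also that csv.writer quotes captions that contain commas:

import csv
from pathlib import Path

from transformers import pipeline

# Any image-to-text checkpoint can be plugged in here; this id is an assumption.
captioner = pipeline("image-to-text", model="Salesforce/blip2-opt-6.7b")

with open("../data/captions_custom.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for image_path in sorted(Path("../data/images").rglob("*")):
        if image_path.suffix.lower() not in {".png", ".jpg", ".jpeg"}:
            continue
        caption = captioner(str(image_path))[0]["generated_text"].strip()
        writer.writerow([str(image_path), caption])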

📮 How to Submit

You need to first prepare a UTF-8 encoded JSON file with the following format:

{
    "czsx_0_0": {
        "question_id": "czsx_0_0",
        "question_image_number": 1,
        "image_list": [...],            # optional
        "input_message": ...,           # optional
        "prediction": "C"
    },
    ...
}

If you evaluate the model with our official code, you can simply zip the prediction file prediction.json and the configuration file args.json from the experiment results folder ./results/EXPERIMENT_NAME into a .zip archive.
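
If you prefer to script the packaging step, a minimal sketch (the archive name is arbitrary) could look like this:

import zipfile
from pathlib import Path

exp_dir = Path("./results/EXPERIMENT_NAME")  # replace with your experiment folder
with zipfile.ZipFile(exp_dir / "submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for name in ("prediction.json", "args.json"):
        zf.write(exp_dir / name, arcname=name)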

Then, you can submit your result to our evaluation page.

You are also welcome to open a pull request and contribute your code to our evaluation framework. We will be very grateful for your contribution!

[Notice] Thank you for your interest in the MULTI dataset! If you want to add your model to our leaderboard, please fill in this questionnaire. Your information will be kept strictly confidential, so please feel free to fill it out. 🤗

📑 Citation

If you find our work useful, please cite us!

@misc{zhu2024multi,
      title={{MULTI}: Multimodal Understanding Leaderboard with Text and Images}, 
      author={Zichen Zhu and Yang Xu and Lu Chen and Jingkai Yang and Yichuan Ma and Yiming Sun and Hailin Wen and Jiaqi Liu and Jinyu Cai and Yingzi Ma and Situo Zhang and Zihan Zhao and Liangtai Sun and Kai Yu},
      year={2024},
      eprint={2402.03173},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

📧 Contact Us

If you have any questions, please feel free to contact us via email at JamesZhutheThird@sjtu.edu.cn and xuyang0112@sjtu.edu.cn.
