---
language:
  - en
license: apache-2.0
size_categories:
  - n<1K
task_categories:
  - question-answering
pretty_name: Mantis-Eval
dataset_info:
  - config_name: mantis_eval
    features:
      - name: id
        dtype: string
      - name: question_type
        dtype: string
      - name: question
        dtype: string
      - name: images
        sequence: image
      - name: options
        sequence: string
      - name: answer
        dtype: string
      - name: data_source
        dtype: string
      - name: category
        dtype: string
    splits:
      - name: test
        num_bytes: 479770102
        num_examples: 217
    download_size: 473031413
    dataset_size: 479770102
configs:
  - config_name: mantis_eval
    data_files:
      - split: test
        path: mantis_eval/test-*
---

## Overview

This is a newly curated dataset for evaluating multimodal language models' ability to reason over multiple images. More details are available at https://tiger-ai-lab.github.io/Mantis/.
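
The `mantis_eval` config and its single `test` split (see the metadata above) can be loaded with the 🤗 `datasets` library. A minimal sketch, assuming the dataset is hosted as `TIGER-Lab/Mantis-Eval` on the Hugging Face Hub:

```python
from datasets import load_dataset

# Repo ID assumed from the project page; the config name "mantis_eval"
# and the "test" split are taken from the card metadata above.
ds = load_dataset("TIGER-Lab/Mantis-Eval", "mantis_eval", split="test")

example = ds[0]
print(example["question_type"])  # question format of this problem
print(example["question"])       # question text
print(len(example["images"]))    # one or more PIL images per problem
print(example["options"])        # candidate answers (may be empty)
print(example["answer"])         # ground-truth answer
```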

## Statistics

This evaluation dataset contains 217 human-annotated, challenging multi-image reasoning problems.
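
To double-check the count and see how the problems break down by annotation field, a quick tally (a sketch, reusing `ds` from the loading example above):

```python
from collections import Counter

print(len(ds))                       # expected: 217
print(Counter(ds["question_type"]))  # problems per question type
print(Counter(ds["category"]))       # problems per category
```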

## Leaderboard

We list the current results as follows:

| Model | Size | Mantis-Eval |
| --- | --- | --- |
| GPT-4V | - | 62.67 |
| Mantis-SigLIP | 8B | 59.45 |
| Mantis-Idefics2 | 8B | 57.14 |
| Mantis-CLIP | 8B | 55.76 |
| VILA | 8B | 51.15 |
| BLIP-2 | 13B | 49.77 |
| Idefics2 | 8B | 48.85 |
| InstructBLIP | 13B | 45.62 |
| LLaVA-V1.6 | 7B | 45.62 |
| CogVLM | 17B | 45.16 |
| Qwen-VL-Chat | 7B | 39.17 |
| Emu2-Chat | 37B | 37.79 |
| VideoLLaVA | 7B | 35.04 |
| Mantis-Flamingo | 9B | 32.72 |
| LLaVA-v1.5 | 7B | 31.34 |
| Kosmos2 | 1.6B | 30.41 |
| Idefics1 | 9B | 28.11 |
| Fuyu | 8B | 27.19 |
| Otter-Image | 9B | 14.29 |
| OpenFlamingo | 9B | 12.44 |
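
The scores are accuracy-style percentages over the test split. The sketch below shows one plausible scoring scheme: plain exact match against the `answer` field. Here `predictions` is a hypothetical mapping from example `id` to a model's answer string, and the official evaluation script may normalize answers differently.

```python
def mantis_eval_accuracy(dataset, predictions):
    """Exact-match accuracy (in %) against the `answer` field.

    `predictions` maps each example's `id` to the model's answer string
    (e.g. an option letter for multiple-choice questions). This is a
    sketch, not the official evaluation protocol.
    """
    # Drop the image column so iteration does not decode every image.
    rows = dataset.remove_columns(["images"])
    correct = sum(
        predictions.get(row["id"], "").strip().lower()
        == row["answer"].strip().lower()
        for row in rows
    )
    return 100.0 * correct / len(rows)
```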

## Citation

If you use this dataset, please cite our work:

```bibtex
@article{Jiang2024MANTISIM,
  title={MANTIS: Interleaved Multi-Image Instruction Tuning},
  author={Dongfu Jiang and Xuan He and Huaye Zeng and Cong Wei and Max W.F. Ku and Qian Liu and Wenhu Chen},
  journal={arXiv preprint arXiv:2405.01483},
  year={2024},
}
```