Update README.md

README.md CHANGED
@@ -39,4 +39,43 @@ configs:
    path: mantis_eval/test-*
---

## Overview

This is a newly curated dataset for evaluating multimodal language models' ability to reason over multiple images. More details are available at https://tiger-ai-lab.github.io/Mantis/.

### Statistics

This evaluation dataset contains more than 200 human-annotated, challenging multi-image reasoning problems.
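The YAML header above declares a standard Hugging Face layout (a `mantis_eval` config with a `test` split), so the data can be loaded with the `datasets` library. The snippet below is a minimal sketch; the repository ID `TIGER-Lab/Mantis-Eval` is an assumption based on the project page, not stated in this README.

```python
from datasets import load_dataset

# Minimal loading sketch. The repo id "TIGER-Lab/Mantis-Eval" is an assumption;
# the "mantis_eval" config and "test" split come from the YAML header above.
mantis_eval = load_dataset("TIGER-Lab/Mantis-Eval", "mantis_eval", split="test")

print(len(mantis_eval))        # number of evaluation problems
print(mantis_eval[0].keys())   # inspect the fields of one example
```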
### Leaderboard

| Models          | Size | Mantis-Eval |
|-----------------|------|-------------|
| GPT-4V          | -    | 62.67       |
| Mantis-SigLIP   | 8B   | 59.45       |
| Mantis-Idefics2 | 8B   | 57.14       |
| Mantis-CLIP     | 8B   | 55.76       |
| VILA            | 8B   | 51.15       |
| BLIP-2          | 13B  | 49.77       |
| Idefics2        | 8B   | 48.85       |
| InstructBLIP    | 13B  | 45.62       |
| LLaVA-V1.6      | 7B   | 45.62       |
| CogVLM          | 17B  | 45.16       |
| Qwen-VL-Chat    | 7B   | 39.17       |
| Emu2-Chat       | 37B  | 37.79       |
| VideoLLaVA      | 7B   | 35.04       |
| Mantis-Flamingo | 9B   | 32.72       |
| LLaVA-v1.5      | 7B   | 31.34       |
| Kosmos2         | 1.6B | 30.41       |
| Idefics1        | 9B   | 28.11       |
| Fuyu            | 8B   | 27.19       |
| Otter-Image     | 9B   | 14.29       |
| OpenFlamingo    | 9B   | 12.44       |
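The Mantis-Eval column reports accuracy-style percentages over the evaluation problems (higher is better). The sketch below only illustrates how such a percentage is computed from paired predictions and reference answers; the exact-match criterion and answer normalization are assumptions, not the official Mantis evaluation code.

```python
# Illustrative only: exact-match scoring with simple normalization is an
# assumption and does not reproduce the official Mantis evaluation protocol.
def accuracy(predictions, references):
    """Return the percentage of predictions that match their reference answer."""
    correct = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return 100.0 * correct / len(references)

print(round(accuracy(["A", "two", "B"], ["A", "two", "C"]), 2))  # -> 66.67
```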
### Citation

If you use this dataset, please cite our work:

```
@inproceedings{Jiang2024MANTISIM,
  title={MANTIS: Interleaved Multi-Image Instruction Tuning},
  author={Dongfu Jiang and Xuan He and Huaye Zeng and Cong Wei and Max W.F. Ku and Qian Liu and Wenhu Chen},
  publisher={arXiv2405.01483},
  year={2024},
}
```