arxiv:2406.11833

MMDU: A Multi-Turn Multi-Image Dialog Understanding Benchmark and Instruction-Tuning Dataset for LVLMs

Published on Jun 17
Submitted by myownskyW7 on Jun 18
#1 Paper of the day
Abstract

Generating natural and meaningful responses to communicate with multi-modal human inputs is a fundamental capability of Large Vision-Language Models (LVLMs). While current open-source LVLMs demonstrate promising performance in simplified scenarios such as single-turn, single-image input, they fall short in real-world conversation scenarios such as following instructions over a long context history with multiple turns and multiple images. Existing LVLM benchmarks primarily focus on single-choice questions or short-form responses, which do not adequately assess the capabilities of LVLMs in real-world human-AI interaction applications. Therefore, we introduce MMDU, a comprehensive benchmark, and MMDU-45k, a large-scale instruction-tuning dataset, designed to evaluate and improve LVLMs' abilities in multi-turn and multi-image conversations. We employ a clustering algorithm to find relevant images and textual descriptions from open-source Wikipedia and construct question-answer pairs with human annotators assisted by the GPT-4o model. MMDU has a maximum of 18k image+text tokens, 20 images, and 27 turns, which is at least 5x longer than previous benchmarks and poses challenges to current LVLMs. Our in-depth analysis of 15 representative LVLMs using MMDU reveals that open-source LVLMs lag behind their closed-source counterparts due to limited conversational instruction-tuning data. We demonstrate that fine-tuning open-source LVLMs on MMDU-45k significantly addresses this gap, producing longer and more accurate conversations and improving scores on MMDU and existing benchmarks (MMStar: +1.1%, MathVista: +1.5%, ChartQA: +1.2%). Our contributions pave the way toward bridging the gap between current LVLM models and real-world application demands. This project is available at https://github.com/Liuziyu77/MMDU.
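
As a rough illustration of the clustering step described above (not the authors' actual pipeline), the sketch below groups Wikipedia-style descriptions by embedding similarity so that related images and texts could seed a single multi-image dialogue. The encoder name, sample entries, and cluster count are illustrative assumptions.

```python
# Minimal sketch: cluster related Wikipedia-style entries by text-embedding similarity.
# The model name, sample entries, and cluster count are hypothetical; the real MMDU
# construction pipeline may use a different encoder and clustering setup.
from collections import defaultdict

from sentence_transformers import SentenceTransformer  # assumed embedding backend
from sklearn.cluster import KMeans

# Hypothetical (title, description) pairs standing in for real Wikipedia data.
entries = [
    ("Eiffel Tower", "Wrought-iron lattice tower on the Champ de Mars in Paris."),
    ("Louvre", "The world's most-visited museum, located in Paris, France."),
    ("Mount Fuji", "An active stratovolcano and the highest peak in Japan."),
    ("Tokyo Tower", "A lattice tower in Minato, Tokyo, inspired by the Eiffel Tower."),
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed; any text encoder works
embeddings = model.encode([description for _, description in entries])

# Group semantically related entries; each cluster can seed one multi-image dialogue.
labels = KMeans(n_clusters=2, n_init="auto", random_state=0).fit_predict(embeddings)

clusters = defaultdict(list)
for (title, _), label in zip(entries, labels):
    clusters[label].append(title)
print(dict(clusters))
```

Each resulting cluster bundles images and descriptions about related topics, which human annotators (with GPT-4o assistance, per the abstract) could then turn into multi-turn question-answer pairs.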

Community

Paper submitter

nice work🥳

nice work+1🥳

Will existing multi-image, multi-turn MLLMs like Mantis, VILA, etc. be included in the paper? They are all designed for multi-image use cases, and I am wondering how these models will perform on your test data.


Thank you for your suggestion. However, because works such as Mantis are concurrent with ours, we have not yet had the chance to include them in our paper. We may consider incorporating them into our leaderboard in the future. Additionally, it is important to note that MMDU is not just a multi-image benchmark; it is also a benchmark for multi-turn dialogue and long-context understanding. MMDU-45k aims not only to enhance the model's multi-image capabilities but also to improve its abilities in multi-turn dialogue and long-context comprehension.

