小臣子吃大橙子 committed cd1bc51 (1 parent: f315363): update README

Files changed (2):
  1. README.md +9 -2
  2. README_zh.md +8 -6
README.md CHANGED
@@ -12,7 +12,8 @@ viewer: False
 
 ![MULTI](./docs/static/images/overview.png)
 
-🌐 [Website](https://OpenDFM.github.io/MULTI-Benchmark/) | 📃 [Paper](https://arxiv.org/abs/2402.03173/) | 🤗 [Dataset](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark) | 📮 [Submit](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html)
+🌐 [Website](https://OpenDFM.github.io/MULTI-Benchmark/) | 📃 [Paper](https://arxiv.org/abs/2402.03173/) | 🤗 [Dataset](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark) |
+🏆 [Leaderboard](https://opendfm.github.io/MULTI-Benchmark/#leaderboard) | 📮 [Submit](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html)
 
 [简体中文](./README_zh.md) | English
 
@@ -20,6 +21,8 @@ viewer: False
 
 ## 🔥 News
 
+- **[2025.1.7]** We have updated our [leaderboard](https://opendfm.github.io/MULTI-Benchmark/#leaderboard) with the latest results.
+- **[2025.1.2]** We have updated MULTI to v1.3.1.
 - **[2024.3.4]** We have released the [evaluation page](https://OpenDFM.github.io/MULTI-Benchmark/static/pages/submit.html).
 - **[2024.2.19]** We have released the [HuggingFace Page](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark/).
 - **[2024.2.6]** We have published our [paper](https://arxiv.org/abs/2402.03173/) on arXiv.
@@ -28,7 +31,11 @@ viewer: False
 
 ## 📖 Overview
 
-Rapid progress in multimodal large language models (MLLMs) highlights the need to introduce challenging yet realistic benchmarks to the academic community, while existing benchmarks primarily focus on understanding simple natural images and short context. In this paper, we present ***MULTI***, as a cutting-edge benchmark for evaluating MLLMs on understanding complex tables and images, and reasoning with long context. **MULTI** provides multimodal inputs and requires responses that are either precise or open-ended, reflecting real-life examination styles. **MULTI** includes over 18,000 questions and challenges MLLMs with a variety of tasks, ranging from formula derivation to image detail analysis and cross-modality reasoning. We also introduce ***MULTI-Elite***, a 500-question selected hard subset, and ***MULTI-Extend***, with more than 4,500 external knowledge context pieces. Our evaluation indicates significant potential for MLLM advancement, with GPT-4V achieving a **63.7%** accuracy rate on **MULTI**, in contrast to other MLLMs scoring between **28.5%** and **55.3%**. **MULTI** serves not only as a robust evaluation platform but also paves the way for the development of expert-level AI.
+The rapid development of multimodal large language models (MLLMs) raises the question of how they compare to human performance. While existing datasets often feature synthetic or overly simplistic tasks, some models have already surpassed human expert baselines. In this paper, we present **MULTI**, a Chinese multimodal dataset derived from authentic examination questions. Comprising over 18,000 carefully selected and refined questions, **MULTI** evaluates models using real-world examination standards, encompassing image-text comprehension, complex reasoning, and knowledge recall. Additionally, we introduce **MULTI-Elite**, a 500-question selected hard subset, and **MULTI-Extend**, with more than 4,500 external knowledge context pieces for testing in-context learning capabilities. **MULTI** serves not only as a robust evaluation platform but also paves the way for the development of expert-level AI.
 
 
 ## ⏬ Download
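
For orientation, a minimal sketch of fetching the benchmark data from the HuggingFace page linked above follows. It is illustrative only: it assumes the `huggingface_hub` CLI is installed, and the local directory name is an arbitrary choice rather than anything prescribed by the repository.

```shell
# Sketch: download the MULTI benchmark data from the linked HuggingFace dataset repo.
# Assumes huggingface_hub is installed; --local-dir is an arbitrary illustrative path.
pip install -U huggingface_hub
huggingface-cli download OpenDFM/MULTI-Benchmark --repo-type dataset --local-dir ./MULTI-Benchmark
```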
README_zh.md CHANGED
@@ -4,7 +4,7 @@
 
 ![MULTI](./docs/static/images/overview.png)
 
-🌐 [Website](https://OpenDFM.github.io/MULTI-Benchmark/) | 📃 [Paper](https://arxiv.org/abs/2402.03173/) | 🤗 [Dataset](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark) | 📮 [Submit](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html)
+🌐 [Website](https://OpenDFM.github.io/MULTI-Benchmark/) | 📃 [Paper](https://arxiv.org/abs/2402.03173/) | 🤗 [Dataset](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark) | 🏆 [Leaderboard](https://opendfm.github.io/MULTI-Benchmark/#leaderboard) | 📮 [Submit](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html)
 
 简体中文 | [English](./README.md)
 
@@ -12,6 +12,8 @@
 
 ## 🔥 News
 
+- **[2025.1.7]** We have updated the [leaderboard](https://opendfm.github.io/MULTI-Benchmark/#leaderboard) with the latest results.
+- **[2025.1.2]** We have updated MULTI to v1.3.1.
 - **[2024.3.4]** We have released the [evaluation page](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html).
 - **[2024.2.19]** We have released the [HuggingFace page](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark/).
 - **[2024.2.6]** We have published our [paper](https://arxiv.org/abs/2402.03173/) on arXiv.
@@ -20,7 +22,7 @@
 
 ## 📖 Introduction
 
-Against the backdrop of rapid progress in multimodal large language models (MLLMs), it has become especially important to propose challenging benchmarks that reflect realistic scenarios, while existing benchmarks mainly focus on understanding simple natural images and short text. In this paper, we introduce ***MULTI***, a cutting-edge benchmark for evaluating MLLMs on understanding complex tables and images and reasoning with long context. **MULTI** provides multimodal inputs and requires answers that are either precise or open-ended, reflecting real-life examination styles. **MULTI** includes over 18,000 questions and challenges MLLMs with a variety of tasks, from formula derivation to image detail analysis and cross-modality reasoning. We also introduce ***MULTI-Elite***, a carefully selected hard subset of 500 questions, and ***MULTI-Extend***, containing more than 4,500 external knowledge contexts. Our evaluation shows significant room for MLLM improvement, with GPT-4V reaching **63.7%** accuracy on **MULTI**, while other MLLMs score between **28.5%** and **55.3%**. **MULTI** serves not only as a robust evaluation platform but also paves the way for the development of expert-level AI.
+Against the backdrop of the rapid development of multimodal large language models (MLLMs), how they compare with human performance has become an important question. Existing datasets often involve synthetic data or overly simple tasks, while some models have already surpassed human expert baselines. This paper introduces **MULTI**, a Chinese multimodal dataset derived from authentic examination questions. **MULTI** comprises over 18,000 carefully selected and refined questions and evaluates models against real Chinese examination standards, covering image-text comprehension, complex reasoning, and knowledge recall. In addition, we introduce **MULTI-Elite**, a curated subset of 500 hard questions, and **MULTI-Extend**, a set of more than 4,500 external knowledge contexts for testing models' in-context learning capabilities. **MULTI** serves not only as a robust evaluation platform but also paves the way for the development of expert-level AI.
 
 ## ⏬ Download
 
@@ -71,7 +73,7 @@ pip install tiktoken tqdm
 
 Refer to these examples for a quick start:
 
-Test the GPT-4V model on MULTI with multimodal input, using MULTI-Extend as external knowledge:
+Test the GPT-4o model on MULTI with multimodal input, using MULTI-Extend as external knowledge:
 
 ```shell
 python eval.py \
@@ -89,9 +91,9 @@ python eval.py \
 
 ```shell
 python eval.py \
-    --problem_file ../data/problem_v1.2.2_20240212_release.json \
-    --subset ../data/hard_list_v1.2.1_20240206.json \
-    --caption_file ../data/captions_v1.2.0_20231217.csv \
+    --problem_file ../data/problem_{version}.json \
+    --subset ../data/hard_list_{version}.json \
+    --caption_file ../data/captions_{version}.csv \
     --questions_type 0,1 \
     --image_type 1,2 \
     --input_type 1 \
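
The final hunk replaces hard-coded, date-stamped data filenames with a generic `{version}` placeholder. A minimal sketch of substituting that placeholder is given below; the version string shown is hypothetical (the news entries mention only v1.3.1, not full filename suffixes), and the model-specific flags that follow the trailing backslash in the README's full example are omitted.

```shell
# Illustrative sketch only: {version} stands for the released data version string.
# The value below is hypothetical; list ../data/ to find the actual (possibly
# per-file) suffixes before running.
ls ../data/
VERSION="v1.3.1"   # hypothetical; replace with the suffix used by the released files
python eval.py \
    --problem_file ../data/problem_${VERSION}.json \
    --subset ../data/hard_list_${VERSION}.json \
    --caption_file ../data/captions_${VERSION}.csv \
    --questions_type 0,1 \
    --image_type 1,2 \
    --input_type 1 \
    # ...plus the model-specific flags shown in the README's full example
```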