小臣子吃大橙子 committed
Commit cd1bc51 · 1 Parent(s): f315363
update README

Files changed: README.md (+9 -2), README_zh.md (+8 -6)
README.md CHANGED

@@ -12,7 +12,8 @@ viewer: False
 
 ![MULTI](./docs/static/images/overview.png)
 
-🌐 [Website](https://OpenDFM.github.io/MULTI-Benchmark/) | 📃 [Paper](https://arxiv.org/abs/2402.03173/) | 🤗 [Dataset](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark)
+🌐 [Website](https://OpenDFM.github.io/MULTI-Benchmark/) | 📃 [Paper](https://arxiv.org/abs/2402.03173/) | 🤗 [Dataset](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark)
+🏆 [Leaderboard](https://opendfm.github.io/MULTI-Benchmark/#leaderboard) | 📮 [Submit](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html)
 
 [简体中文](./README_zh.md) | English
 
@@ -20,6 +21,8 @@ viewer: False
 
 ## 🔥 News
 
+- **[2025.1.7]** We have updated our [leaderboard](https://opendfm.github.io/MULTI-Benchmark/#leaderboard) with the latest results.
+- **[2025.1.2]** We have updated MULTI to v1.3.1.
 - **[2024.3.4]** We have released the [evaluation page](https://OpenDFM.github.io/MULTI-Benchmark/static/pages/submit.html).
 - **[2024.2.19]** We have released the [HuggingFace Page](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark/).
 - **[2024.2.6]** We have published our [paper](https://arxiv.org/abs/2402.03173/) on arXiv.
 
@@ -28,7 +31,11 @@ viewer: False
 
 ## 📖 Overview
 
-
+The rapid development of multimodal large language models (MLLMs) raises the question of how they compare to human performance. While existing datasets often feature synthetic or overly simplistic tasks, some models have already surpassed human expert baselines. In this paper, we present **MULTI**, a Chinese multimodal dataset derived from authentic examination questions. Comprising over 18,000 carefully selected and refined questions, **MULTI** evaluates models using real-world examination standards, encompassing image-text comprehension, complex reasoning, and knowledge recall. We also introduce **MULTI-Elite**, a curated hard subset of 500 questions, and **MULTI-Extend**, with more than 4,500 external knowledge context pieces for testing in-context learning capabilities. **MULTI** serves not only as a robust evaluation platform but also paves the way for the development of expert-level AI.
 
 
 ## ⏬ Download
README_zh.md CHANGED

@@ -4,7 +4,7 @@
 
 ![MULTI](./docs/static/images/overview.png)
 
-🌐 [Website](https://OpenDFM.github.io/MULTI-Benchmark/) | 📃 [Paper](https://arxiv.org/abs/2402.03173/) | 🤗 [Dataset](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark) | 📮 [Submit](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html)
+🌐 [Website](https://OpenDFM.github.io/MULTI-Benchmark/) | 📃 [Paper](https://arxiv.org/abs/2402.03173/) | 🤗 [Dataset](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark) | 🏆 [Leaderboard](https://opendfm.github.io/MULTI-Benchmark/#leaderboard) | 📮 [Submit](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html)
 
 简体中文 | [English](./README.md)
 
@@ -12,6 +12,8 @@
 
 ## 🔥 News
 
+- **[2025.1.7]** We have updated the [leaderboard](https://opendfm.github.io/MULTI-Benchmark/#leaderboard) with the latest results.
+- **[2025.1.2]** We have updated MULTI to v1.3.1.
 - **[2024.3.4]** We have released the [evaluation page](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html).
 - **[2024.2.19]** We have released the [HuggingFace page](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark/).
 - **[2024.2.6]** We have published our [paper](https://arxiv.org/abs/2402.03173/) on arXiv.
 
@@ -20,7 +22,7 @@
 
 ## 📖 Introduction
 
-In multimodal large language models (MLLMs
+Against the backdrop of the rapid development of multimodal large language models (MLLMs), how they compare with human performance has become an important question. Existing datasets often involve synthetic data or overly simple tasks, while some models have already surpassed human expert baselines. This paper introduces **MULTI**, a Chinese multimodal dataset derived from authentic examination questions. **MULTI** contains over 18,000 carefully selected and refined questions and evaluates model performance under real Chinese examination standards, covering image-text comprehension, complex reasoning, and knowledge recall. We also introduce **MULTI-Elite**, a curated subset of 500 hard questions, and **MULTI-Extend**, a set of more than 4,500 external knowledge context pieces for testing in-context learning. **MULTI** serves not only as a robust evaluation platform but also paves the way for expert-level AI.
 
 ## ⏬ Download
 
@@ -71,7 +73,7 @@ pip install tiktoken tqdm
 
 Refer to these examples for a quick start:
 
-Test GPT- on MULTI
+Test the GPT-4o model on MULTI with multimodal input, using MULTI-Extend as external knowledge:
 
 ```shell
 python eval.py \
 
@@ -89,9 +91,9 @@ python eval.py \
 
 ```shell
 python eval.py \
-    --problem_file ../data/
-    --subset ../data/
-    --caption_file ../data/
+    --problem_file ../data/problem_{version}.json \
+    --subset ../data/hard_list_{version}.json \
+    --caption_file ../data/captions_{version}.csv \
     --questions_type 0,1 \
     --image_type 1,2 \
     --input_type 1 \
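The `{version}` placeholders in the `--problem_file`, `--subset`, and `--caption_file` flags above all follow one naming pattern per release. As a minimal sketch of how those paths fit together (the helper below is hypothetical and not part of the MULTI codebase; only the file-name patterns `problem_{version}.json`, `hard_list_{version}.json`, and `captions_{version}.csv` come from the README), the data-file paths for a given release can be assembled like this:

```python
from pathlib import Path

# Hypothetical helper: build the eval.py data-file paths for one MULTI release.
# The file-name patterns mirror the README's example flags; the function itself
# is an illustration, not part of the MULTI repository.
def data_paths(version: str, root: str = "../data") -> dict[str, str]:
    base = Path(root)
    return {
        "problem_file": str(base / f"problem_{version}.json"),
        "subset": str(base / f"hard_list_{version}.json"),
        "caption_file": str(base / f"captions_{version}.csv"),
    }

print(data_paths("v1.3.1")["problem_file"])  # ../data/problem_v1.3.1.json
```

Swapping the version string then updates all three flags consistently, which is handy when re-running the evaluation after a dataset update such as v1.3.1.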