Xuehai committed on
Commit 5e51e36 • 1 Parent(s): c35688a

add leaderboard

Files changed (5):
  1. README.md +15 -6
  2. app.py +296 -0
  3. index.html +0 -19
  4. requirements.txt +3 -0
  5. style.css +0 -28
README.md CHANGED
@@ -1,11 +1,20 @@
 ---
-title: MMWorld
-emoji: 🌖
+title: MMWorld Leaderboard
+emoji: 📊
 colorFrom: indigo
-colorTo: red
-sdk: static
+colorTo: pink
+sdk: gradio
+sdk_version: 4.36.1
+app_file: app.py
 pinned: false
-license: apache-2.0
+license: mit
 ---
 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+# MMWorld Leaderboard
+
+## Space Description
+
+- **Repository:** [MMWorld](https://github.com/eric-ai-lab/MMWorld)
+- **Paper:** [2406.08407](https://arxiv.org/abs/2406.08407)
+- **Point of Contact:** [Vchitect](mailto:vchitect@pjlab.org.cn)
+
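The frontmatter block in the new README is what Hugging Face Spaces reads to pick the runtime: `sdk: static` previously served `index.html`, while `sdk: gradio` now runs the file named in `app_file`. A minimal sketch of extracting those keys, using a hand-rolled parser for illustration rather than the Hub's actual implementation:

```python
import re

# Example frontmatter mirroring the keys above (a Space README starts with this block)
SPACE_README = """---
sdk: gradio
sdk_version: 4.36.1
app_file: app.py
pinned: false
license: mit
---

# Space content starts here
"""

def parse_frontmatter(text):
    """Extract key: value pairs from the block between the leading --- fences."""
    match = re.match(r"---\n(.*?)\n---", text, flags=re.DOTALL)
    if match is None:
        return {}
    config = {}
    for line in match.group(1).splitlines():
        key, sep, value = line.partition(":")
        if sep:  # skip lines without a colon
            config[key.strip()] = value.strip()
    return config

config = parse_frontmatter(SPACE_README)
print(config["sdk"], config["app_file"])  # gradio app.py
```

In the real Hub these keys are parsed as YAML; the regex version here is only meant to show which fields the commit changes and why deleting `index.html`/`style.css` is safe once `sdk` is no longer `static`.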
app.py ADDED
@@ -0,0 +1,296 @@
+__all__ = ['block']
+import gradio as gr
+import pandas as pd
+
+# Constants
+# =========
+
+# Disciplines
+DISCIPLINES = [
+    "Art & Sports",
+    "Business",
+    "Science",
+    "Health & Medicine",
+    "Embodied Tasks",
+    "Tech & Engineering",
+    "Game"
+]
+
+# Model Information Columns
+MODEL_INFO = [
+    "Model Name (clickable)"
+]
+
+# Column Names for DataFrame
+COLUMN_NAMES = MODEL_INFO + DISCIPLINES + ["Average"]
+
+# Data Types for DataFrame
+DATA_TITLE_TYPE = ['markdown'] + ['number'] * len(DISCIPLINES) + ['number']
+
+# Leaderboard Introduction
+LEADERBOARD_INTRODUCTION = """# MMWorld Leaderboard
+
+*"Towards Multi-discipline Multi-faceted World Model Evaluation in Videos"*
+🏆 Welcome to the **MMWorld** leaderboard! 🎦 *A new benchmark for multi-discipline, multi-faceted multimodal video understanding*
+[![GitHub](https://img.shields.io/badge/Code-GitHub-black?logo=github)](https://github.com/eric-ai-lab/MMWorld)
+
+<div style="display: flex; flex-wrap: wrap; align-items: center; gap: 10px;">
+    <a href='https://arxiv.org/abs/2406.08407'>
+        <img src='https://img.shields.io/badge/cs.CV-Paper-b31b1b?logo=arxiv&logoColor=red'>
+    </a>
+    <a href='https://mmworld-bench.github.io/'>
+        <img src='https://img.shields.io/badge/MMWorld-Website-green?logo=internet-explorer&logoColor=blue'>
+    </a>
+</div>
+
+"""
+
+SUBMIT_INTRODUCTION = """# Submit to the MMWorld Benchmark
+
+## 🎈
+Please obtain the evaluation file `*.json` by running the MMWorld evaluation code on GitHub, then upload the JSON file below.
+
+⚠️ The contact information you fill in will not be made public.
+"""
+
+TABLE_INTRODUCTION = """
+The MMWorld Leaderboard showcases the performance of various models across different disciplines. Select the disciplines you're interested in to see how models perform in those areas.
+"""
+
+LEADERBOARD_INFO = """
+Multimodal Large Language Models (MLLMs) demonstrate the emerging abilities of "world models": interpreting and reasoning about complex real-world dynamics. To assess these abilities, we posit videos are the ideal medium, as they
+encapsulate rich representations of real-world dynamics and causalities. To this end, we introduce MMWorld, a new benchmark for multi-discipline, multi-faceted multimodal video understanding. MMWorld distinguishes itself from previous
+video understanding benchmarks with two unique advantages: (1) multi-discipline, covering various disciplines that often require domain expertise for comprehensive understanding; (2) multi-faceted reasoning, including explanation, counterfactual
+thinking, future prediction, etc. MMWorld consists of a human-annotated dataset to evaluate MLLMs with questions about the whole videos and a synthetic dataset to analyze MLLMs within a single modality of perception.
+"""
+
+CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
+CITATION_BUTTON_TEXT = r"""@misc{he2024mmworld,
+    title={MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos},
+    author={Xuehai He and Weixi Feng and Kaizhi Zheng and Yujie Lu and Wanrong Zhu and Jiachen Li and Yue Fan and Jianfeng Wang and Linjie Li and Zhengyuan Yang and Kevin Lin and William Yang Wang and Lijuan Wang and Xin Eric Wang},
+    year={2024},
+    eprint={2406.08407},
+    archivePrefix={arXiv},
+    primaryClass={cs.CV}
+}"""
+
+# Data: Models and their scores
+data = {
+    "Model Name (clickable)": [
+        "Random Choice",
+        "GPT-4o",
+        "Claude 3.5 Sonnet",
+        "GPT-4V",
+        "Gemini 1.5 Pro",
+        "Video-LLaVA-7B",
+        "Video-Chat-7B",
+        "ChatUnivi-7B",
+        "mPLUG-Owl-7B",
+        "VideoChatGPT-7B",
+        "PandaGPT-7B",
+        "ImageBind-LLM-7B",
+        "X-Instruct-BLIP-7B",
+        "LWM-1M-JAX",
+        "Otter-7B",
+        "Video-LLaMA-2-13B"
+    ],
+    "Art & Sports": [25.03, 47.87, 54.58, 36.17, 37.12, 35.91, 39.53, 24.47, 29.16, 26.84, 25.33, 24.82, 21.08, 12.04, 17.12, 6.15],
+    "Business": [25.09, 91.14, 63.87, 81.59, 76.69, 51.28, 51.05, 60.84, 64.10, 39.16, 42.66, 42.66, 15.85, 17.48, 18.65, 21.21],
+    "Science": [26.44, 73.78, 59.85, 66.52, 62.81, 56.30, 30.81, 52.00, 47.41, 36.45, 39.41, 32.15, 22.52, 15.41, 9.33, 22.22],
+    "Health & Medicine": [25.00, 83.33, 54.51, 73.61, 76.74, 32.64, 46.18, 61.11, 60.07, 53.12, 38.54, 30.21, 28.47, 20.49, 6.94, 31.25],
+    "Embodied Tasks": [26.48, 62.94, 30.99, 55.48, 43.59, 63.17, 40.56, 46.15, 23.78, 36.60, 35.43, 46.85, 18.41, 25.87, 13.29, 15.38],
+    "Tech & Engineering": [30.92, 75.53, 58.87, 61.35, 69.86, 58.16, 39.36, 56.74, 41.84, 41.49, 41.84, 41.49, 22.34, 21.99, 15.96, 19.15],
+    "Game": [25.23, 80.32, 59.44, 73.49, 66.27, 49.00, 44.98, 52.61, 62.25, 36.55, 40.16, 41.37, 26.10, 11.65, 15.26, 24.90]
+}
+
+# Create DataFrame
+df_full = pd.DataFrame(data)
+
+# Calculate the average score over the given disciplines
+def calculate_average(df, disciplines):
+    df['Average'] = df[disciplines].mean(axis=1)
+    return df
+
+# Build the leaderboard DataFrame for the selected disciplines
+def get_leaderboard_df(selected_disciplines):
+    if not selected_disciplines:
+        selected_disciplines = DISCIPLINES  # If none selected, default to all
+    # Copy the full DataFrame
+    df = df_full.copy()
+    # Calculate the average based on selected disciplines
+    df['Average'] = df[selected_disciplines].mean(axis=1)
+    # Select columns to display
+    columns_to_display = MODEL_INFO + selected_disciplines + ['Average']
+    df = df[columns_to_display]
+    # Sort by Average descending
+    df = df.sort_values(by='Average', ascending=False)
+    return df
+
+# Round all score columns to two decimal places for display
+def round_scores(df):
+    for column in df.columns[1:]:
+        df[column] = df[column].round(2)
+    return df
+
+# Gradio app
+block = gr.Blocks()
+
+with block:
+    gr.Markdown(
+        LEADERBOARD_INTRODUCTION
+    )
+    with gr.Tabs(elem_classes="tab-buttons") as tabs:
+        with gr.TabItem("📊 MMWorld", elem_id="vbench-tab-table", id=1):
+            with gr.Row():
+                with gr.Accordion("Citation", open=False):
+                    citation_button = gr.Textbox(
+                        value=CITATION_BUTTON_TEXT,
+                        label=CITATION_BUTTON_LABEL,
+                        elem_id="citation-button",
+                        lines=14,
+                    )
+            gr.Markdown(
+                TABLE_INTRODUCTION
+            )
+            with gr.Row():
+                with gr.Column(scale=1):
+                    select_all_button = gr.Button("Select All")
+                    deselect_all_button = gr.Button("Deselect All")
+
+                with gr.Column(scale=4):
+                    # Selection for disciplines
+                    checkbox_group = gr.CheckboxGroup(
+                        choices=DISCIPLINES,
+                        value=DISCIPLINES,  # All disciplines selected by default
+                        label="Evaluation discipline",
+                        interactive=True,
+                    )
+
+            # Initial DataFrame
+            initial_df = get_leaderboard_df(DISCIPLINES)
+            initial_df = round_scores(initial_df)
+
+            data_component = gr.Dataframe(
+                value=initial_df,
+                headers=COLUMN_NAMES,
+                type="pandas",
+                datatype=DATA_TITLE_TYPE,
+                interactive=False,
+                visible=True,
+                height=700,
+            )
+
+            # Callbacks for buttons and checkbox changes
+            def update_table(selected_disciplines):
+                updated_df = get_leaderboard_df(selected_disciplines)
+                updated_df = round_scores(updated_df)
+                return updated_df
+
+            select_all_button.click(
+                fn=lambda: gr.update(value=DISCIPLINES),
+                inputs=None,
+                outputs=checkbox_group
+            ).then(
+                fn=update_table,
+                inputs=checkbox_group,
+                outputs=data_component
+            )
+
+            deselect_all_button.click(
+                fn=lambda: gr.update(value=[]),
+                inputs=None,
+                outputs=checkbox_group
+            ).then(
+                fn=update_table,
+                inputs=checkbox_group,
+                outputs=data_component
+            )
+
+            checkbox_group.change(
+                fn=update_table,
+                inputs=checkbox_group,
+                outputs=data_component
+            )
+
+        # About Tab
+        with gr.TabItem("📝 About", elem_id="mmworld-table", id=2):
+            gr.Markdown(LEADERBOARD_INFO, elem_classes="markdown-text")
+
+        # Submit Tab
+        with gr.TabItem("🚀 Submit here!", elem_id="mmworld-tab-table", id=3):
+            gr.Markdown(LEADERBOARD_INTRODUCTION, elem_classes="markdown-text")
+
+            with gr.Row():
+                gr.Markdown(SUBMIT_INTRODUCTION, elem_classes="markdown-text")
+
+            with gr.Row():
+                gr.Markdown("# ✉️✨ Submit your model evaluation JSON file here!", elem_classes="markdown-text")
+
+            with gr.Row():
+                with gr.Column():
+                    model_name_textbox = gr.Textbox(
+                        label="Model Name", placeholder="Required field"
+                    )
+                    revision_name_textbox = gr.Textbox(
+                        label="Revision Model Name (Optional)", placeholder="GPT4V"
+                    )
+
+                with gr.Column():
+                    model_link = gr.Textbox(
+                        label="Project Page/Paper Link", placeholder="Required field"
+                    )
+                    team_name = gr.Textbox(
+                        label="Your Team Name (if left blank, shown as 'User Upload')", placeholder="User Upload"
+                    )
+                    contact_email = gr.Textbox(
+                        label="E-Mail (will not be displayed)", placeholder="Required field"
+                    )
+
+                with gr.Column():
+                    input_file = gr.File(label="Click to Upload a JSON File", file_count="single", type='binary')
+                    submit_button = gr.Button("Submit Eval")
+                    submit_succ_button = gr.Markdown("Submission successful! Please refresh and return to the leaderboard.", visible=False)
+                    fail_textbox = gr.Markdown('<span style="color:red;">Please ensure that the `Model Name`, `Project Page`, and `Email` fields are filled in correctly.</span>', elem_classes="markdown-text", visible=False)
+
+            submission_result = gr.Markdown()
+
+            # Placeholder function for submission
+            def add_new_eval(
+                input_file,
+                model_name_textbox: str,
+                revision_name_textbox: str,
+                model_link: str,
+                team_name: str,
+                contact_email: str
+            ):
+                if input_file is None:
+                    return gr.update(visible=True), gr.update(visible=False), gr.update(visible=True)
+                if model_link == '' or model_name_textbox == '' or contact_email == '':
+                    return gr.update(visible=True), gr.update(visible=False), gr.update(visible=True)
+                # Process the uploaded file and submission details here
+                # For now, we just simulate a successful submission
+                return gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)
+
+            submit_button.click(
+                add_new_eval,
+                inputs=[
+                    input_file,
+                    model_name_textbox,
+                    revision_name_textbox,
+                    model_link,
+                    team_name,
+                    contact_email
+                ],
+                outputs=[submit_button, submit_succ_button, fail_textbox]
+            )
+
+            def refresh_data():
+                value1 = get_leaderboard_df(DISCIPLINES)
+                value1 = round_scores(value1)
+                return value1
+
+            with gr.Row():
+                data_run = gr.Button("Refresh")
+                data_run.click(fn=refresh_data, inputs=None, outputs=data_component)
+
+block.launch()
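The table logic the new app.py wires into Gradio (per-row mean over the selected disciplines, descending sort, rounding for display) can be exercised without any UI. A minimal standalone sketch with made-up scores and a hypothetical `leaderboard` helper:

```python
import pandas as pd

# Toy scores; the real app uses 7 disciplines and 16 models
df = pd.DataFrame({
    "Model": ["A", "B", "C"],
    "Science": [70.0, 55.5, 80.0],
    "Game": [50.0, 60.5, 45.0],
})

def leaderboard(df, disciplines):
    out = df.copy()
    out["Average"] = out[disciplines].mean(axis=1)     # per-row mean of selected columns
    out = out[["Model"] + disciplines + ["Average"]]   # restrict to displayed columns
    out = out.sort_values("Average", ascending=False)  # best model first
    out["Average"] = out["Average"].round(2)           # display rounding
    return out.reset_index(drop=True)

board = leaderboard(df, ["Science", "Game"])
print(board["Model"].tolist())  # ['C', 'A', 'B']
```

Deselecting a discipline in the app simply shrinks the `disciplines` list, which is why the ranking can change as checkboxes are toggled: the average is recomputed over only the remaining columns.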
index.html DELETED
@@ -1,19 +0,0 @@
-<!doctype html>
-<html>
-  <head>
-    <meta charset="utf-8" />
-    <meta name="viewport" content="width=device-width" />
-    <title>My static Space</title>
-    <link rel="stylesheet" href="style.css" />
-  </head>
-  <body>
-    <div class="card">
-      <h1>Welcome to your static Space!</h1>
-      <p>You can modify this app directly by editing <i>index.html</i> in the Files and versions tab.</p>
-      <p>
-        Also don't forget to check the
-        <a href="https://huggingface.co/docs/hub/spaces" target="_blank">Spaces documentation</a>.
-      </p>
-    </div>
-  </body>
-</html>
requirements.txt ADDED
@@ -0,0 +1,3 @@
+gradio==4.36.1
+numpy
+pandas
style.css DELETED
@@ -1,28 +0,0 @@
-body {
-  padding: 2rem;
-  font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
-  font-size: 16px;
-  margin-top: 0;
-}
-
-p {
-  color: rgb(107, 114, 128);
-  font-size: 15px;
-  margin-bottom: 10px;
-  margin-top: 5px;
-}
-
-.card {
-  max-width: 620px;
-  margin: 0 auto;
-  padding: 16px;
-  border: 1px solid lightgray;
-  border-radius: 16px;
-}
-
-.card p:last-child {
-  margin-bottom: 0;
-}