Zhaorun committed (verified)
Commit abde234 · 1 Parent(s): 51f3dd0

Update README.md

Files changed (1)
README.md (+30 −2)
README.md CHANGED
@@ -7,7 +7,35 @@ sdk: static
 pinned: false
 ---
 
-# MJ-Bench Team: Align
+# MJ-Bench Team
+
+[MJ-Bench-Team](https://mj-bench.github.io/) was co-founded by Stanford University, UNC-Chapel Hill, and the University of Chicago. We aim to align modern foundation models with multimodal judges to enhance reliability, safety, and performance.
+
+<p align="center">
+<img
+  src="https://raw.githubusercontent.com/MJ-Bench/MJ-Bench.github.io/main/static/images/Stanford.jpg"
+  alt="Stanford University"
+  width="350"
+  style="display:inline-block; margin:0 40px; vertical-align:middle;"/>
+<img
+  src="https://raw.githubusercontent.com/MJ-Bench/MJ-Bench.github.io/main/static/images/UNC-logo.png"
+  alt="UNC Chapel Hill"
+  width="190"
+  style="display:inline-block; margin:0 -10px; vertical-align:middle;"/>
+<img
+  src="https://raw.githubusercontent.com/MJ-Bench/MJ-Bench.github.io/main/static/images/UChicago-logo.jpg"
+  alt="University of Chicago"
+  width="160"
+  style="display:inline-block; margin:0 140px; vertical-align:middle;"/>
+</p>
+
+---
+
+## Recent News
+- 🎉 **MJ-PreferGen** has been **accepted at ICLR 2025**! Check out the paper: [*MJ-PreferGen: An Automatic Framework for Preference Data Synthesis*](https://openreview.net/forum?id=WpZyPk79Fu)
+- 🔥 We have released [**MJ-Video**](https://aiming-lab.github.io/MJ-VIDEO.github.io/). All datasets and model checkpoints are available [here](https://huggingface.co/MJ-Bench)!
+
+---
 
 ## 😎 [**MJ-Video**: Fine-Grained Benchmarking and Rewarding Video Preferences in Video Generation](https://aiming-lab.github.io/MJ-VIDEO.github.io/)
 
@@ -39,7 +67,7 @@ We evaluate a wide range of multimodal judges, including:
 - 4 closed-source VLMs (e.g., GPT-4, Claude 3)
 
 <p align="center">
-<img src="https://github.com/MJ-Bench/MJ-Bench.github.io/blob/main/static/images/dataset_overview_new.png" alt="MJ-Bench Dataset Overview" width="80%"/>
+<img src="https://raw.githubusercontent.com/MJ-Bench/MJ-Bench/main/assets/overview_new.png" alt="MJ-Bench Dataset Overview" width="80%"/>
 </p>
 
 🔥 **We are actively updating the [leaderboard](https://mj-bench.github.io/)!**