Update paper link
src/about.py (+1 -1)
@@ -29,7 +29,7 @@ MJB_LOGO = '<img src="" alt="Logo" style="width: 30%; display: block; margin: au
 INTRODUCTION_TEXT = """
 # Multimodal Judge Benchmark (MJ-Bench): Is Your Multimodal Reward Model Really a Good Judge?
 ### Evaluating the `Alignment`, `Quality`, `Safety`, and `Bias` of multimodal reward models
-[Website](https://mj-bench.github.io) | [Code](https://github.com/MJ-Bench/MJ-Bench) | [Eval. Dataset](https://huggingface.co/datasets/MJ-Bench/MJ-Bench) | [Results](https://huggingface.co/datasets/MJ-Bench/MJ-Bench-Results) | [Refined Model via RMs](https://huggingface.co/collections/MJ-Bench/aligned-diffusion-model-via-dpo-667f8b71f35c3ff47acafd43) | [Paper](https://arxiv.org) | Total models: {}
+[Website](https://mj-bench.github.io) | [Code](https://github.com/MJ-Bench/MJ-Bench) | [Eval. Dataset](https://huggingface.co/datasets/MJ-Bench/MJ-Bench) | [Results](https://huggingface.co/datasets/MJ-Bench/MJ-Bench-Results) | [Refined Model via RMs](https://huggingface.co/collections/MJ-Bench/aligned-diffusion-model-via-dpo-667f8b71f35c3ff47acafd43) | [Paper](https://arxiv.org/abs/2407.04842) | Total models: {}
 """

 # Which evaluations are you running? how can people reproduce what you have?
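For context, `INTRODUCTION_TEXT` ends with a `Total models: {}` placeholder, which suggests the Space fills it in via `str.format` when rendering the leaderboard header. A minimal sketch of that usage, assuming a hypothetical `num_models` value (the real app presumably derives it from its results dataset):

```python
# A minimal sketch, not the Space's actual rendering code.
# INTRODUCTION_TEXT ends with "Total models: {}", so it is presumably
# filled via str.format; `num_models` is a hypothetical placeholder value.
from src.about import INTRODUCTION_TEXT

num_models = 50  # hypothetical: number of reward models on the leaderboard
intro_markdown = INTRODUCTION_TEXT.format(num_models)
print(intro_markdown)
```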