---
license: apache-2.0
---

<p align="center">
<img src="https://raw.githubusercontent.com/mu-cai/ViP-LLaVA/main/images/vip-llava_arch.png" width="600"> <br>
</p>


# [ViP-Bench: Making Large Multimodal Models Understand Arbitrary Visual Prompts](https://vip-llava.github.io/)

ViP-Bench is a region-level benchmark for evaluating large multimodal models, curated by the University of Wisconsin-Madison. We provide two kinds of visual prompts: (1) bounding boxes, and (2) diverse human-drawn visual prompts.

**Evaluation Code** See [https://github.com/mu-cai/ViP-LLaVA/blob/main/docs/Evaluation.md](https://github.com/mu-cai/ViP-LLaVA/blob/main/docs/Evaluation.md)

**Leaderboard** See [https://paperswithcode.com/sota/visual-question-answering-on-vip-bench](https://paperswithcode.com/sota/visual-question-answering-on-vip-bench)


**Evaluation Server** Please refer to [https://huggingface.co/spaces/mucai/ViP-Bench_Evaluator](https://huggingface.co/spaces/mucai/ViP-Bench_Evaluator) to use our evaluation server. 



## Source annotation

In `source_image`, we provide the source plain images along with their bounding box/mask annotations. Researchers can use this grounding information to fill in the special tokens such as `<obj>` in the `"question"` entry of `vip-bench-meta-data.json`. For example, `<obj>` can be replaced by textual coordinates to evaluate region-level multimodal models.
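
Below is a minimal sketch of this substitution. It assumes the metadata JSON maps sample IDs to entries containing a `"question"` field; the `"boxes"` field name and the `[x1, y1, x2, y2]` pixel-coordinate format are illustrative assumptions and may differ from the actual annotation files.

```python
import json

# Load the ViP-Bench metadata (structure assumed: {sample_id: {"question": ..., ...}})
with open("vip-bench-meta-data.json") as f:
    meta = json.load(f)

def to_text_coords(box):
    # Assumption: box is [x1, y1, x2, y2] in pixel coordinates
    x1, y1, x2, y2 = box
    return f"[{x1}, {y1}, {x2}, {y2}]"

for sample_id, sample in meta.items():
    question = sample["question"]
    # "boxes" is a hypothetical field holding one box per <obj> placeholder,
    # taken from the bounding box annotations in `source_image`
    for box in sample.get("boxes", []):
        question = question.replace("<obj>", to_text_coords(box), 1)
    sample["question_with_coords"] = question
```

The same pattern applies if a model expects a different textual region format (e.g., normalized coordinates); only `to_text_coords` needs to change.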