txiong23 committed
Commit 498f2d7
1 Parent(s): 35dc2c5

Update README.md

Files changed (1): README.md +1 -1
README.md CHANGED
@@ -15,7 +15,7 @@ tags:
 `llava-critic-7b` is the first open-source large multimodal model (LMM) designed as a generalist evaluator for assessing model performance across diverse multimodal scenarios. Built on the foundation of `llava-onevision-7b-ov`, it has been finetuned on [LLaVA-Critic-113k](https://huggingface.co/datasets/lmms-lab/llava-critic-113k) dataset to develop its "critic" capacities.
 
 LLaVA-Critic excels in two primary scenarios:
-- 1️⃣ LMM-as-a-Judge: It delivers judgement closely aligned with human, and provides concrete, image-grounded reasons. An open-source alternative to GPT for evaluations.
+- 1️⃣ LMM-as-a-Judge: It delivers judgments closely aligned with human, and provides concrete, image-grounded reasons. An open-source alternative to GPT for evaluations.
 - 2️⃣ Preference Learning: Reliable reward signals power up visual chat, leading to LLaVA-OV-Chat [7B](https://huggingface.co/lmms-lab/llava-onevision-qwen2-7b-ov-chat)/[72B](https://huggingface.co/lmms-lab/llava-onevision-qwen2-72b-ov-chat).
 
 For further details, please refer to the following resources: