Why does GPT-4V score exceedingly high on LLaVA Bench?

#1
by Yhyu13 - opened

Hi,

https://cdn-uploads.huggingface.co/production/uploads/6566e0c493e30c8a60048eb3/ypXZxb4HE-jDPJU9115bi.png
From this figure in your paper, GPT-4V scores 90+ on LLaVA Bench, which is extraordinarily higher than the other models.

What could be the potential reason for such an anomaly?

Thanks!

OpenBMB org

Hi Yhyu13, thank you for your interest and such a good question! I guess the potential reasons are fourfold:

  1. GPT-4V outputs are generally much longer than outputs from other models. Specifically, the average response lengths on LLaVA Bench of GPT-4V, MiniGemini 34B, and RLAIF-V-7B are 181, 124, and 110 words, respectively.
  2. GPT-4V, inheriting the strong text generation capability of GPT-4, can generate more well-organized text compared with other models.
  3. GPT-4, which serves as the judge on LLaVA Bench, prefers its own text style, thus resulting in higher evaluation scores for GPT-4V.
  4. GPT-4 prefers long answers, which may be partly caused by the reason above.
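As a minimal sketch of how the length comparison in point 1 could be computed: the helper below averages whitespace-separated word counts over a set of model responses. The model names and response strings here are placeholders, not actual benchmark outputs.

```python
# Hypothetical sketch: comparing average response length across models.
# The data below is illustrative toy text, not real LLaVA Bench output.

def avg_word_count(responses):
    """Mean number of whitespace-separated words per response."""
    return sum(len(r.split()) for r in responses) / len(responses)

# Placeholder responses standing in for real benchmark dumps.
outputs = {
    "verbose_model": [
        "A long and detailed description of the scene, covering every visible object.",
        "Another elaborate answer that expands on context, layout, and small details.",
    ],
    "concise_model": [
        "A short answer.",
        "Brief reply only.",
    ],
}

for model, responses in outputs.items():
    print(f"{model}: {avg_word_count(responses):.1f} words on average")
```

Running this over the real per-model response files would reproduce figures like the 181 / 124 / 110 word averages cited above.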
