AlignMMBench: Evaluating Chinese Multimodal Alignment in Large Vision-Language Models


πŸ”₯ News

  • 2024.06.14 🌟 We released AlignMMBench, a comprehensive alignment benchmark for vision-language models!

πŸ‘€ Introduction to AlignMMBench

AlignMMBench is a multimodal alignment benchmark that covers both single-turn and multi-turn dialogue scenarios. It includes three categories and thirteen capability tasks, with a total of 4,978 question-answer pairs.
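
As a quick orientation, the snippet below shows how the data could be loaded and inspected with the Hugging Face `datasets` library. This is a minimal sketch: the repository ID, split name, and field names are assumptions, so please check the files on this page for the exact layout.

```python
# Minimal sketch, assuming the dataset is hosted on the Hugging Face Hub.
# The repo ID, split name, and column names below are placeholders.
from datasets import load_dataset

ds = load_dataset("THUDM/AlignMMBench", split="test")  # hypothetical repo ID and split
print(ds)     # column names and number of rows
print(ds[0])  # one question-answer record (image, prompt, reference answer, ...)
```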

Features

  1. High-Quality Annotations: A reliable benchmark built with meticulous human annotation and multi-stage quality control.

  2. Self Critic: To improve the controllability of alignment evaluation, we introduce CritiqueVLM, a rule-calibrated and carefully finetuned evaluator based on ChatGLM3-6B. Its consistency with human judgements surpasses that of GPT-4 (a sketch of the general evaluation pattern follows this list).

  3. Diverse Data: Three categories and thirteen capability tasks, including both single-turn and multi-turn dialogue scenarios.

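The CritiqueVLM checkpoint and its exact grading prompt are described in the paper rather than reproduced here, but the sketch below illustrates the general LLM-as-judge pattern on top of the public ChatGLM3-6B base model. The prompt wording and scoring scale are illustrative assumptions, not the released rubric or weights.

```python
# Minimal LLM-as-judge sketch using the public ChatGLM3-6B checkpoint.
# This is NOT the released CritiqueVLM model or prompt; the grading
# instruction and score scale below are illustrative assumptions.
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "THUDM/chatglm3-6b"  # base model; CritiqueVLM is a finetuned variant
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=True).half().cuda().eval()

def grade(question: str, reference: str, answer: str) -> str:
    """Ask the evaluator to score a model answer against the reference."""
    prompt = (
        "You are a strict grader. Given a question, a reference answer, and a "
        "model answer, output a score from 1 to 10 and a short justification.\n"
        f"Question: {question}\nReference answer: {reference}\nModel answer: {answer}"
    )
    response, _history = model.chat(tokenizer, prompt, history=[])
    return response
```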
πŸ“ˆ Results

License

The use of the dataset and the original images is governed by the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license, as detailed in the LICENSE.

If you believe that any content in this dataset infringes on your rights, please contact us at wenmeng.yu@aminer.cn to request its removal.

Citation

If you find our work helpful for your research, please consider citing our work.

@misc{wu2024alignmmbench,
      title={AlignMMBench: Evaluating Chinese Multimodal Alignment in Large Vision-Language Models}, 
      author={Yuhang Wu and Wenmeng Yu and Yean Cheng and Yan Wang and Xiaohan Zhang and Jiazheng Xu and Ming Ding and Yuxiao Dong},
      year={2024},
      eprint={2406.09295},
      archivePrefix={arXiv}
}