---
license: apache-2.0
---
# LOVA3: Learning to Visual Question Answering, Asking and Assessment

Henry Hengyuan Zhao · Pan Zhou · Difei Gao · Mike Zheng Shou

Show Lab, National University of Singapore | Singapore Management University
## Abstract
Question answering, asking, and assessment are three innate human traits crucial for understanding the world and acquiring knowledge. By enhancing these capabilities, humans can more effectively utilize data, leading to better comprehension and learning outcomes. However, current Multimodal Large Language Models (MLLMs) primarily focus on question answering, often neglecting the full potential of questioning and assessment skills. In this study, we introduce LOVA3, an innovative framework named "Learning tO Visual Question Answering, Asking and Assessment," designed to equip MLLMs with these additional capabilities.
## 📢 News
* [10/16/2024] 🔥 We release the [webpage](https://zhaohengyuan1.github.io/lova3.github.io/).
* [09/26/2024] 🔥 LOVA3 is accepted by NeurIPS 2024.
* [07/01/2024] 🔥 Related work [Genixer](https://github.com/zhaohengyuan1/Genixer) is accepted by ECCV 2024.
* [05/24/2024] 🔥 We release the code of LOVA3, the [EvalQABench](https://huggingface.co/datasets/hhenryz/EvalQABench), the training dataset [Mixed_VQA_GenQA_EvalQA_1.5M.jsonl](https://huggingface.co/datasets/hhenryz/Mixed_VQA_GenQA_EvalQA_1.5M), and the checkpoint [LOVA3-llava-v1.5-7b](https://huggingface.co/hhenryz/LOVA3-llava-v1.5-7b); a download sketch for the training data follows this list.
* [05/23/2024] 🔥 We release the LOVA3 [paper](https://arxiv.org/abs/2405.14974).
## 💡 Key Contributions
* **LOVA3** - To the best of our knowledge, LOVA3 is the first effort to imbue an MLLM with question-asking and assessment abilities during training, inspired by human learning mechanisms, to build a more robust and intelligent model.
* **EvalQABench** - We build EvalQABench, the first benchmark for evaluating VQA answer correction, to advance future research in this direction.
* **Performance Improvement** - Training with the LOVA3 framework yields consistent improvements on 10 representative benchmarks.
## Model Weights
Pretrained weights: [LOVA3-llava-v1.5-7b](https://huggingface.co/hhenryz/LOVA3-llava-v1.5-7b)
Download them with the following command:
```bash
git clone https://huggingface.co/hhenryz/LOVA3-llava-v1.5-7b
```
## 📖 Citation
If you find LOVA3 useful, please cite using this BibTeX:
```bibtex
@misc{zhao2024lova3learningvisualquestion,
      title={LOVA3: Learning to Visual Question Answering, Asking and Assessment},
      author={Henry Hengyuan Zhao and Pan Zhou and Difei Gao and Mike Zheng Shou},
      year={2024},
      eprint={2405.14974},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2405.14974},
}
```