HalDet-LLaVA
HalDet-LLaVA is designed for multimodal hallucination detection and achieves detection performance close to that of GPT-4V.
HalDet-LLaVA is trained on the MHaluBench training set using LLaVA-v1.5; the specific training parameters can be found in the file finetune_task_lora.sh.
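For orientation, a LoRA fine-tuning configuration of this kind might look like the following sketch using the `peft` library. The concrete values (rank, alpha, dropout, target modules) are illustrative assumptions only; the authoritative hyperparameters are those in finetune_task_lora.sh.

```python
from peft import LoraConfig

# Illustrative LoRA settings; the actual values used for HalDet-LLaVA
# are defined in finetune_task_lora.sh.
lora_config = LoraConfig(
    r=128,                      # LoRA rank (assumed)
    lora_alpha=256,             # scaling factor (assumed)
    lora_dropout=0.05,          # dropout on LoRA layers (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```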
We trained HalDet-LLaVA on one A800 GPU in one hour. If you don't have enough GPU resources, we will soon provide distributed training scripts for the model.
You can run inference with HalDet-LLaVA using inference.py.
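As a minimal sketch, assuming the HalDet-LLaVA weights are available as a transformers-compatible LLaVA checkpoint, inference could look like the following; the checkpoint path and prompt template are placeholder assumptions, and inference.py remains the authoritative entry point.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Placeholder path; point this at the actual HalDet-LLaVA checkpoint.
MODEL_PATH = "path/to/HalDet-LLaVA"

processor = AutoProcessor.from_pretrained(MODEL_PATH)
model = LlavaForConditionalGeneration.from_pretrained(
    MODEL_PATH, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("example.jpg")
# The prompt template below is an assumption; see inference.py for the
# exact format expected by the model.
prompt = "USER: <image>\nDoes the following claim contain hallucinations? ... ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```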
For more detailed information about HalDet-LLaVA and the training dataset, please refer to EasyDetect and its README.