---
datasets:
- liuhaotian/LLaVA-Pretrain
pipeline_tag: visual-question-answering
---
# Model

llava-v1.5-7b-xtuner-pretrain is a LLaVA projector pretrained by XTuner from Vicuna-7B-v1.5 and CLIP-ViT-Large-patch14-336 on the LLaVA-Pretrain dataset.

The fine-tuned LLaVA model can be found at [xtuner/llava-v1.5-7b-xtuner](https://huggingface.co/xtuner/llava-v1.5-7b-xtuner).
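As a minimal sketch, the projector weights can be fetched from the Hub with `huggingface_hub` (the repo id is taken from this card; loading the checkpoint for inference requires the XTuner toolchain, which is not shown here):

```python
# Sketch: download the pretrained projector weights from the Hub.
# Assumes the `huggingface_hub` package is installed; only the repo id
# comes from this card, everything else is illustrative.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="xtuner/llava-v1.5-7b-xtuner-pretrain")
print(local_dir)  # local cache path containing the projector checkpoint
```
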
# Citation

```bibtex
@misc{2023xtuner,
    title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
    author={XTuner Contributors},
    howpublished={\url{https://github.com/InternLM/xtuner}},
    year={2023}
}
```