---
datasets:
- liuhaotian/LLaVA-Pretrain
pipeline_tag: visual-question-answering
---
[![Generic badge](https://img.shields.io/badge/GitHub-%20XTuner-black.svg)](https://github.com/InternLM/xtuner)
## Model

llava-internlm2-7b-pretrain is a LLaVA projector pretrained by [XTuner](https://github.com/InternLM/xtuner) with [InternLM2-Chat-7B](https://huggingface.co/internlm/internlm2-chat-7b) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) on the [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) dataset. The fine-tuned LLaVA model is available at [xtuner/llava-internlm2-7b](https://huggingface.co/xtuner/llava-internlm2-7b). A sketch of how this pretraining stage could be reproduced with XTuner follows the citation below.

## Citation

```bibtex
@misc{2023xtuner,
    title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
    author={XTuner Contributors},
    howpublished = {\url{https://github.com/InternLM/xtuner}},
    year={2023}
}
```
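## Reproducing the pretraining (sketch)

The commands below are a minimal sketch of the generic XTuner workflow (install, list built-in configs, launch training), not a verified recipe from this repository. In particular, the config name shown is an assumption based on XTuner's naming convention; run `xtuner list-cfg` to confirm the exact name before training.

```bash
# Install XTuner with the DeepSpeed extra (assumed; plain `pip install -U xtuner` also works).
pip install -U 'xtuner[deepspeed]'

# List built-in configs, filtering for LLaVA + InternLM2 recipes.
xtuner list-cfg -p llava_internlm2

# Launch projector pretraining. The config name below is an assumption following
# XTuner's naming convention; replace it with the name printed by `list-cfg`.
xtuner train llava_internlm2_chat_7b_clip_vit_large_p14_336_e1_gpu8_pretrain --deepspeed deepspeed_zero2
```

The projector weights produced by this stage are what this repository hosts; the subsequent visual instruction tuning stage yields the full [xtuner/llava-internlm2-7b](https://huggingface.co/xtuner/llava-internlm2-7b) model.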