---
datasets:
- liuhaotian/LLaVA-Pretrain
pipeline_tag: visual-question-answering
---

<div align="center">
  <img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>

[![Generic badge](https://img.shields.io/badge/GitHub-%20XTuner-black.svg)](https://github.com/InternLM/xtuner)

</div>

## Model

llava-v1.5-7b-xtuner-pretrain is a LLaVA projector pretrained from [Vicuna-7B-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) on the [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) dataset, using [XTuner](https://github.com/InternLM/xtuner).

The fine-tuned LLaVA model is available at [xtuner/llava-v1.5-7b-xtuner](https://huggingface.co/xtuner/llava-v1.5-7b-xtuner).
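
As a quick sanity check, the sketch below downloads this repo and prints the name and shape of each projector tensor. It is a minimal example, not an official loading recipe: the glob patterns and the nested `state_dict` handling are assumptions about the checkpoint layout, so adapt them to the files actually present in the repo.

```python
from pathlib import Path

import torch
from huggingface_hub import snapshot_download

# Download the repo contents into the local Hugging Face cache.
local_dir = snapshot_download(repo_id="xtuner/llava-v1.5-7b-xtuner-pretrain")

# Look for a serialized PyTorch checkpoint. NOTE: the file extensions below
# are an assumption, not a documented contract of this repo.
ckpt_paths = sorted(Path(local_dir).rglob("*.pth")) + sorted(Path(local_dir).rglob("*.bin"))
assert ckpt_paths, "no checkpoint file found -- inspect the repo contents"

state_dict = torch.load(ckpt_paths[0], map_location="cpu")
if "state_dict" in state_dict:
    # Some trainers nest the weights one level down; unwrap if so.
    state_dict = state_dict["state_dict"]

# Print each tensor's name and shape to see the projector's structure.
for name, tensor in state_dict.items():
    if hasattr(tensor, "shape"):
        print(name, tuple(tensor.shape))
```

In the LLaVA-1.5 design, the projector is a small MLP that maps CLIP ViT-L/14-336 visual features (1024-dim) into the Vicuna-7B embedding space (4096-dim), so the printed shapes should reflect that mapping.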
## Citation

```bibtex
@misc{2023xtuner,
    title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
    author={XTuner Contributors},
    howpublished = {\url{https://github.com/InternLM/xtuner}},
    year={2023}
}
```