tinyllava committed on
Commit
8564ba5
1 Parent(s): 3ace009

Update README.md

Files changed (1)
  1. README.md +6 -3
README.md CHANGED
@@ -3,8 +3,11 @@ license: apache-2.0
  pipeline_tag: image-text-to-text
  ---
 
- ### TinyLLaVA
- We introduce TinyLLaVA-Phi-2-SigLIP-3.1B, a small-scale large multimodal model (LMM) trained with the TinyLLaVA Factory codebase. For the LLM and vision tower, we choose [Phi-2](https://huggingface.co/microsoft/phi-2) and [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384), respectively. The model is trained on the [ShareGPT4V](https://github.com/InternLM/InternLM-XComposer/blob/main/projects/ShareGPT4V/docs/Data.md) dataset.
+ ### <span style="font-size:2em;">TinyLLaVA</span>
+ [![hf_space](https://img.shields.io/badge/🤗-%20Open%20In%20HF-blue.svg)](https://huggingface.co/tinyllava) [![arXiv](https://img.shields.io/badge/Arxiv-2402.14289-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2402.14289) [![Github](https://img.shields.io/badge/Github-Github-orange.svg)](https://github.com/TinyLLaVA/TinyLLaVA_Factory) [![Doc](https://img.shields.io/badge/Doc-Document-logo=read%20the%20docs&logoColor=white&label=Doc)](https://tinyllava-factory.readthedocs.io/en/latest/) [![Demo](https://img.shields.io/badge/Demo-Demo-red.svg)](http://8843843nmph5.vicp.fun/#/)
+ TinyLLaVA has released a family of small-scale Large Multimodal Models (LMMs), ranging from 1.4B to 3.1B parameters. Our best model, TinyLLaVA-Phi-2-SigLIP-3.1B, achieves better overall performance than existing 7B models such as LLaVA-1.5 and Qwen-VL.
+
+ Here, we introduce TinyLLaVA-Phi-2-SigLIP-3.1B, which is trained with the TinyLLaVA Factory codebase. For the LLM and vision tower, we choose [Phi-2](https://huggingface.co/microsoft/phi-2) and [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384), respectively. The model is trained on the [ShareGPT4V](https://github.com/InternLM/InternLM-XComposer/blob/main/projects/ShareGPT4V/docs/Data.md) dataset.
 
  ### Usage
  Execute the following test code:
@@ -28,7 +31,7 @@ print('runing time:', genertaion_time)
  | model_name | vqav2 | gqa | sqa | textvqa | MM-VET | POPE | MME | MMMU |
  | :----------------------------------------------------------: | ----- | ------- | ----- | ----- | ------- | ----- | ------ | ------ |
  | [LLaVA-1.5-7B](https://huggingface.co/llava-hf/llava-1.5-7b-hf) | 78.5 | 62.0 | 66.8 | 58.2 | 30.5 | 85.9 | 1510.7 | - |
- | [bczhou/TinyLLaVA-3.1B](https://huggingface.co/bczhou/TinyLLaVA-3.1B) | 79.9 | 62.0 | 69.1 | 59.1 | 32.0 | 86.4 | 1464.9 | - |
+ | [bczhou/TinyLLaVA-3.1B](https://huggingface.co/bczhou/TinyLLaVA-3.1B) (legacy model) | 79.9 | 62.0 | 69.1 | 59.1 | 32.0 | 86.4 | 1464.9 | - |
  | [tinyllava/TinyLLaVA-Gemma-SigLIP-2.4B](https://huggingface.co/tinyllava/TinyLLaVA-Gemma-SigLIP-2.4B) | 78.4 | 61.6 | 64.4 | 53.6 | 26.9 | 86.4 | 1339.0 | 31.7 |
  | [tinyllava/TinyLLaVA-Phi-2-SigLIP-3.1B](https://huggingface.co/tinyllava/TinyLLaVA-Phi-2-SigLIP-3.1B) | 80.1 | 62.1 | 73.0 | 60.3 | 37.5 | 87.2 | 1466.4 | 38.4 |
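
The test code referenced by the README's Usage section falls outside the edited hunks; only its last line appears in the second hunk header (`print('runing time:', genertaion_time)`). For context, below is a minimal sketch of how that test typically looks: loading the model with `trust_remote_code=True` and calling a `chat` helper from the repository's custom modeling code. The `model.chat(...)` signature, the example prompt, and the image URL are assumptions for illustration, not part of this commit.

```python
# Minimal sketch (assumptions noted): load TinyLLaVA-Phi-2-SigLIP-3.1B and run one
# image-text query. The repository ships custom modeling code, hence trust_remote_code=True.
from transformers import AutoTokenizer, AutoModelForCausalLM

hf_path = 'tinyllava/TinyLLaVA-Phi-2-SigLIP-3.1B'
model = AutoModelForCausalLM.from_pretrained(hf_path, trust_remote_code=True)
model.cuda()
tokenizer = AutoTokenizer.from_pretrained(hf_path, use_fast=False)

prompt = "What are these?"  # hypothetical example prompt
image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # hypothetical example image

# Assumed helper from the remote code: returns the answer text and the generation time,
# matching the `genertaion_time` variable quoted in the hunk header above.
output_text, genertaion_time = model.chat(prompt=prompt, image=image_url, tokenizer=tokenizer)

print('model output:', output_text)
print('runing time:', genertaion_time)
```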