Zhang199 committed
Commit 7799ed6
1 Parent(s): 25e5e53

Create README.md

Files changed (1): README.md +23 -0

README.md ADDED
# TinyLLaVA

Here we introduce TinyLLaVA-Qwen2-0.5B-siglip-so400m-patch14-384-base, which was trained with the TinyLLaVA Factory codebase. For the LLM and the vision tower, we chose Qwen2-0.5B and siglip-so400m-patch14-384, respectively. The model was trained on the LLaVA dataset.

## Usage

Execute the following test code:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

hf_path = 'Zhang199/TinyLLaVA-Qwen2-0.5B-siglip-so400m-patch14-384-base'

# Load the model with its custom code (model.chat is provided by the remote code).
model = AutoModelForCausalLM.from_pretrained(hf_path, trust_remote_code=True)
model.cuda()
config = model.config

# Build the tokenizer with the settings stored in the model config.
tokenizer = AutoTokenizer.from_pretrained(
    hf_path,
    use_fast=False,
    model_max_length=config.tokenizer_model_max_length,
    padding_side=config.tokenizer_padding_side,
)

prompt = "What are these?"
image_url = "http://images.cocodataset.org/test-stuff2017/000000000001.jpg"
output_text, generation_time = model.chat(prompt=prompt, image=image_url, tokenizer=tokenizer)

print('model output:', output_text)
print('running time:', generation_time)
```
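If GPU memory is tight, the model can also be loaded in half precision. The sketch below uses the standard `torch_dtype` argument of `from_pretrained`; it assumes the TinyLLaVA remote code runs correctly in float16, which is not verified here.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

hf_path = 'Zhang199/TinyLLaVA-Qwen2-0.5B-siglip-so400m-patch14-384-base'

# Assumption: the remote TinyLLaVA code is numerically stable in float16;
# fall back to the default float32 loading above if outputs look degraded.
model = AutoModelForCausalLM.from_pretrained(
    hf_path,
    torch_dtype=torch.float16,
    trust_remote_code=True,
).cuda()

tokenizer = AutoTokenizer.from_pretrained(
    hf_path,
    use_fast=False,
    model_max_length=model.config.tokenizer_model_max_length,
    padding_side=model.config.tokenizer_padding_side,
)

output_text, generation_time = model.chat(
    prompt="What are these?",
    image="http://images.cocodataset.org/test-stuff2017/000000000001.jpg",
    tokenizer=tokenizer,
)
print('model output:', output_text)
```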
## Result