jiangchengchengNLP committed "Update README.md" (commit 9f0f115, parent bd10718).
# Visual Language Model Based on Qwen and CLIP

This is a visual language multimodal model built on the Qwen series of language models and the CLIP visual encoder. It was trained for 10 epochs in total across the LLaVA pre-training dataset and nearly 800K instruction examples (the 150K instruction fine-tuning set plus the 665K mixed instruction fine-tuning set). Due to its small size, it can only perform simple question-answering tasks on images, and it currently supports English only.

## Training Details

- The model combines the visual encoder from `openai/clip-vit-base-patch32` with `qwen2.5-0.5B` as the language model, using a multi-layer perceptron (MLP) layer for alignment. The alignment layer was first trained separately for four epochs on the pre-training dataset, although no significant loss improvement was observed after the second epoch.
- It was then trained for three epochs on the 150K LLaVA instruction fine-tuning dataset, with a token length of 1024 in the first epoch and 2048 in the second and third. The visual encoder was frozen during training, so only the alignment layer and the language model were updated.
- Finally, it underwent three epochs of training on the 665K LLaVA instruction dataset, with a constant token length of 2048 across all epochs, mirroring the setup for the 150K instruction fine-tuning dataset. The visual encoder remained frozen throughout.
- Model hallucinations still occur: a model this small struggles to fit such a large dataset, so its answer accuracy cannot be compared to that of the full LLaVA model. However, as a small visual language model trained from scratch, it demonstrates the powerful multimodal learning capability of transformers in visual language interactions. I will publish all of my training code and model files for researchers interested in visual language models.
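The architecture described above can be sketched in a few lines of PyTorch. This is a hypothetical illustration, not the actual code in `qwenva.py`: the hidden sizes come from the public configs of `openai/clip-vit-base-patch32` (768) and `Qwen2.5-0.5B` (896), and the exact projector layout may differ.

```python
import torch
import torch.nn as nn

clip_hidden = 768   # hidden size of openai/clip-vit-base-patch32
qwen_hidden = 896   # hidden size of Qwen2.5-0.5B

# Hypothetical MLP alignment layer projecting CLIP patch features
# into the language model's embedding space.
projector = nn.Sequential(
    nn.Linear(clip_hidden, qwen_hidden),
    nn.GELU(),
    nn.Linear(qwen_hidden, qwen_hidden),
)

# A 224x224 image with 32x32 patches yields 49 patch tokens (+1 CLS = 50).
vision_features = torch.randn(1, 50, clip_hidden)
aligned = projector(vision_features)
print(aligned.shape)  # torch.Size([1, 50, 896])

# During instruction fine-tuning the visual encoder stays frozen; the same
# pattern would be applied to the CLIP tower (a stand-in module here).
vision_encoder = nn.Linear(clip_hidden, clip_hidden)
for p in vision_encoder.parameters():
    p.requires_grad_(False)
```

The aligned features can then be spliced into the language model's input embeddings at the image placeholder position.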
### Training Resource Consumption

- Training consumed roughly 67 hours on a single NVIDIA H20 GPU (for reference only).

### Uploading Issues

I attempted to upload the model with Hugging Face's PyTorch classes, but found that they did not adequately record all of my weights, which led to issues during model inference. It is therefore recommended to load the model directly with PyTorch.
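Loading directly with PyTorch is the usual `state_dict` round trip. A minimal self-contained sketch, using a tiny stand-in module rather than the actual qwenva model (the repository presumably loads `qwenva.pth` the same way, just with the real architecture):

```python
import torch
import torch.nn as nn

# Save and reload weights with plain PyTorch; this round trip is the
# pattern recommended above. The Linear layer is only a stand-in.
model = nn.Linear(4, 2)
torch.save(model.state_dict(), "demo.pth")

restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load("demo.pth", map_location="cpu"))
print(torch.equal(model.weight, restored.weight))  # True
```

`map_location="cpu"` makes the load work on machines without a GPU; the tensors can be moved to the target device afterwards.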
If you do not have an image, you can download one from the repository; it is a small bird with red and black feathers.

### Loading Instructions

Below are the steps to load the model using PyTorch:

1. Download the `qwenva.py` file and the `qwenva.pth` weights from the repository, ensuring that both the weight and model architecture files are in the same directory.
2. Import the model and processor from the `qwenva` file:

```python
from qwenva import model, processor
from PIL import Image
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the sample image and build the model inputs.
image = Image.open("./bird.jpeg")
input_ = processor("please identify the animal in the image", image)
input_ = {k: v.to(device) for k, v in input_.items()}
model.to(device)

# Position of the last prompt token (the image placeholder index).
image_idx = torch.tensor(input_['input_ids'].shape[1] - 1).unsqueeze(0)

generated_ids = model.generate(
    **input_,
    max_length=512,
)

# Decode only the newly generated tokens, skipping the prompt.
generated_ids = generated_ids[0][input_['input_ids'].size(1):]
response = processor.tokenizer.decode(generated_ids, skip_special_tokens=True)
print(response)
```

Example output:

```
"The bird is perched on a branch, possibly a branch, with its head partially visible."
```
|