StarCycle committed
Commit 7c27d8b (1 parent: 0c69c7d)

Update README.md

Files changed (1): README.md (+6 -0)
README.md CHANGED
@@ -10,6 +10,12 @@ llava-dinov2-internlm2-7b-v1 is a LLaVA model fine-tuned from [InternLM2-Chat-7B
I did not carefully tune the training hyperparameters, but the model still shows the capability to solve some tasks. This demonstrates that a visual encoder can be integrated with an LLM even when the encoder is not aligned with natural language via contrastive learning, as CLIP is.
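As a rough sketch of what this integration looks like, a LLaVA-style connector simply projects the vision encoder's patch tokens into the LLM's token-embedding space with a small MLP. The dimensions and names below are illustrative assumptions (1024-d Dinov2 patch tokens, 4096-d InternLM2 embeddings), not the exact configuration of this checkpoint:

```python
import torch
import torch.nn as nn


class VisionLanguageConnector(nn.Module):
    """Projects vision-encoder patch tokens into the LLM embedding space."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # A LLaVA-1.5-style two-layer MLP projector (illustrative).
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (batch, num_patches, vision_dim) from the vision
        # encoder. The output lives in the LLM embedding space and is
        # concatenated with the text token embeddings before the LLM forward.
        return self.proj(patch_tokens)


# Illustrative shapes only: one image, 256 patch tokens.
image_tokens = torch.randn(1, 256, 1024)
print(VisionLanguageConnector()(image_tokens).shape)  # torch.Size([1, 256, 4096])
```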
+ ## Future development of Dinov2-based LLaVA
+ Using Dinov2 as the vision encoder of LLaVA may have some disadvantages. Unlike CLIP, Dinov2 is not pre-aligned with the language embedding space. Even if you use both CLIP and Dinov2 and mix their tokens, the benchmark performance is not very strong (see arXiv:2401.06209 and the following table from their paper).
+ ![Performance when mixing Dinov2 and CLIP tokens](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/jvAI58dKtuiNyFuCrYRhO.png)
+
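For context, the interleaved mixture-of-features from arXiv:2401.06209 alternates CLIP and Dinov2 patch tokens in spatial order after each stream has been projected to the LLM dimension. A minimal sketch of that interleaving, with a hypothetical function name and illustrative shapes:

```python
import torch


def interleave_tokens(clip_tokens: torch.Tensor,
                      dino_tokens: torch.Tensor) -> torch.Tensor:
    """Alternate CLIP and Dinov2 patch tokens in spatial order.

    Both inputs are assumed to be already projected to the LLM embedding
    dimension and to share the same patch grid:
        (batch, num_patches, llm_dim) -> (batch, 2 * num_patches, llm_dim)
    """
    b, n, d = clip_tokens.shape
    # Stack pairwise, then flatten: clip[0], dino[0], clip[1], dino[1], ...
    return torch.stack((clip_tokens, dino_tokens), dim=2).reshape(b, 2 * n, d)


# Illustrative shapes only.
clip = torch.randn(1, 256, 4096)
dino = torch.randn(1, 256, 4096)
print(interleave_tokens(clip, dino).shape)  # torch.Size([1, 512, 4096])
```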
+ If you have any ideas for improving it, please open an issue or just send an email to zhuohengli@foxmail.com. You are welcome!
+
  ## Example
  ![5bb2f23dd595d389e6a9a0aadebd87c.png](https://cdn-uploads.huggingface.co/production/uploads/642a298ae5f33939cf3ee600/iOFZOwLGfEByCQ_2EkR7y.png)
  Explain the photo in English: