Update README.md

README.md
@@ -37,6 +37,8 @@ Thanks to its lightweight design, it can be deployed on edge devices
 
 * Vision Encoder: google/siglip-so400m-patch14-384
 
+* Notebook demo: [Ivy-VL-demo.ipynb](https://colab.research.google.com/drive/1D5_8sDRcP1HKlWtlqTH7s64xG8OH9NH0?usp=sharing)
+
 # Evaluation:
 
 ![evaluation.jpg](evaluation.jpg)
@@ -45,6 +47,7 @@ Most of the performance data comes from the VLMEvalKit leaderboard or
 
 # How to use:
 
+
 ```python
 # pip install git+https://github.com/LLaVA-VL/LLaVA-NeXT.git
 from llava.model.builder import load_pretrained_model
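The diff's code block ends at the `load_pretrained_model` import. For orientation, here is a hedged sketch of how that builder function is typically invoked in the LLaVA codebase; the checkpoint path and the `llava_qwen` model name below are illustrative assumptions, not taken from this README:

```python
# Hedged sketch: typical use of the LLaVA builder entry point imported above.
# The checkpoint path and model name are placeholders (assumptions), not
# values confirmed by this README.
try:
    from llava.model.builder import load_pretrained_model
except ImportError:
    # LLaVA-NeXT not installed; see:
    # pip install git+https://github.com/LLaVA-VL/LLaVA-NeXT.git
    load_pretrained_model = None


def load_ivy(model_path="path/to/Ivy-VL-checkpoint"):
    """Load a checkpoint via the LLaVA builder.

    Returns (tokenizer, model, image_processor, context_len), the tuple
    produced by load_pretrained_model(model_path, model_base, model_name).
    """
    if load_pretrained_model is None:
        raise RuntimeError("LLaVA-NeXT is not installed")
    # model_base=None loads a full (non-LoRA) checkpoint; "llava_qwen" is an
    # assumed model name for a Qwen-based LLaVA variant.
    return load_pretrained_model(model_path, None, "llava_qwen")


if __name__ == "__main__":
    tokenizer, model, image_processor, context_len = load_ivy()
```

The loading call is kept inside a function so the sketch imports cleanly even where LLaVA-NeXT or the checkpoint is unavailable.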