Update README.md
README.md
CHANGED
@@ -43,13 +43,31 @@ llava-llama-3-8b-hf is a LLaVA model fine-tuned from [meta-llama/Meta-Llama-3-8B
 
 ## QuickStart
 
-### Chat
-
-[lmdeploy](https://github.com/InternLM/lmdeploy) v0.4.0 will support the deployment of this model and it will be published in ~7 days. Please stay tuned:)
+### Chat with lmdeploy
+
+1. Installation
+
+```
+pip install lmdeploy>=0.4.0
+pip install git+https://github.com/haotian-liu/LLaVA.git
+```
+
+2. Run
+
+```python
+from lmdeploy import pipeline, ChatTemplateConfig
+from lmdeploy.vl import load_image
+pipe = pipeline('xtuner/llava-llama-3-8b-hf',
+                chat_template_config=ChatTemplateConfig(model_name='llama3'))
+
+image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
+response = pipe(('describe this image', image))
+print(response)
+```
+
+### Chat with CLI
+
+See [here](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-hf/discussions/1)!
 
 ## Citation
 
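One practical note on the installation command this commit adds: in POSIX shells, the unquoted `>` in `pip install lmdeploy>=0.4.0` is parsed as output redirection, so pip receives only `lmdeploy` (any version) and the shell creates a stray file named `=0.4.0`. A minimal sketch of the quoted form, using the same packages the diff installs:

```shell
# Quote the requirement so the shell passes the full version specifier to pip
# instead of treating ">" as an output redirect.
pip install "lmdeploy>=0.4.0"
pip install "git+https://github.com/haotian-liu/LLaVA.git"
```

Quoting is harmless when no shell operator is present, so it is a safe habit for any pinned requirement.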