Update README.md
README.md
CHANGED
@@ -42,13 +42,30 @@ llava-llama-3-8b-v1_1-hf is a LLaVA model fine-tuned from [meta-llama/Meta-Llama

## QuickStart

-### Chat
+### Chat with lmdeploy
+
+1. Installation
+```
+pip install 'lmdeploy>=0.4.0'
+pip install git+https://github.com/haotian-liu/LLaVA.git
+```
+
+2. Run
+
+```python
+from lmdeploy import pipeline, ChatTemplateConfig
+from lmdeploy.vl import load_image
+pipe = pipeline('xtuner/llava-llama-3-8b-v1_1-hf',
+                chat_template_config=ChatTemplateConfig(model_name='llama3'))
+
+image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
+response = pipe(('describe this image', image))
+print(response)
+```
+
+### Chat with CLI
+
+See [here](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-hf/discussions/1)!


## Citation
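For reference, the lmdeploy pipeline used in the snippet above can also take a batch of (prompt, image) pairs and a `GenerationConfig` for sampling parameters. The sketch below is not part of this commit; it assumes the same `lmdeploy>=0.4.0` install, and the prompts and sampling values are purely illustrative.

```python
from lmdeploy import pipeline, ChatTemplateConfig, GenerationConfig
from lmdeploy.vl import load_image

# Same pipeline setup as in the README snippet above.
pipe = pipeline('xtuner/llava-llama-3-8b-v1_1-hf',
                chat_template_config=ChatTemplateConfig(model_name='llama3'))

image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')

# A batch is just a list of (prompt, image) pairs; both prompts here reuse
# the same example image from the README.
prompts = [
    ('describe this image', image),
    ('what animal is shown in this image?', image),
]

# Sampling parameters go through GenerationConfig; these values are arbitrary.
gen_config = GenerationConfig(max_new_tokens=512, top_p=0.8, temperature=0.7)

responses = pipe(prompts, gen_config=gen_config)
for r in responses:
    print(r.text)
```

With a single (prompt, image) tuple, as in the README snippet, `pipe` returns one response object rather than a list.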