Update README.md
I added very basic templating support to llava-cli, which is triggered if you use `<image>` in a prompt.
Example: `-e -p "<|start_header_id|>user<|end_header_id|>\n\n<image>\nDescribe this image<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"`
If you do not use the template, llava-cli falls back to the llava-1.5-style SYSTEM and USER/ASSISTANT prompt, which will not yield good results with this model and can even make it output garbage in some cases.
Please verify the template looks fine: I have not included the empty system prompt, and I added a newline after the image.
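Conceptually, the templating splits the prompt at the `<image>` marker: the text before the marker is evaluated first, then the image embedding, then the text after it. A minimal sketch of that idea in Python (the helper name and the fallback behavior are illustrative assumptions, not llava-cli's actual code):

```python
def split_prompt(prompt: str, marker: str = "<image>"):
    """Split a prompt at the image marker, llava-cli-template style.

    Returns (text_before, text_after) so the caller can evaluate the
    segments around the image embedding. Returns None when the marker
    is absent, signalling a fall back to the llava-1.5-style
    USER/ASSISTANT prompt. Illustrative sketch only.
    """
    if marker not in prompt:
        return None
    before, _, after = prompt.partition(marker)
    return before, after


# The Llama-3-instruct template from the example above:
prompt = (
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "<image>\nDescribe this image<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
pre, post = split_prompt(prompt)
# pre ends with the user header, post starts with the newline after the image
```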
README.md (changed):

````diff
@@ -90,10 +90,10 @@ Note: llava-llama-3-8b-v1_1 uses the Llama-3-instruct chat template.
 
 ```bash
 # fp16
-./llava-cli -m ./llava-llama-3-8b-v1_1-f16.gguf --mmproj ./llava-llama-3-8b-v1_1-mmproj-f16.gguf --image YOUR_IMAGE.jpg -c 4096
+./llava-cli -m ./llava-llama-3-8b-v1_1-f16.gguf --mmproj ./llava-llama-3-8b-v1_1-mmproj-f16.gguf --image YOUR_IMAGE.jpg -c 4096 -e -p "<|start_header_id|>user<|end_header_id|>\n\n<image>\nDescribe this image<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
 
 # int4
-./llava-cli -m ./llava-llama-3-8b-v1_1-int4.gguf --mmproj ./llava-llama-3-8b-v1_1-mmproj-f16.gguf --image YOUR_IMAGE.jpg -c 4096
+./llava-cli -m ./llava-llama-3-8b-v1_1-int4.gguf --mmproj ./llava-llama-3-8b-v1_1-mmproj-f16.gguf --image YOUR_IMAGE.jpg -c 4096 -e -p "<|start_header_id|>user<|end_header_id|>\n\n<image>\nDescribe this image<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
 ```
 
 ### Reproduce
````