InferenceIllusionist committed
Commit d3c70c7 (parent: 59e5718)

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -35,7 +35,7 @@ An initial foray into the world of fine-tuning. The goal of this release was to
 * [Quantized - Limited VRAM Option (197mb)](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO-GGUF/resolve/main/mistral-7b-mmproj-v1.5-Q4_1.gguf?download=true)
 * [Unquantized - Premium Option / Best Quality (596mb)](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO-GGUF/resolve/main/mmproj-model-f16.gguf?download=true)
 
-Select the gguf file of your choice in Kobold as usual, then make sure to choose the mmproj file above in the LLaVA mmproj field of the model submenu:
+Select the gguf file of your choice in [Koboldcpp](https://github.com/LostRuins/koboldcpp/releases/) as usual, then make sure to choose the mmproj file above in the LLaVA mmproj field of the model submenu:
 <img src="https://i.imgur.com/x8vqH29.png" width="425"/>
 
 ## Prompt Format
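
For readers who prefer a scripted setup over the GUI model submenu, the minimal sketch below mirrors the step described in the added line. It is an unofficial sketch, not the documented workflow: it assumes koboldcpp accepts `--model` and `--mmproj` command-line flags equivalent to the GUI fields, and the model filename `Excalibur-7b-DPO-Q4_K_M.gguf` is a hypothetical placeholder for whichever GGUF quant you actually download; only the repo id and the mmproj filename come from the links above.

```python
# Sketch: download the model and LLaVA projector, then launch koboldcpp.
# Assumptions (not confirmed by this README): koboldcpp exposes --model and
# --mmproj flags, and MODEL_FILENAME is a placeholder for your chosen quant.
import subprocess

from huggingface_hub import hf_hub_download

REPO_ID = "InferenceIllusionist/Excalibur-7b-DPO-GGUF"
MODEL_FILENAME = "Excalibur-7b-DPO-Q4_K_M.gguf"  # hypothetical; substitute the quant you use
MMPROJ_FILENAME = "mmproj-model-f16.gguf"        # unquantized mmproj linked above

# Fetch (or reuse cached copies of) the model GGUF and the mmproj file.
model_path = hf_hub_download(repo_id=REPO_ID, filename=MODEL_FILENAME)
mmproj_path = hf_hub_download(repo_id=REPO_ID, filename=MMPROJ_FILENAME)

# Launch koboldcpp; --mmproj plays the role of the "LLaVA mmproj" field
# shown in the screenshot.
subprocess.run(["python", "koboldcpp.py", "--model", model_path, "--mmproj", mmproj_path])
```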