apepkuss79 committed
Commit: 8a119d1
Parent(s): 8262ebc
Update README.md
README.md CHANGED

@@ -31,7 +31,7 @@ language:
 
 - Prompt template
 
-  - Prompt type: `
+  - Prompt type: `minicpmv`
 
 - Prompt string
 
@@ -52,21 +52,14 @@ language:
 - Run as LlamaEdge service
 
   ```bash
-  wasmedge --dir .:.
+  wasmedge --dir .:. \
+    --nn-preload default:GGML:AUTO:MiniCPM-V-2_6-Q5_K_M.gguf \
     llama-api-server.wasm \
-    --prompt-template
+    --prompt-template minicpmv \
     --ctx-size 128000 \
-    --
-
-- Run as LlamaEdge command app
-
-  ```bash
-  wasmedge --dir .:. --nn-preload default:GGML:AUTO:MiniCPM-V-2_6-Q5_K_M.gguf \
-    llama-chat.wasm \
-    --prompt-template phi-3-chat \
-    --ctx-size 128000
-  ``` -->
+    --llava-mmproj mmproj-model-f16.gguf \
+    --model-name minicpmv-26
+  ```-->
 
 ## Quantized GGUF Models
 
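For reference, the service invocation after this commit, assembled from the added lines of the diff (all flag names and file names are taken directly from the diff; the command assumes `llama-api-server.wasm`, the quantized model, and the mmproj file sit in the current directory, with WasmEdge and its GGML plugin installed):

```shell
# Start the LlamaEdge API server with MiniCPM-V-2.6.
# --nn-preload registers the quantized GGUF model under the name "default";
# --llava-mmproj supplies the vision projector needed for image input.
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:MiniCPM-V-2_6-Q5_K_M.gguf \
  llama-api-server.wasm \
  --prompt-template minicpmv \
  --ctx-size 128000 \
  --llava-mmproj mmproj-model-f16.gguf \
  --model-name minicpmv-26
```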