Update README.md
README.md CHANGED
@@ -25,7 +25,7 @@ The model was quantized with GPTQ-for-LLaMA, without group size to reduce VRAM usage
 
 ## Prompt Format
 
-
+TBD
 
 ## Compatibility
 