Update README.md

They can be used from:

* The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers).
* A new fork of llama.cpp that introduced this Falcon GGML support: [cmp-nct/ggllm.cpp](https://github.com/cmp-nct/ggllm.cpp).
## Prompt template

```
"<|prompt|>prompt<|endoftext|>
<|answer|>"
```

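The template above can be applied with a small helper before passing text to the model; a minimal sketch (the function name and f-string formatting are illustrative, not part of the model card):

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the prompt template shown above."""
    # <|prompt|> opens the user turn, <|endoftext|> closes it,
    # and <|answer|> on the next line cues the model to reply.
    return f"<|prompt|>{user_message}<|endoftext|>\n<|answer|>"

print(build_prompt("What is GGML?"))
```

The quotation marks in the template block are display formatting only; they are not sent to the model.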
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2-GPTQ)