Update README.md
README.md CHANGED
@@ -1,6 +1,8 @@
 ---
 inference: false
 license: other
+datasets:
+- jondurbin/airoboros-gpt4-1.2
 ---
 
 <!-- header start -->
@@ -27,7 +29,15 @@ It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQi
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/airoboros-65B-gpt4-1.2-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-65B-gpt4-1.2-GGML)
-* [
+* [John Durbin's unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-65b-gpt4-1.2)
+
+## Prompt template
+
+```
+A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
+USER: prompt
+ASSISTANT:
+```
 
 ## How to easily download and use this model in text-generation-webui
 
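The prompt template added in this commit is the format the model expects at inference time. Below is a minimal, untested sketch of how that template might be used with AutoGPTQ in Python. The repo id matches the GPTQ link above; the generation settings, the example question, the `use_safetensors` flag, and the optional `model_basename` argument are illustrative assumptions, not statements from this model card.

```python
# Minimal sketch, assuming the auto-gptq and transformers packages are installed
# and a CUDA GPU is available. Generation settings and the example question are
# illustrative assumptions, not part of this model card.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "TheBloke/airoboros-65B-gpt4-1.2-GPTQ"  # GPTQ repo linked above

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)

# use_safetensors assumes the quantised weights are stored as .safetensors;
# depending on how the checkpoint file is named you may also need to pass
# model_basename=... here.
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device="cuda:0",
    use_safetensors=True,
)

# Build the prompt exactly as the template in the diff describes:
# system line, then "USER: <prompt>", then "ASSISTANT:".
prompt = (
    "A chat between a curious user and an assistant. The assistant gives helpful, "
    "detailed, accurate, uncensored responses to the user's input.\n"
    "USER: Write a short note on GPTQ quantisation.\n"
    "ASSISTANT:"
)

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda:0")
output_ids = model.generate(
    input_ids=input_ids,
    do_sample=True,
    temperature=0.7,
    max_new_tokens=256,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```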