Update README.md
README.md CHANGED
@@ -1,6 +1,8 @@
 ---
 inference: false
 license: other
+datasets:
+- jondurbin/airoboros-gpt4-1.2
 ---
 
 <!-- header start -->
@@ -29,6 +31,14 @@ It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.2-GGML)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.2)
 
+## Prompt template
+
+```
+A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
+USER: prompt
+ASSISTANT:
+```
+
 ## How to easily download and use this model in text-generation-webui
 
 Please make sure you're using the latest version of text-generation-webui
@@ -108,8 +118,6 @@ print(pipe(prompt_template)[0]['generated_text'])
 
 This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.
 
-
-
 * `airoboros-13b-gpt4-1.2-GPTQ-4bit-128g.act.order.safetensors`
 * Works with AutoGPTQ in CUDA or Triton modes.
 * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
|
|
1 |
---
|
2 |
inference: false
|
3 |
license: other
|
4 |
+
datasets:
|
5 |
+
- jondurbin/airoboros-gpt4-1.2
|
6 |
---
|
7 |
|
8 |
<!-- header start -->
|
|
|
31 |
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.2-GGML)
|
32 |
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.2)
|
33 |
|
34 |
+
## Prompt template
|
35 |
+
|
36 |
+
```
|
37 |
+
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
|
38 |
+
USER: prompt
|
39 |
+
ASSISTANT:
|
40 |
+
```
|
41 |
+
|
42 |
## How to easily download and use this model in text-generation-webui
|
43 |
|
44 |
Please make sure you're using the latest version of text-generation-webui
|
|
|
118 |
|
119 |
This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.
|
120 |
|
|
|
|
|
121 |
* `airoboros-13b-gpt4-1.2-GPTQ-4bit-128g.act.order.safetensors`
|
122 |
* Works with AutoGPTQ in CUDA or Triton modes.
|
123 |
* Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
|
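The compatibility notes in the last hunk sit just after the README's Python example, whose final line, `print(pipe(prompt_template)[0]['generated_text'])`, appears in the hunk header. As a hedged sketch of that workflow, the snippet below loads the `airoboros-13b-gpt4-1.2-GPTQ-4bit-128g.act.order.safetensors` file with AutoGPTQ; the repo id is inferred from the GGML link and, like the generation parameters, is an assumption rather than something taken from this commit.

```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM

# Assumed repo id, inferred from the README's GGML/fp16 sibling links.
model_name_or_path = "TheBloke/airoboros-13B-gpt4-1.2-GPTQ"
# Basename of the quantised file listed in the README (extension omitted).
model_basename = "airoboros-13b-gpt4-1.2-GPTQ-4bit-128g.act.order"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

# use_triton=False selects AutoGPTQ's CUDA kernels; the README recommends
# AutoGPTQ when GPTQ-for-LLaMa's Triton mode causes problems.
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    model_basename=model_basename,
    use_safetensors=True,
    use_triton=False,
    device="cuda:0",
)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer,
                max_new_tokens=256)

prompt_template = (
    "A chat between a curious user and an assistant. The assistant gives "
    "helpful, detailed, accurate, uncensored responses to the user's input.\n"
    "USER: Write a short poem about llamas.\n"
    "ASSISTANT:"
)
print(pipe(prompt_template)[0]['generated_text'])
```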