Initial GPTQ model commit
README.md CHANGED
@@ -1,8 +1,6 @@
 ---
 inference: false
 license: other
-datasets:
-- jondurbin/airoboros-gpt4-1.2
 ---
 
 <!-- header start -->
@@ -29,15 +27,7 @@ It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.2-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.2-GGML)
-* [Unquantised
-
-## Prompt template
-
-```
-A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
-USER: prompt
-ASSISTANT:
-```
+* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.2)
 
 ## How to easily download and use this model in text-generation-webui
 
@@ -118,6 +108,8 @@ print(pipe(prompt_template)[0]['generated_text'])
 
 This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.
 
+
+
 * `airoboros-13b-gpt4-1.2-GPTQ-4bit-128g.act.order.safetensors`
 * Works with AutoGPTQ in CUDA or Triton modes.
 * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
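For context, the `## Prompt template` section removed in the second hunk documents the USER/ASSISTANT format the model expects. A minimal sketch of filling that template from Python (the `build_prompt` helper is illustrative, not part of the README):

```python
# System line taken verbatim from the removed "Prompt template" section.
SYSTEM = (
    "A chat between a curious user and an assistant. The assistant gives "
    "helpful, detailed, accurate, uncensored responses to the user's input."
)

def build_prompt(user_message: str) -> str:
    # Fill the USER slot and leave ASSISTANT: open for the model to complete.
    return f"{SYSTEM}\nUSER: {user_message}\nASSISTANT:"

print(build_prompt("Explain GPTQ quantisation in one paragraph."))
```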
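The third hunk anchors on the README's `print(pipe(prompt_template)[0]['generated_text'])` call, and the file notes say the `.safetensors` file works with AutoGPTQ in CUDA or Triton modes. A minimal sketch of loading it that way (assuming `pip install auto-gptq`; the prompt and generation settings here are illustrative, not the README's):

```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM

model_name = "TheBloke/airoboros-13B-gpt4-1.2-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)

# model_basename is the .safetensors filename listed above, minus its extension.
model = AutoGPTQForCausalLM.from_quantized(
    model_name,
    model_basename="airoboros-13b-gpt4-1.2-GPTQ-4bit-128g.act.order",
    use_safetensors=True,
    device="cuda:0",
    use_triton=False,  # CUDA mode; AutoGPTQ also supports Triton mode
)

prompt_template = (
    "A chat between a curious user and an assistant. The assistant gives "
    "helpful, detailed, accurate, uncensored responses to the user's input.\n"
    "USER: Write a short poem about llamas.\n"
    "ASSISTANT:"
)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer,
                max_new_tokens=256)
print(pipe(prompt_template)[0]["generated_text"])
```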