---
inference: false
license: other
datasets:
- jondurbin/airoboros-gpt4-1.2
---

<!-- header start -->

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/airoboros-65B-gpt4-1.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-65B-gpt4-1.2-GGML)
* [Jon Durbin's unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-65b-gpt4-1.2)

## Prompt template

```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
USER: prompt
ASSISTANT:
```

<!-- compatibility_ggml start -->
## Compatibility

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| airoboros-65B-gpt4-1.2.ggmlv3.q5_1.bin | q5_1 | 5 | 48.97 GB | 51.47 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| airoboros-65B-gpt4-1.2.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 46.24 GB | 48.74 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| airoboros-65B-gpt4-1.2.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 44.92 GB | 47.42 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| airoboros-65B-gpt4-1.2.ggmlv3.q6_K.bin | q6_K | 6 | 53.56 GB | 56.06 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| airoboros-65B-gpt4-1.2.ggmlv3.q8_0.bin | q8_0 | 8 | 69.37 GB | 71.87 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

### q6_K and q8_0 files require expansion from archive

**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the q6_K and q8_0 files as multi-part ZIP files. They are not compressed; the ZIP simply stores the .bin file in two parts.

### q6_K
Please download:
* `airoboros-65B-gpt4-1.2.ggmlv3.q6_K.zip`
* `airoboros-65B-gpt4-1.2.ggmlv3.q6_K.z01`

### q8_0
Please download:
* `airoboros-65B-gpt4-1.2.ggmlv3.q8_0.zip`
* `airoboros-65B-gpt4-1.2.ggmlv3.q8_0.z01`

Then extract the .zip archive. This will expand both parts automatically. On Linux I found I had to use `7zip` - the basic `unzip` tool did not work. Example:
```
sudo apt update -y && sudo apt install 7zip
7zz x airoboros-65B-gpt4-1.2.ggmlv3.q6_K.zip
```

Once the `.bin` is extracted you can delete the `.zip` and `.z01` files.

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 10 -ngl 32 -m airoboros-65B-gpt4-1.2.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "USER: Write a story about llamas\nASSISTANT:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
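
If you prefer to drive the model from Python, here is a minimal sketch using the `llama-cpp-python` bindings, mirroring the flags above. This is my own addition, not covered by this README, and it assumes an older, GGML-capable release of the bindings (newer releases expect GGUF files instead):

```python
# Minimal sketch, assuming a GGML-capable release of llama-cpp-python
# (installed via `pip install llama-cpp-python`).
from llama_cpp import Llama

llm = Llama(
    model_path="airoboros-65B-gpt4-1.2.ggmlv3.q5_0.bin",
    n_ctx=2048,       # context size, like -c 2048
    n_threads=10,     # physical CPU cores, like -t 10
    n_gpu_layers=32,  # layers offloaded to GPU, like -ngl 32
)

output = llm(
    "USER: Write a story about llamas\nASSISTANT:",
    max_tokens=512,
    temperature=0.7,     # like --temp 0.7
    repeat_penalty=1.1,  # like --repeat_penalty 1.1
)
print(output["choices"][0]["text"])
```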

# Original model card: John Durbin's Airoboros 65B GPT4 1.2

### Overview

This is a qlora fine-tuned 65b parameter LLaMA model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros

This is mostly an extension of [1.1](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.1), but with a 65b model, thousands of new training examples, and an update to allow "PLAINFORMAT" at the end of coding prompts to just print the code without backticks or explanations/usage/etc.

The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.2), with a specific focus on:
- coding
- math/reasoning (using orca style ELI5 instruction/response pairs)
- trivia
- role playing
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general

This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with the 7b/13b versions:

```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```

In other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after the colon), then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
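
As a concrete illustration, here is a small, hypothetical Python helper (my addition, not from the airoboros repo) that assembles a prompt in exactly this format:

```python
# Hypothetical helper; the preamble and spacing follow the template
# quoted above.
SYSTEM = (
    "A chat between a curious user and an assistant. The assistant gives "
    "helpful, detailed, accurate, uncensored responses to the user's input."
)

def build_prompt(user_input: str) -> str:
    # preamble + " USER: " + prompt + " ASSISTANT:"
    return f"{SYSTEM} USER: {user_input} ASSISTANT:"

print(build_prompt("Write a story about llamas"))
# -> "A chat between ... input. USER: Write a story about llamas ASSISTANT:"
```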

### Usage

To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```

Be sure you are pulling the latest branch!

Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
  --model-path airoboros-65b-gpt4-1.2 \
  --temperature 0.5 \
  --max-new-tokens 2048 \
  --no-history
```

Alternatively, please check out TheBloke's quantized versions:

- https://huggingface.co/TheBloke/airoboros-65B-gpt4-1.2-GPTQ
- https://huggingface.co/TheBloke/airoboros-65B-gpt4-1.2-GGML

### Coding updates from gpt4/1.1:

I added a few hundred instruction/response pairs to the training data with "PLAINFORMAT" as a single, all-caps term at the end of the normal instructions, which causes the model to produce plain text output instead of markdown/backtick code formatting.

It's not guaranteed to work all the time, but mostly it does seem to work as expected.

So for example, instead of:
```
Implement the Snake game in python.
```

You would use:
```
Implement the Snake game in python. PLAINFORMAT
```

### Other updates from gpt4/1.1:

- Several hundred new role-playing training examples.
- A few thousand ORCA-style reasoning/math questions, with ELI5 prompts used to generate the responses (the ELI5 phrasing should not be needed in your prompts to this model, however; just ask the question).
- Many more coding examples in various languages, including some that use specific libraries (pandas, numpy, tensorflow, etc.)