---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# Allen AI's Tulu 7B GGML

These files are GGML format model files for [Allen AI's Tulu 7B](https://huggingface.co/allenai/tulu-7b).

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
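
For example, here is a minimal sketch of loading one of these files from Python with llama-cpp-python. It is illustrative only: it assumes a GGML-era release of llama-cpp-python (later versions moved to the GGUF format), and it uses the q5_0 file and the same parameters as the llama.cpp command further down:

```python
# Illustrative sketch only: assumes a GGML-era llama-cpp-python release.
from llama_cpp import Llama

llm = Llama(
    model_path="tulu-7b.ggmlv3.q5_0.bin",  # any GGML file from this repo
    n_ctx=2048,       # context size, matching the -c 2048 example below
    n_threads=10,     # set to your number of physical CPU cores
    n_gpu_layers=32,  # layers to offload to GPU; 0 if you have no GPU acceleration
)

# Tulu's prompt template (see "Input Format" in the original model card below).
prompt = "<|user|>\nWrite a story about llamas\n<|assistant|>\n"

output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```
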
## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/tulu-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/tulu-7B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/tulu-7B-fp16)

<!-- compatibility_ggml start -->
## Compatibility

### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`

I have quantised the files for these 'original' quant methods using an older version of llama.cpp, so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.

They should be compatible with all current UIs and libraries that use llama.cpp, such as those listed at the top of this README.

### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`

These new quantisation methods are only compatible with llama.cpp as of June 6th, commit `2d43387`.

They will NOT be compatible with koboldcpp, text-generation-webui, and other UIs and libraries yet. Support is expected to come over the next few days.

## Explanation of the new k-quant methods

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference from the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
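
As a worked check of one of these figures (using llama.cpp's layout, in which each super-block also stores two 16-bit floats): a GGML_TYPE_Q4_K super-block covers 8 × 32 = 256 weights at 4 bits each (1024 bits), plus 8 six-bit scales and 8 six-bit mins (96 bits) and the two 16-bit super-block floats (32 bits), i.e. (1024 + 96 + 32) / 256 = 4.5 bpw.
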
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->

## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| tulu-7b.ggmlv3.q2_K.bin | q2_K | 2 | 2.80 GB | 5.30 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| tulu-7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.55 GB | 6.05 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| tulu-7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.23 GB | 5.73 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| tulu-7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.90 GB | 5.40 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
| tulu-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original llama.cpp quant method, 4-bit. |
| tulu-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| tulu-7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.05 GB | 6.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
| tulu-7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.79 GB | 6.29 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
| tulu-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage, and slower inference. |
| tulu-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
| tulu-7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.77 GB | 7.27 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
| tulu-7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.63 GB | 7.13 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
| tulu-7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q6_K (6-bit quantization) for all tensors. |
| tulu-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
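
If you would rather fetch a single file programmatically than clone the whole repo, a sketch along these lines should work, assuming the `huggingface_hub` Python library; pick any filename from the table above:

```python
# Sketch: download one quantised file from this repo via huggingface_hub.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/tulu-7B-GGML",
    filename="tulu-7b.ggmlv3.q5_0.bin",  # any entry from the table above
)
print(model_path)  # local path to pass to llama.cpp or one of the libraries above
```
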
## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 10 -ngl 32 -m tulu-7b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```

Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi.

Thank you to all my generous patrons and donors!

<!-- footer end -->
# Original model card: Allen AI's Tulu 7B

# Tulu 7B

This model is a 7B LLaMa model finetuned on a mixture of instruction datasets (FLAN V2, CoT, Dolly, Open Assistant 1, GPT4-Alpaca, Code-Alpaca, and ShareGPT).
*Please note this is a model diff - see below for usage instructions*.

This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).

This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).

## Usage

We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)

Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py` and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.

Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```

And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
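
The recovered model is a standard Hugging Face checkpoint, so it can then be loaded as usual. A minimal sketch, assuming the `transformers` library and a placeholder path standing in for your `--path_tuned` directory:

```python
# Sketch: load the recovered checkpoint (the --path_tuned directory) with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

output_path = "path/to/recovered/tulu-7b"  # hypothetical; use your --path_tuned value
tokenizer = AutoTokenizer.from_pretrained(output_path)
model = AutoModelForCausalLM.from_pretrained(output_path)
```
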
## Input Format

The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```

For best results, format all inputs in this manner.
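
To make the newline placement explicit, the template can be built with a small helper like this (`format_tulu_prompt` is a hypothetical name, not part of the codebase):

```python
def format_tulu_prompt(message: str) -> str:
    # A newline after <|user|>, after the message, and after <|assistant|>,
    # matching the template above.
    return f"<|user|>\n{message}\n<|assistant|>\n"

print(repr(format_tulu_prompt("Your message here!")))
# '<|user|>\nYour message here!\n<|assistant|>\n'
```
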
## Performance

Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):

| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|:-------:|
| 44.5 | 47.0 | 6.0 | 27.0 | 38.1 | 39.2 | 45.7 | 7.7 | 17.5 | 27.8 | 48.3 | 33.1 |

If you use this model, please cite our work, the LLaMA paper, and the original datasets:

```
@misc{wang2023far,
      title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
      author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
      year={2023},
      eprint={2306.04751},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```
@misc{touvron2023llama,
      title={LLaMA: Open and Efficient Foundation Language Models},
      author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
      year={2023},
      eprint={2302.13971},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```
@misc{dolly,
      author = {Databricks},
      title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
      year = {2023},
      howpublished = {Blog post},
      url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}
}
```

```
@article{longpre2023flan,
      title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
      author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
      journal={arXiv preprint arXiv:2301.13688},
      year={2023}
}
```

```
@misc{köpf2023openassistant,
      title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment},
      author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick},
      year={2023},
      eprint={2304.07327},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```
@article{peng2023instruction,
      title={Instruction Tuning with GPT-4},
      author={Peng, Baolin and Li, Chunyuan and He, Pengcheng and Galley, Michel and Gao, Jianfeng},
      journal={arXiv preprint arXiv:2304.03277},
      year={2023}
}
```

```
@misc{codealpaca,
      author = {Sahil Chaudhary},
      title = {Code Alpaca: An Instruction-following LLaMA model for code generation},
      year = {2023},
      publisher = {GitHub},
      journal = {GitHub repository},
      howpublished = {\url{https://github.com/sahil280114/codealpaca}}
}
```