---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# Falcon 7B-Instruct GGML

These files are GGML format model files for [Falcon 7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/falcon-7B-instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/falcon-7B-instruct-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/tiiuae/falcon-7b-instruct)

<!-- compatibility_ggml start -->
## Compatibility

### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`

I have quantised these 'original' quant method files using an older version of llama.cpp, so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.

These are guaranteed to be compatible with any UIs, tools and libraries released since late May.

### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`

These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.

They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
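
To sanity-check compatibility from Python, here is a minimal sketch using ctransformers. It assumes your installed ctransformers version supports Falcon-architecture GGML files; the file name comes from the Provided Files table below.

```python
# A minimal hedged sketch, assuming your ctransformers build supports Falcon GGML files.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/falcon-7B-instruct-GGML",
    model_file="falcon7b-instruct.ggmlv3.q4_0.bin",
    model_type="falcon",  # tells ctransformers which architecture to load
)
print(llm("### Instruction: Write a story about llamas\n### Response:"))
```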

## Explanation of the new k-quant methods

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.

Refer to the Provided Files table below to see what files use which methods, and how. A worked example of the bpw arithmetic follows this section.
<!-- compatibility_ggml end -->
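
As a worked check of the bpw figures above (a sketch, assuming each super-block also stores one fp16 scale, plus an fp16 min for the "type-1" methods):

```python
# Reproduce the stated bits-per-weight for two k-quant methods.
# Assumption: "type-1" super-blocks carry an fp16 scale + fp16 min (32 bits);
# "type-0" super-blocks carry an fp16 scale only (16 bits).
def bpw(weight_bits, n_weights, block_meta_bits, super_meta_bits):
    return (weight_bits * n_weights + block_meta_bits + super_meta_bits) / n_weights

# q4_K: 8 blocks x 32 weights = 256; 6-bit scale + 6-bit min per block
print(bpw(4, 256, 8 * (6 + 6), 32))  # -> 4.5
# q6_K: 16 blocks x 16 weights = 256; 8-bit scale per block
print(bpw(6, 256, 16 * 8, 16))       # -> 6.5625
```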

## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| falcon7b-instruct.ggmlv3.fp16.bin | fp16 | 16 | 14.44 GB | 16.94 GB | 16-bit. |
| falcon7b-instruct.ggmlv3.q4_0.bin | q4_0 | 4 | 4.06 GB | 6.56 GB | Original llama.cpp quant method, 4-bit. |
| falcon7b-instruct.ggmlv3.q4_1.bin | q4_1 | 4 | 4.51 GB | 7.01 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| falcon7b-instruct.ggmlv3.q5_0.bin | q5_0 | 5 | 4.96 GB | 7.46 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| falcon7b-instruct.ggmlv3.q5_1.bin | q5_1 | 5 | 5.41 GB | 7.91 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
| falcon7b-instruct.ggmlv3.q8_0.bin | q8_0 | 8 | 7.67 GB | 10.17 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
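
The file sizes above follow directly from the effective bits per weight of each method. A quick back-of-the-envelope check (the parameter count here is inferred from the 14.44 GB fp16 file, at 2 bytes per weight):

```python
# Rough sanity check of the file sizes in the table above.
n_params = 14.44e9 / 2  # fp16 stores 2 bytes per weight => ~7.22B parameters

# Each original quant method stores a 16-bit scale (plus a 16-bit min for
# the *_1 variants) per 32-weight block, giving these effective bpw values.
for name, bpw in [("q4_0", 4.5), ("q4_1", 5.0), ("q5_0", 5.5), ("q5_1", 6.0), ("q8_0", 8.5)]:
    print(f"{name}: ~{n_params * bpw / 8 / 1e9:.2f} GB")  # matches the table
```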

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 10 -ngl 32 -m falcon7b-instruct.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```

If you're able to use full GPU offloading, you should use `-t 1` to get the best performance.

If you are not able to fully offload to the GPU, use more cores: change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives the best performance.

Change `-ngl 32` to the number of layers to offload to the GPU. Remove it if you don't have GPU acceleration.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
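
If you prefer to drive the model from Python, here is a minimal sketch using llama-cpp-python (listed above), mirroring the command-line flags. Whether your installed build loads this particular GGML file is an assumption; check the library's release notes if it fails.

```python
# A minimal sketch mirroring the ./main flags above; support for this GGML
# file in your installed llama-cpp-python build is an assumption.
from llama_cpp import Llama

llm = Llama(
    model_path="falcon7b-instruct.ggmlv3.q5_0.bin",
    n_ctx=2048,       # -c 2048
    n_gpu_layers=32,  # -ngl 32; set to 0 without GPU acceleration
    n_threads=10,     # -t 10
)
output = llm(
    "### Instruction: Write a story about llamas\n### Response:",
    max_tokens=256,
    temperature=0.7,     # --temp 0.7
    repeat_penalty=1.1,  # --repeat_penalty 1.1
)
print(output["choices"][0]["text"])
```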

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: Mano Prime, Fen Risland, Derek Yates, Preetika Verma, webtim, Sean Connelly, Alps Aficionado, Karl Bernard, Junyu Yang, Nathan LeClaire, Chris McCloskey, Lone Striker, Asp the Wyvern, Eugene Pentland, Imad Khwaja, trip7s trip, WelcomeToTheClub, John Detwiler, Artur Olbinski, Khalefa Al-Ahmad, Trenton Dambrowitz, Talal Aujan, Kevin Schuppel, Luke Pendergrass, Pyrater, Joseph William Delisle, terasurfer, vamX, Gabriel Puliatti, David Flickinger, Jonathan Leane, Iucharbius, Luke, Deep Realms, Cory Kujawski, ya boyyy, Illia Dulskyi, senxiiz, Johann-Peter Hartmann, John Villwock, K, Ghost, Spiking Neurons AB, Nikolai Manek, Rainer Wilmers, Pierre Kircher, biorpg, Space Cruiser, Ai Maven, subjectnull, Willem Michiel, Ajan Kanaga, Kalila, chris gileta, Oscar Rangel.

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: Falcon 7B-Instruct


# ✨ Falcon-7B-Instruct

**Falcon-7B-Instruct is a 7B-parameter causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.**

*Paper coming soon 😊.*

🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blog post from HF](https://huggingface.co/blog/falcon)!

## Why use Falcon-7B-Instruct?

* **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).**
* **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).

💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).

🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother!

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,  # halves memory use versus float32
    trust_remote_code=True,      # Falcon ships its modelling code with the checkpoint
    device_map="auto",           # spread the model across available devices
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**

For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).

You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B-Instruct.

# Model Card for Falcon-7B-Instruct

## Model Details

### Model Description

- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0;
- **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).

### Model Source

- **Paper:** *coming soon*.

## Uses

### Direct Use

Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.

### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend that users of Falcon-7B-Instruct develop guardrails and take appropriate precautions for any production use.

## How to Get Started with the Model

The `transformers` pipeline example shown earlier in this card applies unchanged here.

## Training Details

### Training Data

Falcon-7B-Instruct was finetuned on a 250M-token mixture of instruct/chat datasets.

| **Data source** | **Fraction** | **Tokens** | **Description** |
|--------------------|--------------|------------|-----------------------------------|
| [Baize](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat |
| [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct |
| [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct |
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl |

The data was tokenized with the [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b)/[Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
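
As a quick check (a hypothetical usage sketch; it downloads the tokenizer from the Hub), the tokenizer's vocabulary size should match the 65024 figure in the hyperparameter table further down:

```python
from transformers import AutoTokenizer

# Assumption: network access to the Hugging Face Hub.
tok = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
print(tok.vocab_size)  # expected: 65024, per the hyperparameter table
```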

## Evaluation

*Paper coming soon.*

See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.

Note that this model variant is not optimized for NLP benchmarks.

## Technical Specifications

For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).

### Model Architecture and Objective

Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).

The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:

* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.

A shape-level sketch of multiquery attention follows the table below.

| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
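
To make the multiquery difference concrete, here is a shape-level illustration (not TII's implementation): with `d_model = 4544` and `head_dim = 64` there are 71 query heads, but a single key/value head is shared across all of them, shrinking the KV cache by a factor of 71.

```python
import torch

# Shape-level sketch of multiquery attention; a short sequence is used for
# illustration (Falcon's trained sequence length is 2048). Causal masking omitted.
d_model, head_dim, seq = 4544, 64, 128
n_heads = d_model // head_dim  # 71 query heads

q = torch.randn(1, n_heads, seq, head_dim)  # one query projection per head
k = torch.randn(1, 1, seq, head_dim)        # a single shared key head
v = torch.randn(1, 1, seq, head_dim)        # a single shared value head

# The shared K/V broadcast across all 71 query heads:
scores = (q @ k.transpose(-1, -2)) / head_dim**0.5  # (1, 71, seq, seq)
out = scores.softmax(-1) @ v                        # (1, 71, seq, head_dim)
out = out.transpose(1, 2).reshape(1, seq, d_model)  # back to (1, seq, d_model)
```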

### Compute Infrastructure

#### Hardware

Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.

#### Software

Falcon-7B-Instruct was trained using a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).

## Citation

*Paper coming soon* 😊. In the meantime, you can use the following information to cite:
```
@article{falcon40b,
  title={{Falcon-40B}: an open large language model with state-of-the-art performance},
  author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
  year={2023}
}
```

To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).

```
@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype={arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
```

## License

Falcon-7B-Instruct is made available under the Apache 2.0 license.

## Contact

falconllm@tii.ae