maddes8cht committed
Commit 784c095
Parent: 83afd4f

"Update README.md"

Files changed (1):
  1. README.md +36 -30
README.md CHANGED
@@ -1,50 +1,50 @@
  ---
- inference: false
- license: apache-2.0
- model_creator: tiiuae
- model_link: https://huggingface.co/tiiuae/falcon-40b-instruct
- model_name: Falcon 40B Instruct
- model_type: falcon
- pipeline_tag: text-generation
- quantized_by: maddes8cht
  datasets:
  - tiiuae/falcon-refinedweb
  language:
  - en
- tags:
- - falcon
  ---
- ![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)

  ## I am still building the structure of these descriptions.
- These will carry increasingly more content to help find the best models for a purpose.

- Tiiuae-Falcon 40B instruct is the original instruction following Falcon model from Tiiuae, converted to gguf format.

- ---
- # Falcon 40B Instruct - gguf
  - Model creator: [tiiuae](https://huggingface.co/tiiuae)
- - Original model: [Falcon 40b Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct)

- ---
- <details>
- <summary> Table of contents
- </summary>

- # Table of contents
- [Original Model Card](#original-model-card-by-tiiuae)

- </details>

- <details>
- <summary> Original Model Card by tiiuae
- </summary>
-
- # Original Model Card by tiiuae
- [***Link to original model card***](https://huggingface.co/tiiuae/falcon-40b-instruct)

  # ✨ Falcon-40B-Instruct

  **Falcon-40B-Instruct is a 40B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) and finetuned on a mixture of [Baize](https://github.com/project-baize/baize-chatbot). It is made available under the Apache 2.0 license.**
@@ -257,4 +257,10 @@ To cite the [Baize](https://github.com/project-baize/baize-chatbot) instruction
  Falcon-40B-Instruct is made available under the Apache 2.0 license.

  ## Contact
- falconllm@tii.ae
  ---
  datasets:
  - tiiuae/falcon-refinedweb
  language:
  - en
+ inference: false
+ license: apache-2.0
  ---
+ [![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]()
+
  ## I am still building the structure of these descriptions.
 
+ These will contain increasingly more content to help find the best models for a purpose.

+ # falcon-40b-instruct - GGUF
  - Model creator: [tiiuae](https://huggingface.co/tiiuae)
+ - Original model: [falcon-40b-instruct](https://huggingface.co/tiiuae/falcon-40b-instruct)

+ Tiiuae-Falcon 40B Instruct is the original instruction-following Falcon model from Tiiuae, converted to GGUF format.
+
+ # About GGUF format
+
+ `gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
+ A growing list of software uses it and can therefore run this model.
+ The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.
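The GGUF container mentioned above begins with a small fixed binary header (magic bytes, a format version, then tensor and metadata-entry counts). A minimal sketch of parsing it, assuming the version-2+ little-endian layout documented in the llama.cpp repository; the sample bytes here are synthetic, not taken from a real model file:

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed GGUF header: magic, version, tensor count, metadata KV count.
    Assumes the version >= 2 layout: all integers little-endian."""
    magic, version = struct.unpack_from("<4sI", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    tensor_count, metadata_kv_count = struct.unpack_from("<QQ", data, 8)
    return {"version": version, "tensors": tensor_count, "metadata_kv": metadata_kv_count}

# Synthetic 24-byte header for illustration only (not a real model file):
sample = struct.pack("<4sIQQ", b"GGUF", 2, 452, 19)
print(read_gguf_header(sample))  # {'version': 2, 'tensors': 452, 'metadata_kv': 19}
```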
+
+ # Quantization variants
+
+ A number of quantized files are available. How to choose the best one for you:

+ # Legacy quants

+ Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
+ Nevertheless, they are fully supported, as several circumstances can make certain models incompatible with the modern K-quants.
+ Falcon 7B models cannot be quantized to K-quants.

+ # K-quants

+ K-quants are based on the idea that quantizing different parts of the model affects quality in different ways. Quantizing some parts more and others less yields either a more capable model at the same file size, or a smaller file and lower memory load at comparable quality.
+ So, if possible, use K-quants.
+ With Q6_K you should find it hard to detect any quality difference from the original model; asking your model the same question twice may produce bigger differences than the quantization does.
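The selection advice above can be condensed into a tiny helper. This is an illustrative sketch only: `pick_quant`, its arguments, and the exact type lists are hypothetical (restricted to quant names mentioned above, with `Q8` written out as the usual `Q8_0`), not part of any library.

```python
# Quant types named in the text above; ordered roughly from smallest file
# to closest-to-original quality.
LEGACY = ["Q4_0", "Q4_1", "Q5_0", "Q5_1", "Q8_0"]
K_QUANTS = ["Q4_K", "Q5_K", "Q6_K"]

def pick_quant(supports_k_quants: bool, prefer_quality: bool) -> str:
    """Prefer K-quants when the model supports them; otherwise fall back
    to the legacy types (e.g. Falcon 7B models, per the note above)."""
    if supports_k_quants:
        return "Q6_K" if prefer_quality else "Q4_K"
    return "Q8_0" if prefer_quality else "Q4_0"

print(pick_quant(True, True))    # Q6_K: near-indistinguishable from the original
print(pick_quant(False, False))  # Q4_0: small legacy fallback
```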

+ # Original Model Card:
  # ✨ Falcon-40B-Instruct

  **Falcon-40B-Instruct is a 40B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) and finetuned on a mixture of [Baize](https://github.com/project-baize/baize-chatbot). It is made available under the Apache 2.0 license.**
 
  Falcon-40B-Instruct is made available under the Apache 2.0 license.

  ## Contact
+ falconllm@tii.ae
+
+ <center>
+ [![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io)
+ [![Stack Exchange](https://stackexchange.com/users/flair/26485911.png)](https://stackexchange.com/users/26485911)
+ [![GitHub](https://maddes8cht.github.io/assets/buttons/github-button.png)](https://github.com/maddes8cht)
+ [![HuggingFace](https://maddes8cht.github.io/assets/buttons/huggingface-button.png)](https://huggingface.co/maddes8cht)
+ [![Twitter](https://maddes8cht.github.io/assets/buttons/twitter-button.png)](https://twitter.com/maddes1966)
+ </center>