---
base_model: CohereForAI/c4ai-command-r-plus-08-2024
extra_gated_prompt: "By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license)\
  \ and acknowledge that the information you provide will be collected, used, and\
  \ shared in accordance with Cohere\u2019s [Privacy Policy](https://cohere.com/privacy)."
inference: false
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
library_name: gguf
license: cc-by-nc-4.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---

# c4ai-command-r-plus-08-2024-IMat-GGUF
_Llama.cpp imatrix quantization of CohereForAI/c4ai-command-r-plus-08-2024_

Original Model: [CohereForAI/c4ai-command-r-plus-08-2024](https://huggingface.co/CohereForAI/c4ai-command-r-plus-08-2024)
Original dtype: `FP16` (`float16`)
Quantized with: llama.cpp [b3645](https://github.com/ggerganov/llama.cpp/releases/tag/b3645)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)
- [Files](#files)
  - [IMatrix](#imatrix)
  - [Common Quants](#common-quants)
  - [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
  - [Simple chat template](#simple-chat-template)
  - [Chat template with system prompt](#chat-template-with-system-prompt)
  - [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
  - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
  - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)

---

## Files

### IMatrix
Status: ⏳ Processing
Link: [here](https://huggingface.co/legraphista/c4ai-command-r-plus-08-2024-IMat-GGUF/blob/main/imatrix.dat)

### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| c4ai-command-r-plus-08-2024.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| c4ai-command-r-plus-08-2024.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| c4ai-command-r-plus-08-2024.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |

### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| c4ai-command-r-plus-08-2024.FP16 | F16 | - | ⏳ Processing | ⚪ Static | - |
| c4ai-command-r-plus-08-2024.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| c4ai-command-r-plus-08-2024.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| c4ai-command-r-plus-08-2024.Q5_K | Q5_K | - | ⏳ Processing | ⚪ Static | - |
| c4ai-command-r-plus-08-2024.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ Static | - |
| c4ai-command-r-plus-08-2024.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.Q4_K_S | Q4_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.IQ4_NL | IQ4_NL | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.IQ4_XS | IQ4_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.Q3_K_L | Q3_K_L | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.Q3_K_S | Q3_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.IQ3_M | IQ3_M | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 IMatrix | - |
| c4ai-command-r-plus-08-2024.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 IMatrix | - |

## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/c4ai-command-r-plus-08-2024-IMat-GGUF --include "c4ai-command-r-plus-08-2024.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/c4ai-command-r-plus-08-2024-IMat-GGUF --include "c4ai-command-r-plus-08-2024.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```
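If you prefer the Python API, the same file can be fetched with `huggingface_hub` directly; a minimal sketch using the same repo and filename as the CLI examples above:
```python
# Minimal sketch: download one quant via the huggingface_hub Python API
# instead of the CLI; repo_id and filename match the commands above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="legraphista/c4ai-command-r-plus-08-2024-IMat-GGUF",
    filename="c4ai-command-r-plus-08-2024.Q8_0.gguf",
    local_dir=".",
)
print(path)  # local path of the downloaded GGUF
```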
---

## Inference

### Simple chat template
```
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{user_prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>{assistant_response}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{next_user_prompt}<|END_OF_TURN_TOKEN|>
```

### Chat template with system prompt
```
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{system_prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{user_prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>{assistant_response}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{next_user_prompt}<|END_OF_TURN_TOKEN|>
```

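To make the placeholders concrete, here is a small illustrative helper (not part of this card) that assembles a single-turn prompt from the templates above, leaving the final chatbot turn open so the model generates the reply:
```python
# Illustrative sketch: build a Command R+ style prompt string.
# Token strings are copied verbatim from the templates above.
def build_prompt(user_prompt: str, system_prompt: str | None = None) -> str:
    parts = ["<BOS_TOKEN>"]
    if system_prompt:
        parts.append(f"<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{system_prompt}<|END_OF_TURN_TOKEN|>")
    parts.append(f"<|START_OF_TURN_TOKEN|><|USER_TOKEN|>{user_prompt}<|END_OF_TURN_TOKEN|>")
    # Leave the chatbot turn open so generation continues as the assistant.
    parts.append("<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>")
    return "".join(parts)

print(build_prompt("Hello!", system_prompt="You are a helpful assistant."))
```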
### Llama.cpp
```
llama.cpp/main -m c4ai-command-r-plus-08-2024.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```

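Alternatively, a hedged sketch using the `llama-cpp-python` bindings (an assumption; this card only shows the llama.cpp CLI), pointed at the same Q8_0 file:
```python
# Sketch assuming the llama-cpp-python package (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(model_path="c4ai-command-r-plus-08-2024.Q8_0.gguf", n_ctx=4096)
out = llm(
    # Single-turn prompt following the chat template above.
    "<BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello!<|END_OF_TURN_TOKEN|>"
    "<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```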
---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), only the lower quantizations appear to benefit from the imatrix input (as per the hellaswag results).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
  - To get `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
  - Download the appropriate zip for your system from the latest release
  - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `c4ai-command-r-plus-08-2024.Q8_0`)
3. Run `gguf-split --merge c4ai-command-r-plus-08-2024.Q8_0/c4ai-command-r-plus-08-2024.Q8_0-00001-of-XXXXX.gguf c4ai-command-r-plus-08-2024.Q8_0.gguf` (see the sketch after this list)
  - Make sure to point `gguf-split` to the first chunk of the split.

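The same merge as a runnable sketch from Python (`XXXXX` stays a placeholder for the real chunk count, as above):
```python
# Sketch: invoke gguf-split exactly as in step 3; requires gguf-split on PATH.
import subprocess

subprocess.run(
    [
        "gguf-split", "--merge",
        "c4ai-command-r-plus-08-2024.Q8_0/c4ai-command-r-plus-08-2024.Q8_0-00001-of-XXXXX.gguf",
        "c4ai-command-r-plus-08-2024.Q8_0.gguf",
    ],
    check=True,  # raise if the merge fails
)
```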
---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!