mradermacher committed
Commit b108be9
1 Parent(s): b669657

auto-patch README.md

Files changed (1)
  1. README.md +11 -1
README.md CHANGED
@@ -18,7 +18,7 @@ tags:
 static quants of https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf
 
 <!-- provided-files -->
-weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
+weighted/imatrix quants are available at https://huggingface.co/mradermacher/CodeLlama-7b-Instruct-hf-i1-GGUF
 ## Usage
 
 If you are unsure how to use GGUF files, refer to one of [TheBloke's
@@ -32,8 +32,18 @@ more details, including on how to concatenate multi-part files.
 | Link | Type | Size/GB | Notes |
 |:-----|:-----|--------:|:------|
 | [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-Instruct-hf-GGUF/resolve/main/CodeLlama-7b-Instruct-hf.Q2_K.gguf) | Q2_K | 2.6 | |
+| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-Instruct-hf-GGUF/resolve/main/CodeLlama-7b-Instruct-hf.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
 | [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-Instruct-hf-GGUF/resolve/main/CodeLlama-7b-Instruct-hf.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
+| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-Instruct-hf-GGUF/resolve/main/CodeLlama-7b-Instruct-hf.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
+| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-Instruct-hf-GGUF/resolve/main/CodeLlama-7b-Instruct-hf.IQ3_M.gguf) | IQ3_M | 3.2 | |
+| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-Instruct-hf-GGUF/resolve/main/CodeLlama-7b-Instruct-hf.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
+| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-Instruct-hf-GGUF/resolve/main/CodeLlama-7b-Instruct-hf.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
+| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-Instruct-hf-GGUF/resolve/main/CodeLlama-7b-Instruct-hf.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
 | [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-Instruct-hf-GGUF/resolve/main/CodeLlama-7b-Instruct-hf.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-Instruct-hf-GGUF/resolve/main/CodeLlama-7b-Instruct-hf.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
+| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-Instruct-hf-GGUF/resolve/main/CodeLlama-7b-Instruct-hf.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
+| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-Instruct-hf-GGUF/resolve/main/CodeLlama-7b-Instruct-hf.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
+| [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-Instruct-hf-GGUF/resolve/main/CodeLlama-7b-Instruct-hf.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
 | [GGUF](https://huggingface.co/mradermacher/CodeLlama-7b-Instruct-hf-GGUF/resolve/main/CodeLlama-7b-Instruct-hf.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
 
 Here is a handy graph by ikawrakow comparing some lower-quality quant
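The quant files added in this commit are plain GGUF downloads. As an illustrative sketch only (not part of the commit or the README), here is one way to fetch the Q4_K_S file from the table above and run it locally; the `huggingface_hub` and `llama-cpp-python` packages and the prompt format are assumptions, not something the model card prescribes:

```python
# Sketch: download one of the quants listed above and run a short completion.
# Package choices (huggingface_hub, llama-cpp-python) are assumptions,
# not instructions from the README.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_S quant ("fast, recommended" in the table).
model_path = hf_hub_download(
    repo_id="mradermacher/CodeLlama-7b-Instruct-hf-GGUF",
    filename="CodeLlama-7b-Instruct-hf.Q4_K_S.gguf",
)

# Load the GGUF file and generate a completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("[INST] Write a Python function that reverses a string. [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```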