Transformers
llama
TheBloke committed
Commit 4510045
1 Parent(s): afe99bc

Initial GGML model commit

Files changed (1)
  1. README.md +7 -7
README.md CHANGED
@@ -33,7 +33,7 @@ quantized_by: TheBloke
 
 This repo contains GGML format model files for [Kai Howard's PuddleJumper 13B](https://huggingface.co/totally-not-an-llm/PuddleJumper-13b).
 
-### Important note regarding GGML files
+### Important note regarding GGML files.
 
 The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.
 
@@ -95,17 +95,17 @@ Refer to the Provided Files table below to see what files use which methods, and
 | Name | Quant method | Bits | Size | Max RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
 | [puddlejumper-13b.ggmlv3.Q2_K.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q2_K.bin) | Q2_K | 2 | 5.74 GB | 8.24 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
-| [puddlejumper-13b.ggmlv3.Q3_K_L.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q3_K_L.bin) | Q3_K_L | 3 | 7.14 GB | 9.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
-| [puddlejumper-13b.ggmlv3.Q3_K_M.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q3_K_M.bin) | Q3_K_M | 3 | 6.53 GB | 9.03 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
 | [puddlejumper-13b.ggmlv3.Q3_K_S.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q3_K_S.bin) | Q3_K_S | 3 | 5.87 GB | 8.37 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
+| [puddlejumper-13b.ggmlv3.Q3_K_M.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q3_K_M.bin) | Q3_K_M | 3 | 6.53 GB | 9.03 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+| [puddlejumper-13b.ggmlv3.Q3_K_L.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q3_K_L.bin) | Q3_K_L | 3 | 7.14 GB | 9.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
 | [puddlejumper-13b.ggmlv3.Q4_0.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q4_0.bin) | Q4_0 | 4 | 7.32 GB | 9.82 GB | Original quant method, 4-bit. |
-| [puddlejumper-13b.ggmlv3.Q4_1.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q4_1.bin) | Q4_1 | 4 | 8.14 GB | 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models. |
-| [puddlejumper-13b.ggmlv3.Q4_K_M.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q4_K_M.bin) | Q4_K_M | 4 | 8.06 GB | 10.56 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
 | [puddlejumper-13b.ggmlv3.Q4_K_S.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q4_K_S.bin) | Q4_K_S | 4 | 7.56 GB | 10.06 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
+| [puddlejumper-13b.ggmlv3.Q4_K_M.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q4_K_M.bin) | Q4_K_M | 4 | 8.06 GB | 10.56 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
+| [puddlejumper-13b.ggmlv3.Q4_1.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q4_1.bin) | Q4_1 | 4 | 8.14 GB | 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models. |
 | [puddlejumper-13b.ggmlv3.Q5_0.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q5_0.bin) | Q5_0 | 5 | 8.95 GB | 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
-| [puddlejumper-13b.ggmlv3.Q5_1.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q5_1.bin) | Q5_1 | 5 | 9.76 GB | 12.26 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
-| [puddlejumper-13b.ggmlv3.Q5_K_M.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q5_K_M.bin) | Q5_K_M | 5 | 9.40 GB | 11.90 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
 | [puddlejumper-13b.ggmlv3.Q5_K_S.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q5_K_S.bin) | Q5_K_S | 5 | 9.14 GB | 11.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
+| [puddlejumper-13b.ggmlv3.Q5_K_M.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q5_K_M.bin) | Q5_K_M | 5 | 9.40 GB | 11.90 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
+| [puddlejumper-13b.ggmlv3.Q5_1.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q5_1.bin) | Q5_1 | 5 | 9.76 GB | 12.26 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
 | [puddlejumper-13b.ggmlv3.Q6_K.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q6_K.bin) | Q6_K | 6 | 10.83 GB | 13.33 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
 | [puddlejumper-13b.ggmlv3.Q8_0.bin](https://huggingface.co/TheBloke/PuddleJumper-13B-GGML/blob/main/puddlejumper-13b.ggmlv3.Q8_0.bin) | Q8_0 | 8 | 13.83 GB | 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
 
 
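A practical consequence of the deprecation note in the first hunk is that these GGML `.bin` files can be converted to GGUF instead of being re-downloaded. Below is a minimal sketch of driving llama.cpp's GGML-to-GGUF converter from Python; the script name matches August 2023 checkouts of llama.cpp, but the name and flags changed across revisions, so treat both as assumptions and confirm with `--help` in your checkout.

```python
# Hedged sketch: convert one of this repo's GGML v3 files to GGUF with
# llama.cpp's converter script. The script name and the --input/--output
# flags are assumptions based on August 2023 llama.cpp checkouts.
import subprocess

subprocess.run(
    [
        "python",
        "convert-llama-ggmlv3-to-gguf.py",                # run from the llama.cpp source tree
        "--input", "puddlejumper-13b.ggmlv3.Q4_K_M.bin",  # any file from the table above
        "--output", "puddlejumper-13b.Q4_K_M.gguf",
    ],
    check=True,  # raise CalledProcessError if the converter exits non-zero
)
```

The resulting GGUF file loads in llama.cpp builds newer than August 21st 2023, which is usually preferable to pinning a GGML-era build.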
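On reading the Provided Files table: each "Max RAM required" figure is the file size plus a flat 2.50 GB overhead estimate, assuming CPU-only inference with no GPU offloading, so the practical rule is to pick the largest quant whose total fits in memory. Running a GGML file from Python requires a GGML-era client, since current llama.cpp no longer loads the format; the sketch below uses the ctransformers library, which could still read llama GGML v3 files when this card was written. The prompt and generation settings are illustrative only.

```python
# Hedged sketch: run one of this repo's GGML quants with ctransformers,
# a client that still supported llama GGML v3 files in this era.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/PuddleJumper-13B-GGML",                 # this repository
    model_file="puddlejumper-13b.ggmlv3.Q4_K_M.bin",  # 8.06 GB file, ~10.56 GB RAM per the table
    model_type="llama",                               # PuddleJumper 13B is a Llama-family model
    gpu_layers=0,                                     # CPU only, matching the table's RAM figures
)

# The model object is callable: prompt in, completion out. A plain
# completion is used here; the model's chat template is out of scope.
print(llm("GGML and GGUF model files differ in that", max_new_tokens=64))
```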