kaiokendev committed
Commit 165eb11
Parent: fa51f55

Update README.md

Files changed (1)
  1. README.md +5 -0
README.md CHANGED
@@ -25,6 +25,8 @@ It uses a mixture of the following datasets:
  ### Merged Models
  #### 30B
  - GGML 30B 4-bit: [https://huggingface.co/gozfarb/llama-30b-supercot-ggml](https://huggingface.co/gozfarb/llama-30b-supercot-ggml)
+ - GGML 30B Q4_3: [https://huggingface.co/camelids/llama-33b-supercot-ggml-q4_3](https://huggingface.co/camelids/llama-33b-supercot-ggml-q4_3)
+ - GGML 30B Q5_1: [https://huggingface.co/camelids/llama-33b-supercot-ggml-q5_1](https://huggingface.co/camelids/llama-33b-supercot-ggml-q5_1)
  - 30B (unquantized): [https://huggingface.co/ausboss/llama-30b-supercot](https://huggingface.co/ausboss/llama-30b-supercot)
  - 30B 4-bit 128g CUDA: [https://huggingface.co/tsumeone/llama-30b-supercot-4bit-128g-cuda](https://huggingface.co/tsumeone/llama-30b-supercot-4bit-128g-cuda)
  - 30B 4-bit 128g TRITON: N/A
@@ -32,6 +34,9 @@ It uses a mixture of the following datasets:

  #### 13B
  - GGML 13B 4-bit: [https://huggingface.co/gozfarb/llama-13b-supercot-ggml](https://huggingface.co/gozfarb/llama-13b-supercot-ggml)
+ - GGML 13B Q4_3: [https://huggingface.co/camelids/llama-13b-supercot-ggml-q4_3](https://huggingface.co/camelids/llama-13b-supercot-ggml-q4_3)
+ - GGML 13B Q5_1: [https://huggingface.co/camelids/llama-13b-supercot-ggml-q5_1](https://huggingface.co/camelids/llama-13b-supercot-ggml-q5_1)
+ - GGML 13B Q8_0: [https://huggingface.co/camelids/llama-13b-supercot-ggml-q8_0](https://huggingface.co/camelids/llama-13b-supercot-ggml-q8_0)
  - 13B (unquantized): [https://huggingface.co/ausboss/llama-13b-supercot](https://huggingface.co/ausboss/llama-13b-supercot)
  - 13B 4-bit 128g CUDA: [https://huggingface.co/ausboss/llama-13b-supercot-4bit-128g](https://huggingface.co/ausboss/llama-13b-supercot-4bit-128g)
  - 13B 4-bit 128g TRITON: [https://huggingface.co/TheYuriLover/llama-13b-SuperCOT-4bit-TRITON](https://huggingface.co/TheYuriLover/llama-13b-SuperCOT-4bit-TRITON)
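
A minimal sketch of running one of the GGML files listed above with llama-cpp-python, assuming the file has already been downloaded and the installed build still reads the legacy GGML container (newer releases expect GGUF); the local path and Alpaca-style prompt are illustrative, not taken from this commit:

```python
from llama_cpp import Llama

# Load a quantized SuperCOT GGML file from local disk (path is illustrative).
llm = Llama(
    model_path="./llama-13b-supercot.q5_1.bin",
    n_ctx=2048,  # LLaMA-1 context window
)

# Simple instruction-style prompt; adjust to the template the model card recommends.
out = llm(
    "### Instruction:\nList three uses of chain-of-thought prompting.\n\n### Response:\n",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```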