Commit f294a9e by aifeifei798 (parent: 0ce9724) — Upload README.md

README.md CHANGED
@@ -13,7 +13,8 @@ tags:
 # Special Thanks:
 - Lewdiculous's superb gguf version, thank you for your conscientious and responsible dedication.
 - https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF-IQ-Imatrix-Request
-
+- mradermacher's superb gguf version, thank you for your conscientious and responsible dedication.
+- https://huggingface.co/mradermacher/llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF
 # These are my own quantizations (updated almost daily).
 The difference from normal quantizations is that I quantize the output and embedding tensors to f16,
 and the other tensors to q5_k, q6_k, or q8_0.
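The scheme described in the README (output and embedding tensors kept at f16, remaining tensors at a lower quant type) can be sketched with llama.cpp's `llama-quantize` tool. A minimal example, assuming a local llama.cpp build and placeholder model filenames:

```shell
# Keep the output and token-embedding tensors at f16,
# quantize all other tensors to Q6_K.
# (model filenames below are placeholders)
./llama-quantize \
    --output-tensor-type f16 \
    --token-embedding-type f16 \
    model-f16.gguf model-q6_k.gguf Q6_K
```

The same command with `Q5_K` or `Q8_0` as the final argument covers the other variants mentioned above.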