Update README.md
README.md CHANGED
@@ -1,6 +1,7 @@
 ---
 base_model: Khetterman/DarkAtom-12B-v3
 pipeline_tag: text-generation
+library_name: transformers
 quantized_by: Khetterman
 tags:
 - mergekit
@@ -18,7 +19,7 @@ language:
 
 ---
 # DarkAtom-12B-v3 GGUF Quantizations 🗲
-
+>*Something that shouldn't exist*
 
 
 
@@ -43,4 +44,4 @@ For more information of the model, see the original model card: [Khetterman/Dark
 | Q6_K | [Khetterman/DarkAtom-12B-v3-Q6_K.gguf](https://huggingface.co/Khetterman/DarkAtom-12B-v3-GGUF/blob/main/DarkAtom-12B-v3-Q6_K.gguf) | 9.36 GiB |
 | Q8_0 | [Khetterman/DarkAtom-12B-v3-Q8_0.gguf](https://huggingface.co/Khetterman/DarkAtom-12B-v3-GGUF/blob/main/DarkAtom-12B-v3-Q8_0.gguf) | 12.1 GiB |
 
-Have a good time 🖤
+>My thanks to the authors of the original models, your work is incredible. Have a good time 🖤
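The table in the diff lists the GGUF files but not how to run them. Below is a minimal sketch that downloads the Q6_K quant named above and runs a short generation; the choice of huggingface_hub and llama-cpp-python, as well as the context and sampling settings, are assumptions and not something this README prescribes.

```python
# Minimal sketch (assumed tooling): download one of the quants listed above
# and run a short generation with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Repo and file name come from the Q6_K row of the table above.
gguf_path = hf_hub_download(
    repo_id="Khetterman/DarkAtom-12B-v3-GGUF",
    filename="DarkAtom-12B-v3-Q6_K.gguf",
)

# Context size and sampling values here are arbitrary illustrative defaults.
llm = Llama(model_path=gguf_path, n_ctx=4096)
result = llm("Write one sentence about dark matter.", max_tokens=64, temperature=0.8)
print(result["choices"][0]["text"])
```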