---
base_model: Khetterman/DarkAtom-12B-v3
pipeline_tag: text-generation
library_name: transformers
quantized_by: Khetterman
tags:
- mergekit
- merge
- 12b
- chat
- creative
- roleplay
- conversational
- creative-writing
- not-for-all-audiences
language:
- en
- ru
---
# DarkAtom-12B-v3 GGUF Quantizations
*Something that shouldn't exist.*
This model was converted to GGUF format using llama.cpp.
For more information about the model, see the original model card: [Khetterman/DarkAtom-12B-v3](https://huggingface.co/Khetterman/DarkAtom-12B-v3).
## Available Quantizations
| Type | Quantized GGUF Model | Size |
|---|---|---|
| Q2_K | Khetterman/DarkAtom-12B-v3-Q2_K.gguf | 4.46 GiB |
| Q3_K_S | Khetterman/DarkAtom-12B-v3-Q3_K_S.gguf | 5.15 GiB |
| Q3_K_M | Khetterman/DarkAtom-12B-v3-Q3_K_M.gguf | 5.66 GiB |
| Q3_K_L | Khetterman/DarkAtom-12B-v3-Q3_K_L.gguf | 6.11 GiB |
| Q4_0 | Khetterman/DarkAtom-12B-v3-Q4_0.gguf | 6.58 GiB |
| Q4_K_S | Khetterman/DarkAtom-12B-v3-Q4_K_S.gguf | 6.63 GiB |
| Q4_K_M | Khetterman/DarkAtom-12B-v3-Q4_K_M.gguf | 6.96 GiB |
| Q4_1 | Khetterman/DarkAtom-12B-v3-Q4_1.gguf | 7.25 GiB |
| Q5_K_S | Khetterman/DarkAtom-12B-v3-Q5_K_S.gguf | 7.93 GiB |
| Q5_K_M | Khetterman/DarkAtom-12B-v3-Q5_K_M.gguf | 8.12 GiB |
| Q6_K | Khetterman/DarkAtom-12B-v3-Q6_K.gguf | 9.36 GiB |
| Q8_0 | Khetterman/DarkAtom-12B-v3-Q8_0.gguf | 12.1 GiB |
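As a rough sketch, a single quantization from the table above can be fetched and run locally with llama.cpp's CLI. The repository id `Khetterman/DarkAtom-12B-v3-GGUF` used below is an assumption about where these files are hosted; adjust it (and the filename) to match the actual repository:

```shell
# Download one quantization (Q4_K_M chosen here as a common size/quality trade-off).
# Repo id is an assumption; substitute the real one if it differs.
huggingface-cli download Khetterman/DarkAtom-12B-v3-GGUF \
  --include "DarkAtom-12B-v3-Q4_K_M.gguf" --local-dir .

# Start an interactive chat session with llama.cpp.
llama-cli -m DarkAtom-12B-v3-Q4_K_M.gguf -cnv -p "You are a helpful assistant."
```

Lower quantizations (Q2_K, Q3_K_*) trade quality for memory; Q8_0 is closest to the original weights but needs the most RAM/VRAM.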
My thanks to the authors of the original models; your work is incredible. Have a good time 🤗