---
base_model: Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1
pipeline_tag: text-generation
library_name: transformers
quantized_by: Khetterman
tags:
- mergekit
- merge
- llama
- llama-3
- llama-3.2
- 3b
- chat
- creative
- conversational
- not-for-all-audiences
language:
- en
- ru
---
Llama-3.2-Kapusta-JapanChibi-3B-v1 GGUF Quantizations
Please listen, I am small and useful.
I love this model. I don't understand Japanese myself, but it also performs well in other languages.
This model was converted to GGUF format using llama.cpp.
For more information about the model, see the original model card: Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1.
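These files can be used with llama.cpp itself or any llama.cpp-based runtime. As a minimal sketch only, the example below loads one of the quantized files listed in the next section with the llama-cpp-python bindings; the local filename, context size, and sampling settings are assumptions for illustration, not recommendations from the original author.

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python)
# and that the Q4_0 file from the table below has been downloaded next to this script.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.2-Kapusta-JapanChibi-3B-v1-Q4_0.gguf",  # assumed local filename
    n_ctx=4096,       # context window; lower it if memory is tight
    n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU-only
)

messages = [
    {"role": "system", "content": "You are a small, helpful assistant."},
    {"role": "user", "content": "Introduce yourself in one sentence."},
]
result = llm.create_chat_completion(messages=messages, max_tokens=128, temperature=0.8)
print(result["choices"][0]["message"]["content"])
```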
Available Quantizations (◕‿◕)
| Type | Quantized GGUF Model | Size |
|------|----------------------|------|
| Q4_0 | Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1-Q4_0.gguf | 1.99 GiB |
| Q6_K | Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1-Q6_K.gguf | 2.76 GiB |
| Q8_0 | Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1-Q8_0.gguf | 3.57 GiB |
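To fetch one of these files programmatically, huggingface_hub can be used. This is a sketch under stated assumptions: the repo id below is a placeholder and should be replaced with this repository's actual id, and the filename should match one of the files in the table above.

```python
# Sketch: download a single quant from the table above with huggingface_hub.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1",  # placeholder; use this GGUF repository's actual id
    filename="Llama-3.2-Kapusta-JapanChibi-3B-v1-Q4_0.gguf",  # or the Q6_K / Q8_0 file listed above
)
print(local_path)
```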
My thanks to the authors of the original models: your work is incredible. Have a good time 🤗