---
base_model: Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1
pipeline_tag: text-generation
library_name: transformers
quantized_by: Khetterman
tags:
- mergekit
- merge
- llama
- llama-3
- llama-3.2
- 3b
- chat
- creative
- conversational
- not-for-all-audiences
language:
- en
- ru
---
# Llama-3.2-Kapusta-JapanChibi-3B-v1 GGUF Quantizations 🗲
>Please listen: I am small and useful.
>>I love this model; I don't understand Japanese, but it is also good in other languages.
![Kapusta-JapanChibi-Logo256.png](https://cdn-uploads.huggingface.co/production/uploads/673125091920e70ac26c8a2e/bD3Zv39dUVMQBEn1G8DTM.png)
This model was converted to GGUF format using [llama.cpp](https://github.com/ggerganov/llama.cpp).
For more information about the model, see the original model card: [Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1](https://huggingface.co/Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1).
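For local inference, a GGUF file can be loaded with, for example, the `llama-cpp-python` bindings (`pip install llama-cpp-python`). A minimal sketch, assuming one of the quantizations from the table below has already been downloaded next to the script; the `n_ctx` value is an arbitrary example:

```python
from llama_cpp import Llama

# Load a local GGUF quantization; n_ctx sets the context window.
llm = Llama(
    model_path="Llama-3.2-Kapusta-JapanChibi-3B-v1-Q4_0.gguf",
    n_ctx=4096,
)

# The model is chat-tuned, so use the chat completion API, which
# applies the chat template embedded in the GGUF metadata.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! Who are you?"}]
)
print(response["choices"][0]["message"]["content"])
```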
## Available Quantizations (◕‿◕)
| Type | Quantized GGUF Model | Size |
|--------|----------------------|------|
| Q4_0 | [Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1-Q4_0.gguf](https://huggingface.co/Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1-GGUF/blob/main/Llama-3.2-Kapusta-JapanChibi-3B-v1-Q4_0.gguf) | 1.99 GiB |
| Q6_K | [Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1-Q6_K.gguf](https://huggingface.co/Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1-GGUF/blob/main/Llama-3.2-Kapusta-JapanChibi-3B-v1-Q6_K.gguf) | 2.76 GiB |
| Q8_0 | [Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1-Q8_0.gguf](https://huggingface.co/Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1-GGUF/blob/main/Llama-3.2-Kapusta-JapanChibi-3B-v1-Q8_0.gguf) | 3.57 GiB |
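To fetch one of the files above programmatically, the `huggingface_hub` client can download a single quantization into the local cache. A minimal sketch; the Q4_0 file is chosen here purely as an example:

```python
from huggingface_hub import hf_hub_download

# Download one GGUF file from this repo; pick the filename from the
# table above. Returns the local path to the cached file.
model_path = hf_hub_download(
    repo_id="Khetterman/Llama-3.2-Kapusta-JapanChibi-3B-v1-GGUF",
    filename="Llama-3.2-Kapusta-JapanChibi-3B-v1-Q4_0.gguf",
)
print(model_path)
```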
>My thanks to the authors of the original models; your work is incredible. Have a good time 🤗