
Llama-3-Chinese-8B-Instruct-v3-GGUF

[👉👉👉 Chat with Llama-3-Chinese-8B-Instruct-v3 @ HF Space]

This repository contains Llama-3-Chinese-8B-Instruct-v3-GGUF, the quantized version of Llama-3-Chinese-8B-Instruct-v3, compatible with llama.cpp, Ollama, text-generation-webui, and similar tools.

Note: this is an instruction-tuned (chat) model, suitable for conversation, QA, etc.

For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3

Performance

Metric: PPL, lower is better
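PPL (perplexity) is the exponential of the average negative log-likelihood per token, so lower values mean the model assigns higher probability to the evaluation text. A minimal sketch of the computation (the per-token log-probabilities below are made-up illustrative values, not taken from this model):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood per token).

    token_logprobs: natural-log probabilities the model assigned
    to each token of the evaluation text.
    """
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Hypothetical log-probabilities for a 4-token sequence
logprobs = [-1.2, -0.8, -2.1, -0.5]
print(round(perplexity(logprobs), 4))
```

A model that assigned probability 1.0 to every token (log-prob 0) would reach the minimum perplexity of 1.0.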

Note: Unless constrained by memory, we suggest using Q8_0 or Q6_K for better performance.

| Quant | Size | PPL |
|-------|----------|---------------------|
| Q2_K | 2.96 GB | 10.0534 +/- 0.13135 |
| Q3_K | 3.74 GB | 6.3295 +/- 0.07816 |
| Q4_0 | 4.34 GB | 6.3200 +/- 0.07893 |
| Q4_K | 4.58 GB | 6.0042 +/- 0.07431 |
| Q5_0 | 5.21 GB | 6.0437 +/- 0.07526 |
| Q5_K | 5.34 GB | 5.9484 +/- 0.07399 |
| Q6_K | 6.14 GB | 5.9469 +/- 0.07404 |
| Q8_0 | 7.95 GB | 5.8933 +/- 0.07305 |
| F16 | 14.97 GB | 5.8902 +/- 0.07303 |
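The file sizes above scale roughly with the average bits stored per weight: size ≈ params × bits / 8. A rough sketch, assuming the 8.03B parameter count reported for this model and reading the table's GB as GiB; note that quant formats store block scales alongside the weights, so their effective bits-per-weight sit somewhat above the nominal bit width:

```python
PARAMS = 8.03e9  # parameter count reported for this model

def estimated_size_gib(bits_per_weight):
    """Rough GGUF file size in GiB: params * bits_per_weight / 8 bytes."""
    return PARAMS * bits_per_weight / 8 / 2**30

# F16 stores 16 bits per weight
print(f"F16  ~ {estimated_size_gib(16):.2f} GiB")   # close to the table's 14.97 GB

# Q8_0 is effectively ~8.5 bits/weight once block scales are counted
print(f"Q8_0 ~ {estimated_size_gib(8.5):.2f} GiB")  # close to the table's 7.95 GB
```

This back-of-the-envelope check is useful when deciding which quant fits a given amount of RAM or VRAM.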

Others

Format: GGUF
Model size: 8.03B params
Architecture: llama

