
Llama-3-Chinese-8B-GGUF

This repository contains Llama-3-Chinese-8B-GGUF, the quantized version of Llama-3-Chinese-8B, compatible with llama.cpp, Ollama, text-generation-webui (tgw), and similar tools.

Note: this is a foundation model; it is not suitable for conversation, QA, or other instruction-following use.

For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3
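
These GGUF files can be loaded directly by llama.cpp-based tools. As a minimal sketch, here is how one of the quantized files might be loaded with the llama-cpp-python bindings (the local filename and prompt are illustrative, not taken from this repository):

```python
# Minimal sketch: run a quantized GGUF file from this repo with
# llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

# The filename below is illustrative; point model_path at whichever
# quantization you downloaded (e.g. the Q4_K file).
llm = Llama(model_path="llama-3-chinese-8b-q4_k.gguf", n_ctx=2048)

# This is a foundation model, so use plain text completion
# rather than a chat template.
out = llm("人工智能的发展历史可以追溯到", max_tokens=64)
print(out["choices"][0]["text"])
```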

Performance

Metric: perplexity (PPL); lower is better. A short computation sketch follows the table.

Note: old models have been removed due to their inferior performance.

| Quant | Size | PPL (old model) | 👍🏻 PPL (new model) |
|-------|------|-----------------|---------------------|
| Q2_K | 2.96 GB | 17.7212 +/- 0.59814 | 11.8595 +/- 0.20061 |
| Q3_K | 3.74 GB | 8.6303 +/- 0.28481 | 5.7559 +/- 0.09152 |
| Q4_0 | 4.34 GB | 8.2513 +/- 0.27102 | 5.5495 +/- 0.08832 |
| Q4_K | 4.58 GB | 7.8897 +/- 0.25830 | 5.3126 +/- 0.08500 |
| Q5_0 | 5.21 GB | 7.7975 +/- 0.25639 | 5.2222 +/- 0.08317 |
| Q5_K | 5.34 GB | 7.7062 +/- 0.25218 | 5.1813 +/- 0.08264 |
| Q6_K | 6.14 GB | 7.6600 +/- 0.25043 | 5.1481 +/- 0.08205 |
| Q8_0 | 7.95 GB | 7.6512 +/- 0.25064 | 5.1350 +/- 0.08190 |
| F16 | 14.97 GB | 7.6389 +/- 0.25001 | 5.1302 +/- 0.08184 |
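
The PPL values above are the exponential of the mean negative log-likelihood per token (the standard definition, and the quantity that llama.cpp's perplexity tool reports with a +/- error). A minimal sketch of the computation, using made-up log-probabilities:

```python
import math

def perplexity(token_logprobs):
    """PPL = exp(mean negative log-likelihood per token)."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Illustrative values only, not drawn from the table above.
print(perplexity([-2.1, -0.7, -1.5, -0.9]))  # ~3.67
```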

Others

Model size: 8.03B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit