---
license: apache-2.0
language:
- zh
- en
---
# Llama-3-Chinese-8B-Instruct-GGUF
## Warning: llama.cpp has [breaking changes to the Llama-3 pre-tokenizer](https://github.com/ggerganov/llama.cpp/pull/6920), which significantly affect performance. We will update the GGUF models in the next few hours.
<p align="center">
<a href="https://github.com/ymcui/Chinese-LLaMA-Alpaca-3"><img src="https://ymcui.com/images/chinese-llama-alpaca-3-banner.png" width="600"/></a>
</p>
This repository contains **Llama-3-Chinese-8B-Instruct-GGUF** (compatible with llama.cpp, ollama, tgw, etc.), the quantized version of [Llama-3-Chinese-8B-Instruct](https://huggingface.co/hfl/llama-3-chinese-8b-instruct).

**Note: this is an instruction (chat) model, which can be used for conversation, QA, etc.**

For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3
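
For a quick local test, below is a minimal sketch using the `llama-cpp-python` bindings. The GGUF filename and the `chat_format` setting are assumptions for illustration; replace the path with whichever quant file from this repository you downloaded.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF filename below is a placeholder -- point it at the quant file
# you downloaded from this repository (e.g. a Q4_K file).
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3-chinese-8b-instruct-q4_k.gguf",  # placeholder path
    n_ctx=4096,             # context window size
    chat_format="llama-3",  # apply the Llama-3 chat template
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "请简单介绍一下你自己。"},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

The same GGUF files can also be loaded directly by llama.cpp's command-line tools or served through ollama, as noted above.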
## Performance
Metric: PPL, lower is better
| Quant | Size | PPL | PPL (`-im`) |
| :---: | -------: | ------------------: | -----------------: |
| Q2_K | 2.96 GB | 10.3918 +/- 0.13288 | **9.1722 +/- 0.11502** |
| Q3_K | 3.74 GB | 6.3018 +/- 0.07849 | **6.1901 +/- 0.07734** |
| Q4_0 | 4.34 GB | 6.0628 +/- 0.07501 | **5.9623 +/- 0.07444** |
| Q4_K | 4.58 GB | 5.9066 +/- 0.07419 | **5.8847 +/- 0.07406** |
| Q5_0 | 5.21 GB | 5.8562 +/- 0.07355 | **5.8032 +/- 0.07287** |
| Q5_K | 5.34 GB | 5.8062 +/- 0.07331 | **5.8058 +/- 0.07329** |
| Q6_K | 6.14 GB | 5.7757 +/- 0.07298 | **5.7745 +/- 0.07287** |
| Q8_0 | 7.95 GB | 5.7626 +/- 0.07272 | 5.7626 +/- 0.07272 |
| F16 | 14.97 GB | 5.7628 +/- 0.07275 | N/A |
## Others
- For the full model, please see: https://huggingface.co/hfl/llama-3-chinese-8b-instruct
- For the LoRA-only model, please see: https://huggingface.co/hfl/llama-3-chinese-8b-instruct-lora
- If you have questions or issues regarding this model, please open an issue at https://github.com/ymcui/Chinese-LLaMA-Alpaca-3