Chinese-Alpaca-2-1.3B-GGUF

This repository contains the GGUF-v3 models (llama.cpp compatible) for Chinese-Alpaca-2-1.3B.
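
As a quick-start illustration, one of the quantized files in this repo can be downloaded and run with the llama-cpp-python bindings. This is a minimal sketch only: the GGUF filename passed to hf_hub_download is an assumption, so check the repository's file listing for the actual names and pick the quant level you want.

```python
# Minimal usage sketch (pip install llama-cpp-python huggingface_hub).
# The GGUF filename below is an assumption; substitute one of the actual
# files listed in this repository.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="hfl/chinese-alpaca-2-1.3b-gguf",
    filename="ggml-model-q4_k.gguf",  # hypothetical name; pick a real quant file
)

# Load the quantized model; n_ctx sets the context window size.
llm = Llama(model_path=model_path, n_ctx=4096)

# Chinese-Alpaca-2 follows the Llama-2 instruction format.
# Prompt: "Hello, please briefly introduce yourself."
out = llm("[INST] 你好，请简单介绍一下你自己。 [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```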

Performance

Metric: PPL, lower is better

| Quant | original | imatrix (-im) |
| ----- | -------- | ------------- |
| Q2_K | 19.9339 +/- 0.29752 | 18.8935 +/- 0.28558 |
| Q3_K | 17.2487 +/- 0.27668 | 17.2950 +/- 0.27994 |
| Q4_0 | 16.1358 +/- 0.25091 | - |
| Q4_K | 16.4583 +/- 0.26453 | 16.2688 +/- 0.26216 |
| Q5_0 | 15.9068 +/- 0.25545 | - |
| Q5_K | 15.7547 +/- 0.25207 | 16.0190 +/- 0.25782 |
| Q6_K | 15.8166 +/- 0.25359 | 15.7357 +/- 0.25210 |
| Q8_0 | 15.7972 +/- 0.25384 | - |
| F16 | 15.8098 +/- 0.25403 | - |

The models with the -im suffix are generated with an importance matrix, which generally (though not always) gives better performance.
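
For reference, the workflow behind the -im variants and the PPL numbers above can be reproduced with llama.cpp's own tooling. The sketch below assumes llama.cpp is built locally with its llama-imatrix, llama-quantize, and llama-perplexity binaries on PATH (older builds name them imatrix, quantize, and perplexity); all file names are placeholders.

```python
# Hypothetical reproduction sketch: collect an importance matrix, build an
# imatrix-weighted quant, and measure its perplexity with llama.cpp tools.
# Binary names and file names are assumptions; adjust for your local build.
import subprocess

F16_MODEL = "chinese-alpaca-2-1.3b-f16.gguf"  # placeholder F16 GGUF
CALIB_TEXT = "calibration.txt"                # representative calibration text
EVAL_TEXT = "eval.txt"                        # text used for the PPL measurement

# 1. Collect the importance matrix from the F16 model over the calibration data.
subprocess.run(["llama-imatrix", "-m", F16_MODEL, "-f", CALIB_TEXT,
                "-o", "imatrix.dat"], check=True)

# 2. Quantize to Q4_K, weighting the rounding by the importance matrix (-im variant).
subprocess.run(["llama-quantize", "--imatrix", "imatrix.dat",
                F16_MODEL, "chinese-alpaca-2-1.3b-q4_k-im.gguf", "Q4_K"], check=True)

# 3. Measure perplexity of the resulting quant (lower is better).
subprocess.run(["llama-perplexity", "-m", "chinese-alpaca-2-1.3b-q4_k-im.gguf",
                "-f", EVAL_TEXT], check=True)
```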

Others

For the Hugging Face version, please see: https://huggingface.co/hfl/chinese-alpaca-2-1.3b

Please refer to https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/ for more details.
