Quantization Aware Training?
"Along with the raw checkpoints, we also provide
quantized versions of our models in different standard formats."
"Based
on the most popular open source quantization
inference engines (e.g. llama.cpp), we focus on
three weight representations: per-channel int4,
per-block int4, and switched fp8."
Hi Gemma team, are these specially provided weights suited to the Q4_0 quantization mix, the Q4_1 quantization mix, or other quantized tensor types (q4_K), e.g. the Q4_K_M quantization mix?
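To make concrete what I mean by the difference between those mixes, here is roughly how I picture the two 4-bit block layouts (a simplified sketch of my own, not llama.cpp's actual ggml code; the helper names are hypothetical and I'm assuming the usual block size of 32):

```python
# Simplified sketch of the two 4-bit block layouts in question.
# Not llama.cpp's real ggml code; helper names are hypothetical.
import numpy as np

BLOCK = 32  # llama.cpp quantizes weights in blocks (32 for the Q4 types, as I understand it)

def quantize_scale_only(block):
    """Q4_0-style: one scale per block, 4-bit values in [-8, 7]."""
    max_abs = np.abs(block).max()
    scale = max_abs / 7.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(block / scale), -8, 7)
    return q, scale                      # dequantize as q * scale

def quantize_scale_and_min(block):
    """Q4_1-style: a scale plus a per-block minimum, 4-bit values in [0, 15]."""
    lo, hi = block.min(), block.max()
    scale = (hi - lo) / 15.0 if hi > lo else 1.0
    q = np.clip(np.round((block - lo) / scale), 0, 15)
    return q, scale, lo                  # dequantize as q * scale + lo

w = np.random.randn(BLOCK).astype(np.float32)
q0, s0 = quantize_scale_only(w)
q1, s1, m1 = quantize_scale_and_min(w)
print("mean abs error, scale only     :", np.abs(w - q0 * s0).mean())
print("mean abs error, scale + minimum:", np.abs(w - (q1 * s1 + m1)).mean())
```

My (possibly wrong) assumption is that a checkpoint trained with QAT against one of these layouts loses the least accuracy when exported to the matching mix, hence the question about which one you targeted.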
From my understanding, some of these formats preserve more data than is strictly required in order to better approximate the original fp16 weights, so we would be better off choosing the most efficient one. Have the QAT models been benchmarked in the GGML formats? I know llama.cpp ships some great perplexity tests, but have you been able to evaluate with https://github.com/EleutherAI/lm-evaluation-harness?
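For reference, this is the kind of run I had in mind, assuming the harness's Python API (`simple_evaluate`) and its `gguf` backend, which as far as I can tell evaluates through an already-running llama.cpp-compatible server (I haven't verified the exact `model_args`):

```python
# Sketch of the eval I have in mind: assumes lm-evaluation-harness's Python API
# and its "gguf" backend, which (as I understand it) sends requests to a
# llama.cpp-compatible server already running locally with the quantized model loaded.
import lm_eval

results = lm_eval.simple_evaluate(
    model="gguf",                                 # backend that queries the local server
    model_args="base_url=http://localhost:8000",  # where the Q4_0 / Q4_1 / Q4_K_M model is served
    tasks=["hellaswag", "arc_challenge"],         # same task suite for every quantization mix
    num_fewshot=0,
)
print(results["results"])
```

Running that once per quantization mix, plus once on the unquantized checkpoint, would give the quality comparison I'm after.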
I am curious to know if there is a drop in quality, especially with the smallest models.