Custom GGUF quants of Hermes-3-Llama-3.1-8B, where the Output Tensors are quantized to Q8_0 while the Embeddings are kept at F32. Enjoy! 🧠🔥🚀
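
For reference, a quant with this tensor layout can be produced with llama.cpp's `llama-quantize` tool, which accepts per-tensor-type overrides. This is only a rough sketch: the file names and the Q4_K_M base type below are illustrative placeholders, not the exact settings used for the files in this repo.

```python
# Minimal sketch of producing an "Output Q8_0 / Embeddings F32" quant with
# llama.cpp's llama-quantize. File names and the Q4_K_M base type are
# illustrative only, not the exact settings used for this repo.
import subprocess

subprocess.run(
    [
        "./llama-quantize",
        "--output-tensor-type", "q8_0",    # quantize the output tensor to Q8_0
        "--token-embedding-type", "f32",   # keep the token embeddings at F32
        "Hermes-3-Llama-3.1-8B-F32.gguf",  # full-precision source GGUF (assumed name)
        "Hermes-3-Llama-3.1-8B-Q4_K_M.gguf",
        "Q4_K_M",                          # base quant type for the remaining tensors
    ],
    check=True,
)
```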

Update: This repo now contains OF32.EF32 GGUF IQuants for even more accuracy. Enjoy! 😋

UPDATE: This repo now contains updated O.E.IQuants, quantized with a new F32 imatrix using llama.cpp version 4067 (54ef9cfc). That version of llama.cpp makes all K*Q mat_mul computations run in F32 instead of BF16 when FA (Flash Attention) is used. Combined with the earlier, equally impactful change that made all K*Q mat_muls compute in F32 (float32) precision on CUDA-enabled devices, this has compounded to further improve the O.E.IQuants, which made this update well worth pushing. Cheers!
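
As a rough illustration of that pipeline, an importance matrix can be computed from the F32 GGUF with llama.cpp's `llama-imatrix` tool and then passed to `llama-quantize`. The calibration file, all file names, and the IQ4_XS target type below are placeholders; the exact calibration data and the specific llama.cpp build (4067) used for this repo are not reproduced here.

```python
# Rough sketch of the F32-imatrix pipeline described above, using llama.cpp's
# llama-imatrix and llama-quantize tools. Calibration text and file names are
# placeholders, not the actual data used for this repo.
import subprocess

# 1) Compute an importance matrix from the full-precision F32 GGUF.
subprocess.run(
    [
        "./llama-imatrix",
        "-m", "Hermes-3-Llama-3.1-8B-F32.gguf",  # F32 source model (assumed name)
        "-f", "calibration.txt",                 # calibration corpus (placeholder)
        "-o", "imatrix-f32.dat",                 # resulting importance matrix
    ],
    check=True,
)

# 2) Quantize with that imatrix while keeping the output tensor at Q8_0 and
#    the embeddings at F32, matching the O.E.IQuant layout described above.
subprocess.run(
    [
        "./llama-quantize",
        "--imatrix", "imatrix-f32.dat",
        "--output-tensor-type", "q8_0",
        "--token-embedding-type", "f32",
        "Hermes-3-Llama-3.1-8B-F32.gguf",
        "Hermes-3-Llama-3.1-8B-IQ4_XS.gguf",
        "IQ4_XS",                                # illustrative IQuant target type
    ],
    check=True,
)
```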