Custom GGUF quants of arcee-ai's Llama-3.1-SuperNova-Lite-8B, where the output tensors are quantized to Q8_0 while the embeddings are kept at F32. Enjoy!
UPDATE: This repo now contains updated O.E.IQuants, quantized with a new F32 imatrix using llama.cpp version 4067 (54ef9cfc). That version made all KQ mat_mul computations run in F32 instead of BF16 when Flash Attention (FA) is enabled. Combined with the earlier, very impactful change that computes all KQ mat_muls in F32 (float32) precision on CUDA-enabled devices, this further improved the O.E.IQuants and made this update well worth pushing. Cheers!
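The output-tensor and embedding overrides described above can be reproduced with llama.cpp's `llama-imatrix` and `llama-quantize` tools. The flags shown below exist in llama.cpp, but the file names, the calibration text, and the imatrix path are placeholders, and the exact invocation used for this repo is an assumption; note also that the IQ4_K quant type in the repo name may require the ik_llama.cpp fork (mainline llama.cpp offers e.g. IQ4_XS instead).

```shell
# Sketch: build a quant with the output tensor held at Q8_0 and token
# embeddings at F32, guided by a precomputed importance matrix.
# All file names below are placeholders, not the author's actual paths.

# 1) Generate an imatrix from an F32 GGUF and a calibration text file.
./llama-imatrix -m Llama-3.1-SuperNova-Lite-8B-F32.gguf \
    -f calibration.txt \
    -o supernova-lite.imatrix

# 2) Quantize, overriding the per-tensor types for output and embeddings.
./llama-quantize --imatrix supernova-lite.imatrix \
    --output-tensor-type q8_0 \
    --token-embedding-type f32 \
    Llama-3.1-SuperNova-Lite-8B-F32.gguf \
    Llama-3.1-SuperNova-Lite-8B-OQ8_0-F32.EF32.IQ4_K-Q8_0.gguf IQ4_K
```

Starting from an F32 GGUF for both the imatrix run and the quantization pass matches the F32-imatrix workflow described in the update note above.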
Collection: Joseph717171/Llama-3.1-SuperNova-Lite-8.0B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF belongs to a collection of custom GGUF quants of Llama-3.1-8B-Instruct fine-tunes, where the output tensors are quantized to Q8_0 while the embeddings are kept at F32 (3 items).