Custom GGUF quants of Hermes-3-Llama-3.1-8B, with the output tensors quantized to Q8_0 and the embeddings kept at F32. Enjoy! 🧠🔥🚀

Update: this repo now also contains OF32.EF32 GGUF IQuants for even more accuracy. Enjoy! 😋
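As a rough sketch (not the exact commands used for this repo), quants with per-tensor-type overrides like these can be produced with llama.cpp's `llama-quantize` tool, which accepts `--output-tensor-type` and `--token-embedding-type` flags. File names below are illustrative:

```shell
# Sketch, assuming llama.cpp is built and a full-precision GGUF of the
# model already exists. File names here are placeholders, not the actual
# files in this repo.

# Quantize the bulk of the weights to Q4_K_M, while forcing the output
# tensor to Q8_0 and the token-embedding tensor to F32:
./llama-quantize \
  --output-tensor-type q8_0 \
  --token-embedding-type f32 \
  Hermes-3-Llama-3.1-8B-F16.gguf \
  Hermes-3-Llama-3.1-8B-Q4_K_M-OQ8_0-EF32.gguf \
  Q4_K_M
```

For IQ-type quants, `llama-quantize` is typically pointed at an importance matrix (generated beforehand with llama.cpp's `llama-imatrix` over a calibration text) via its `--imatrix` option.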