Custom GGUF quants of Hermes-3-Llama-3.2-3B, where the output tensors are quantized to Q8_0 or upcast to F32, while the token embeddings are kept at F32. Enjoy! 🧠🔥🚀
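
For reference, a minimal sketch of how a quant with this layout could be produced using llama.cpp's `llama-quantize` tool and its `--output-tensor-type` / `--token-embedding-type` flags. The file names are placeholders, and this is not necessarily the exact recipe used for these files:

```python
# Sketch: build a GGUF quant whose output tensor is Q8_0 and whose token
# embeddings stay at F32, with Q4_K_M for the remaining tensors.
# File names below are placeholders, not the actual repo artifacts.
import subprocess

subprocess.run(
    [
        "./llama-quantize",
        "--output-tensor-type", "q8_0",        # output (lm_head) tensor precision
        "--token-embedding-type", "f32",       # token embedding tensor precision
        "Hermes-3-Llama-3.2-3B-F32.gguf",      # full-precision source GGUF (placeholder)
        "Hermes-3-Llama-3.2-3B-Q4_K_M.gguf",   # quantized output file (placeholder)
        "Q4_K_M",                              # base quant type for all other tensors
    ],
    check=True,
)
```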