Custom GGUF quants of Hermes-3-Llama-3.1-8B, where the Output Tensors are quantized to Q8_0 while the Embeddings are kept at F32. Enjoy! 🧠πŸ”₯πŸš€
Update: This repo now also contains OF32.EF32 GGUF IQuants (output tensor and embeddings both kept at F32) for even closer fidelity to the original weights. Enjoy! 😋
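
For reference, below is a minimal sketch of how quants in this style can be produced with llama.cpp's `llama-quantize`, using its `--output-tensor-type` and `--token-embedding-type` overrides. The paths, file names, and choice of IQ4_XS as the base type are hypothetical placeholders for illustration, not the exact recipe used for this repo.

```python
import subprocess

# Hypothetical paths; adjust to your local llama.cpp build and source GGUF.
LLAMA_QUANTIZE = "./llama.cpp/llama-quantize"
SRC_GGUF = "Hermes-3-Llama-3.1-8B-F32.gguf"


def make_custom_quant(base_type: str, out_path: str,
                      output_tensor_type: str, embedding_type: str) -> None:
    """Run llama-quantize with per-tensor overrides for the output tensor
    and the token-embedding tensor, quantizing the rest to base_type."""
    cmd = [
        LLAMA_QUANTIZE,
        "--output-tensor-type", output_tensor_type,  # e.g. q8_0 or f32
        "--token-embedding-type", embedding_type,    # e.g. f32
        SRC_GGUF,
        out_path,
        base_type,                                   # quant type for remaining tensors
    ]
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    # Output tensor Q8_0, embeddings F32, remaining tensors IQ4_XS (placeholder base type).
    make_custom_quant("IQ4_XS",
                      "Hermes-3-Llama-3.1-8B-OQ8_0.EF32.IQ4_XS.gguf",
                      "q8_0", "f32")
    # OF32.EF32 variant: output tensor and embeddings both kept at F32.
    make_custom_quant("IQ4_XS",
                      "Hermes-3-Llama-3.1-8B-OF32.EF32.IQ4_XS.gguf",
                      "f32", "f32")
```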