---
base_model: Undi95/Meta-Llama-3.1-8B-Claude
library_name: transformers
quantized_by: InferenceIllusionist
tags:
- iMat
- gguf
- llama3
---
# Meta-Llama-3.1-8B-Claude-iMat-GGUF
> [!TIP]
>
> 7/28 Update:
>
>* Reconverted using llama.cpp [b3479](https://github.com/ggerganov/llama.cpp/releases?page=1), which adds Llama 3.1 rope scaling factors to conversion and inference, improving results for context windows above 8192
>* Importance matrix re-calculated with the updated fp16 GGUF
>* If using Kobold.cpp, make sure you are on [v1.71.1](https://github.com/LostRuins/koboldcpp/releases/tag/v1.71.1) or later to take advantage of rope scaling
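As a quick loading sketch (not part of the original card): the snippet below opens one of these quants with llama-cpp-python at a context window above 8192, relying on the rope scaling factors noted in the update above. The file name, context size, and prompt are illustrative assumptions; the bindings only need to be built against llama.cpp b3479 or newer.

```python
# Illustrative only: the quant file name and settings are examples, not repo contents.
from llama_cpp import Llama

llm = Llama(
    model_path="Meta-Llama-3.1-8B-Claude-iMat-Q4_K_M.gguf",  # any quant from this repo
    n_ctx=16384,      # above 8192; usable because the GGUF carries the Llama 3.1 rope scaling factors
    n_gpu_layers=-1,  # offload all layers if built with GPU support; set to 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a one-paragraph summary of importance-matrix quantization."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```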
Quantized from Meta-Llama-3.1-8B-Claude fp16
* Weighted quantizations were created using the fp16 GGUF and groups_merged.txt in 88 chunks with n_ctx=512 (see the reproduction sketch after this list)
* Static fp16 will also be included in the repo
* For a brief rundown of iMatrix quant performance please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747)
* All quants are verified working prior to upload for your safety and convenience
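The following is a minimal reproduction sketch of the steps described above, driving the llama.cpp tools (b3479 or later) from Python. The file names and the Q4_K_M target are assumptions for illustration; only the calibration file, chunk count, and n_ctx=512 come from the settings listed above.

```python
# Reproduction sketch, not the exact script used for this repo: paths and quant type are examples.
import subprocess

FP16_GGUF = "Meta-Llama-3.1-8B-Claude-fp16.gguf"  # static fp16 GGUF included in the repo
CALIBRATION = "groups_merged.txt"                  # calibration text referenced above

# 1. Compute the importance matrix from the fp16 GGUF with n_ctx=512.
subprocess.run(
    ["./llama-imatrix", "-m", FP16_GGUF, "-f", CALIBRATION, "-c", "512", "-o", "imatrix.dat"],
    check=True,
)

# 2. Produce a weighted quant (Q4_K_M shown as an example) guided by the importance matrix.
subprocess.run(
    ["./llama-quantize", "--imatrix", "imatrix.dat", FP16_GGUF,
     "Meta-Llama-3.1-8B-Claude-iMat-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```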
**KL-Divergence Reference Chart**
(Click on image to view in full size)

[![KL-Divergence Reference Chart](https://i.imgur.com/mV0nYdA.png)](https://i.imgur.com/mV0nYdA.png)
Original model card can be found [here](https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude)