---
base_model: Undi95/Meta-Llama-3.1-8B-Claude
library_name: transformers
quantized_by: InferenceIllusionist
tags:
  - iMat
  - gguf
  - llama3
---

# Meta-Llama-3.1-8B-Claude-iMat-GGUF

Quantized from Meta-Llama-3.1-8B-Claude fp16

* Weighted quantizations were created using the fp16 GGUF and groups_merged.txt in 88 chunks with n_ctx=512
* The static fp16 is also included in this repo
* For a brief rundown of iMatrix quant performance, please see this PR
* All quants are verified working prior to upload for your safety and convenience
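
The importance-matrix workflow above can be sketched with llama.cpp's CLI tools. This is a hedged example, not the exact commands used here: the binary names (`llama-imatrix`, `llama-quantize`), file names, and the `IQ4_XS` target are illustrative assumptions; only the calibration file (groups_merged.txt) and context size (512) come from the bullets above.

```shell
# Sketch of an iMatrix quantization pass with llama.cpp (assumed file names).
# 1. Compute the importance matrix from the fp16 GGUF using the
#    groups_merged.txt calibration data at n_ctx=512.
./llama-imatrix \
  -m Meta-Llama-3.1-8B-Claude-f16.gguf \
  -f groups_merged.txt \
  -c 512 \
  -o imatrix.dat

# 2. Produce a weighted quant (IQ4_XS chosen only as an example type),
#    guided by the importance matrix from step 1.
./llama-quantize \
  --imatrix imatrix.dat \
  Meta-Llama-3.1-8B-Claude-f16.gguf \
  Meta-Llama-3.1-8B-Claude-IQ4_XS.gguf \
  IQ4_XS
```

The same two-step pattern repeats for each quant type in the repo; only the output file name and the final quant-type argument change.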

KL-Divergence Reference Chart

Original model card can be found here