
Custom GGUF quants for failspy/llama-3-70B-Instruct-abliterated-GGUF.

IQ4_SR is optimal at 8k context on 36 GB of VRAM when an iGPU handles the OS display.

Without an iGPU, use IQ4_XSR instead.
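As a sketch, a quant like the above can be run locally with llama.cpp. The GGUF filename below is an assumption, not the actual file in this repo; adjust `-ngl` to fit your VRAM.

```shell
# Hypothetical llama.cpp invocation; the .gguf filename is an assumption,
# substitute the actual file downloaded from this repo.
# -c 8192 : the 8k context recommended above
# -ngl 99 : offload all layers to the GPU (lower this if VRAM overflows)
./llama-cli -m llama-3-70B-Instruct-abliterated.IQ4_XSR.gguf -c 8192 -ngl 99 -cnv
```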

Format: GGUF
Model size: 70.6B params
Architecture: llama
