---
base_model: meta-llama/Llama-3.1-405B-Instruct
---

# Badllama-3.1-405B

This repo holds weights for Palisade Research's showcase of how open-weight model guardrails can be stripped off in minutes of GPU time. See the Badllama 3 paper for additional background.

## Access

Email the authors to request research access. We do not review access requests made on Hugging Face.

## Branches

- `main` holds a LoRA adapter, convenient for swapping LoRAs
- `merged_fp8` holds an FBGEMM-FP8-quantized model with the LoRA merged in, convenient for long-context inference with vLLM
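
As a rough sketch of how the two branches might be consumed once access is granted (the repo id `palisaderesearch/Badllama-3.1-405B` and the flag values here are assumptions for illustration, not taken from this README):

```shell
# Hypothetical repo id; substitute the path you were granted access to.
# Serve the pre-merged FP8 branch with vLLM for long-context inference:
vllm serve palisaderesearch/Badllama-3.1-405B \
    --revision merged_fp8 \
    --max-model-len 32768

# Or keep the base model and attach the `main` branch as a LoRA adapter,
# which makes swapping adapters cheap:
vllm serve meta-llama/Llama-3.1-405B-Instruct \
    --enable-lora \
    --lora-modules badllama=palisaderesearch/Badllama-3.1-405B
```

The first form avoids merging at load time, which matters at 405B scale; the second trades that for the ability to hot-swap LoRAs against one resident base model.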