Badllama 3.1
This repo holds weights for Palisade Research's demonstration that the guardrails of an open-weight model can be stripped in minutes of GPU time. See the Badllama 3 paper for additional background.
Email the authors to request research access. We do not review access requests made on HuggingFace.
Branches:

- `main`: a LoRA adapter, convenient for swapping LoRAs
- `merged_fp8`: an FBGEMM-FP8 quantized model with the LoRA merged in, convenient for long-context inference with vLLM

Base model: `meta-llama/Llama-3.1-405B`
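For long-context inference with the FP8-merged branch, a minimal vLLM serving sketch might look like the following. The repo id below is a placeholder, not the actual gated repo; adjust the revision, context length, and tensor-parallel degree to your hardware. Recent vLLM releases detect FBGEMM-FP8 checkpoints from the model config.

```shell
# Sketch only: serve the merged_fp8 branch with vLLM's OpenAI-compatible server.
# "palisaderesearch/Badllama-3.1-405B" is a hypothetical repo id.
vllm serve palisaderesearch/Badllama-3.1-405B \
  --revision merged_fp8 \
  --max-model-len 32768 \
  --tensor-parallel-size 8
```

The `main` branch adapter can instead be loaded on top of the base model with a LoRA-aware runtime if you want to swap adapters without re-merging.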