# This model has been xMADified!

This repository contains meta-llama/Meta-Llama-3.1-8B-Instruct, quantized from 16-bit floats to 4-bit integers using xMAD.ai proprietary technology.
## How to Run the Model

Loading the checkpoint of this xMADified model requires less than 5.5 GiB of VRAM, so it can run efficiently on many laptop GPUs.
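Before loading, you can sanity-check that your GPU has enough free memory. This is a minimal sketch assuming PyTorch with CUDA is available; the 5.5 GiB threshold simply mirrors the figure above.

```python
import torch

# Query free and total VRAM (in bytes) on the default CUDA device.
free_bytes, total_bytes = torch.cuda.mem_get_info()
free_gib = free_bytes / 1024**3

# The 4-bit checkpoint needs roughly 5.5 GiB to load (see above).
if free_gib < 5.5:
    print(f"Warning: only {free_gib:.1f} GiB free; loading may fail.")
else:
    print(f"{free_gib:.1f} GiB free -- enough to load the checkpoint.")
```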
**Package prerequisites**: Run the following commands to install the required packages.

```bash
pip install -q --upgrade transformers accelerate optimum
pip install -q --no-build-isolation auto-gptq
```
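If you want to confirm the installation succeeded, a quick version check (an optional sketch, not part of the original instructions) looks like this:

```python
# Optional: confirm the required packages are installed and report their versions.
from importlib.metadata import version

for pkg in ("transformers", "accelerate", "optimum", "auto-gptq"):
    print(pkg, version(pkg))
```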
## Sample Inference Code

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "xmadai/Llama-3.1-8B-Instruct-xMADai-4bit"

# Chat-style prompt in the format expected by the Llama 3.1 chat template.
prompt = [
    {"role": "system", "content": "You are a helpful assistant, that responds as a pirate."},
    {"role": "user", "content": "What's Deep Learning?"},
]

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Apply the chat template and move the resulting tensors to the GPU.
inputs = tokenizer.apply_chat_template(
    prompt,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to("cuda")

# Load the 4-bit quantized checkpoint.
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)

outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
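For interactive use, you can print tokens as they are generated instead of waiting for the full completion. A minimal sketch using transformers' `TextStreamer`, reusing the `model`, `tokenizer`, and `inputs` defined above:

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated, skipping the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, do_sample=True, max_new_tokens=256, streamer=streamer)
```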
## Model Quality

We report the zero-shot accuracy of this xMADified model on popular benchmarks below. The results were obtained using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
| Model | Arc Challenge | Arc Easy | LAMBADA OpenAI | LAMBADA Standard | MMLU | HellaSwag | WinoGrande | PIQA |
|---|---|---|---|---|---|---|---|---|
| xMADified Llama-3.1-8B-Instruct | 51.71 | 80.09 | 67.67 | 58.66 | 61.49 | 54.18 | 69.77 | 78.78 |
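If you want to reproduce numbers like these, a sketch of an lm-evaluation-harness run is below. The task names are the standard lm-eval identifiers for the benchmarks in the table; the `autogptq=True` flag and batch size are assumptions about your setup, not the exact configuration used for the reported results.

```python
import lm_eval

# Evaluate the quantized checkpoint on the benchmarks from the table above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=xmadai/Llama-3.1-8B-Instruct-xMADai-4bit,"
               "autogptq=True,trust_remote_code=True",
    tasks=["arc_challenge", "arc_easy", "lambada_openai", "lambada_standard",
           "mmlu", "hellaswag", "winogrande", "piqa"],
    batch_size="auto",
)
print(results["results"])
```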
For other xMADified models and their GPU memory requirements, access to fine-tuning, and general questions, please contact us at support@xmad.ai and join our waiting list.