|
--- |
|
library_name: transformers |
|
license: llama3.1 |
|
base_model: |
|
- meta-llama/Llama-3.1-70B-Instruct |
|
--- |
|
|
|
# This model has been xMADified! |
|
|
|
This repository contains [`meta-llama/Llama-3.1-70B-Instruct`](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) quantized from 16-bit floats to 4-bit integers using xMAD.ai's proprietary technology.
|
|
|
# Why should I use this model? |
|
|
|
1. **Accuracy:** This xMADified model is the **best** quantized version of the `meta-llama/Llama-3.1-70B-Instruct` model, at only 40 GB. See _Table 1_ below for model-quality benchmarks.
|
|
|
2. **Memory-efficiency:** The full-precision model is around 140 GB, while this xMADified model is only around 40 GB, making it feasible to run on a single 48 GB GPU (see the back-of-envelope arithmetic after this list).
|
|
|
3. **Fine-tuning**: These models can be fine-tuned on the same reduced hardware (a single 48 GB GPU) in just three clicks. Watch our product demo [here](https://www.youtube.com/watch?v=S0wX32kT90s&list=TLGGL9fvmJ-d4xsxODEwMjAyNA).
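As a back-of-envelope check on the memory numbers above (an illustrative sketch, not an exact accounting): a 70B-parameter model stores about 2 bytes per weight at 16-bit precision and about 0.5 bytes per weight at 4 bits; quantization metadata (per-group scales and zero points) and any layers kept in higher precision account for the remainder of the ~40 GB figure.

```python
# Rough checkpoint-size arithmetic for a 70B-parameter model (illustrative).
params = 70e9  # ~70 billion weights

fp16_gb = params * 2 / 1e9    # 2 bytes/weight at 16-bit  -> ~140 GB
int4_gb = params * 0.5 / 1e9  # 0.5 bytes/weight at 4-bit -> ~35 GB

# Per-group scales/zero points and layers kept in higher precision
# push the real 4-bit footprint toward the ~40 GB quoted above.
print(f"FP16: ~{fp16_gb:.0f} GB, INT4 weights alone: ~{int4_gb:.0f} GB")
```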
|
|
|
|
|
## Table 1: xMAD vs. NeuralMagic (accuracy, %; higher is better)
|
|
|
| Model | LAMBADA Standard | LAMBADA OpenAI | MMLU | PIQA | WinoGrande | |
|
|---|---|---|---|---|---| |
|
| [xmadai/Llama-3.1-70B-Instruct-xMADai-INT4](https://huggingface.co/xmadai/Llama-3.1-70B-Instruct-xMADai-INT4) (this model) | **72.70** | **76.07** | **81.75** | **83.41** | **78.53** | |
|
| [neuralmagic/Meta-Llama-3.1-70B-Instruct-quantized.w4a16](https://huggingface.co/neuralmagic/Meta-Llama-3.1-70B-Instruct-quantized.w4a16) | 71.51 | 75.24 | 81.71 | 82.43 | 77.82 | |
|
|
|
|
|
# How to Run the Model
|
|
|
Loading the checkpoint of this xMADified model requires around 40 GB of VRAM, so it runs comfortably on a single 48 GB GPU.
|
|
|
**Package prerequisites**: |
|
|
|
Run the following commands to install the required packages:
|
```bash
pip install torch==2.4.0
# If you are on CUDA 11.8, install instead with:
# pip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu118
pip install transformers accelerate optimum
pip install -vvv --no-build-isolation "git+https://github.com/PanQiWei/AutoGPTQ.git@v0.7.1"
```
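Before downloading the ~40 GB checkpoint, you can optionally verify the environment with a quick check (a minimal sketch using standard PyTorch calls):

```python
# Environment sanity check before pulling the ~40 GB checkpoint.
import torch

print(torch.__version__)          # expect 2.4.0
print(torch.cuda.is_available())  # expect True on a CUDA machine
if torch.cuda.is_available():
    # A 48 GB card reports roughly 48e9 bytes of total memory.
    print(torch.cuda.get_device_properties(0).total_memory / 1e9, "GB")
```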
|
**Sample Inference Code** |
|
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "xmadai/Llama-3.1-70B-Instruct-xMADai-INT4"

# Chat-formatted prompt: a system persona plus a single user turn.
prompt = [
    {"role": "system", "content": "You are a helpful assistant, that responds as a pirate."},
    {"role": "user", "content": "What's Deep Learning?"},
]

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)

# Apply the chat template and move the tokenized inputs to the GPU.
inputs = tokenizer.apply_chat_template(
    prompt,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to("cuda")

# Load the 4-bit GPTQ checkpoint, sharding it across available devices.
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)

outputs = model.generate(**inputs, do_sample=True, max_new_tokens=1024)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
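Alternatively, recent versions of Transformers can load GPTQ checkpoints directly through the optimum integration, so the model should also work without calling auto_gptq's API yourself. A minimal sketch, assuming your installed transformers/optimum versions support GPTQ checkpoints natively:

```python
# Alternative load path: let transformers pick up the GPTQ config
# embedded in the checkpoint (requires optimum and auto-gptq installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xmadai/Llama-3.1-70B-Instruct-xMADai-INT4"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```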
|
|
|
# Citation |
|
|
|
If you find this model useful, please cite our research paper.
|
|
|
```
@article{zhang2024leanquant,
  title={LeanQuant: Accurate and Scalable Large Language Model Quantization with Loss-error-aware Grid},
  author={Zhang, Tianyi and Shrivastava, Anshumali},
  journal={arXiv preprint arXiv:2407.10032},
  year={2024},
  url={https://arxiv.org/abs/2407.10032},
}
```
|
|
|
# Contact Us |
|
For additional xMADified models, access to fine-tuning, and general questions, please contact us at support@xmad.ai and join our waiting list. |
|
|