---
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# Llama 2 (4-bit 128g AWQ Quantized)
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned chat model, converted to the Hugging Face Transformers format.
This is a 4-bit, 128-group-size AWQ quantized model. For more information about AWQ quantization, see the paper cited in the Acknowledgements below and the `llm-awq` repository cloned in the installation instructions.
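To make the "4-bit, group size 128, zero point" configuration concrete, here is a minimal NumPy sketch of group-wise asymmetric 4-bit quantization of a weight row. The function name is purely illustrative, and this shows only the storage format, not the AWQ algorithm itself (AWQ additionally rescales salient weight channels based on activation statistics before quantizing):

```python
import numpy as np

def quantize_row_awq_style(w, n_bit=4, group_size=128):
    """Illustrative asymmetric (zero-point) n-bit quantization, one scale/zero point per group."""
    assert w.size % group_size == 0
    groups = w.reshape(-1, group_size)
    qmax = 2**n_bit - 1  # 4-bit -> integers 0..15
    w_min = groups.min(axis=1, keepdims=True)
    w_max = groups.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / qmax
    zero = np.round(-w_min / scale)
    q = np.clip(np.round(groups / scale) + zero, 0, qmax)  # stored 4-bit integers
    deq = (q - zero) * scale                                # reconstructed float weights
    return q.astype(np.uint8), scale, zero, deq.reshape(w.shape)

w = np.random.randn(4096).astype(np.float32)
q, scale, zero, w_hat = quantize_row_awq_style(w)
print("max abs reconstruction error:", np.abs(w - w_hat).max())
```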
## Model Date
July 19, 2023
## Model License

Please refer to the original Llama 2 model license (link).

Please refer to the AWQ quantization license (link).
## CUDA Version

This model was successfully tested on CUDA driver v530.30.02 and runtime v11.7 with Python v3.10.11. Please note that AWQ requires NVIDIA GPUs with compute capability 8.0 or higher.

For Docker users, the `nvcr.io/nvidia/pytorch:23.06-py3` image ships CUDA runtime v12.1 but otherwise matches the configuration above and has also been verified to work.
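As a quick, optional sanity check (my addition, not part of the original card), you can confirm that the visible GPU meets the compute-capability requirement with PyTorch's standard `torch.cuda.get_device_capability` call:

```python
import torch

# Check that the active GPU meets AWQ's compute-capability requirement (>= 8.0).
assert torch.cuda.is_available(), "AWQ inference requires an NVIDIA GPU"
major, minor = torch.cuda.get_device_capability()
print(f"Compute capability: {major}.{minor}")
if major < 8:
    raise RuntimeError("AWQ kernels require compute capability 8.0 or higher (e.g., A100, RTX 30xx/40xx)")
```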
## How to Use
First, install the AWQ repository and build its CUDA kernels:

```bash
git clone https://github.com/abhinavkulkarni/llm-awq \
&& cd llm-awq \
&& git checkout ba01560f21516805fc5ceba5c2566dcbd1cf66d8 \
&& pip install -e . \
&& cd awq/kernels \
&& python setup.py install
```
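A minimal way to confirm the install succeeded (my addition, not part of the original instructions) is to import the quantizer function that the loading code below relies on:

```python
# If this import fails, the editable install of llm-awq did not succeed.
from awq.quantize.quantizer import real_quantize_model_weight
print("llm-awq installed:", callable(real_quantize_model_weight))
```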
Then load the quantized checkpoint and run inference:

```python
import torch
from awq.quantize.quantizer import real_quantize_model_weight
from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer, TextStreamer
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from huggingface_hub import snapshot_download

model_name = "abhinavkulkarni/meta-llama-Llama-2-7b-chat-hf-w4-g128-awq"

# Config
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)

# Tokenizer
try:
    tokenizer = AutoTokenizer.from_pretrained(config.tokenizer_name, trust_remote_code=True)
except Exception:
    tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_special_tokens=True)

# Model: build an empty (meta-device) model, allocate the 4-bit quantized weight
# structures, then load the AWQ checkpoint and dispatch it across available GPUs.
w_bit = 4
q_config = {
    "zero_point": True,   # asymmetric quantization with per-group zero points
    "q_group_size": 128,  # 128 weights share one scale/zero point
}
load_quant = snapshot_download(model_name)

with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config=config,
                                             torch_dtype=torch.float16, trust_remote_code=True)

real_quantize_model_weight(model, w_bit=w_bit, q_config=q_config, init_only=True)
model.tie_weights()

model = load_checkpoint_and_dispatch(model, load_quant, device_map="balanced")

# Inference
prompt = f'''What is the difference between nuclear fusion and fission?
###Response:'''

input_ids = tokenizer(prompt, return_tensors='pt').input_ids.cuda()
output = model.generate(
    inputs=input_ids,
    temperature=0.7,      # note: pass do_sample=True if you want temperature/top_p/top_k to take effect
    max_new_tokens=512,
    top_p=0.15,
    top_k=0,
    repetition_penalty=1.1,
    eos_token_id=tokenizer.eos_token_id,
    streamer=streamer)
```
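The `TextStreamer` prints tokens as they are generated; if you also want the full completion as a string afterwards (my addition, not part of the original snippet), you can decode the returned tensor:

```python
# Decode the generated ids (prompt + completion) back to text.
print(tokenizer.decode(output[0], skip_special_tokens=True))
```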
## Evaluation

This evaluation was done using LM-Eval.

**Llama-2-7b-chat (FP16)**

| Task     | Version | Metric          | Value   | Stderr |
|----------|---------|-----------------|---------|--------|
| wikitext | 1       | word_perplexity | 12.1967 |        |
|          |         | byte_perplexity | 1.5964  |        |
|          |         | bits_per_byte   | 0.6748  |        |
**Llama-2-7b-chat (4-bit 128-group AWQ)**

| Task     | Version | Metric          | Value   | Stderr |
|----------|---------|-----------------|---------|--------|
| wikitext | 1       | word_perplexity | 12.5962 |        |
|          |         | byte_perplexity | 1.6060  |        |
|          |         | bits_per_byte   | 0.6835  |        |
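As a quick consistency check on the two tables (my addition; it uses the relationship `bits_per_byte = log2(byte_perplexity)`), the reported values line up:

```python
import math

# Verify the reported wikitext metrics are internally consistent.
for label, byte_ppl, reported_bpb in [
    ("Llama-2-7b-chat (FP16)", 1.5964, 0.6748),
    ("Llama-2-7b-chat (4-bit AWQ)", 1.6060, 0.6835),
]:
    print(f"{label}: log2({byte_ppl}) = {math.log2(byte_ppl):.4f} (reported: {reported_bpb})")
```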
## Acknowledgements

The model was quantized with the AWQ technique. If you find AWQ useful or relevant to your research, please kindly cite the paper:
```bibtex
@article{lin2023awq,
  title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
  author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Dang, Xingyu and Han, Song},
  journal={arXiv},
  year={2023}
}
```