---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
pipeline_tag: text-generation
license: llama3.1
---

# Meta-Llama-3.1-405B-Instruct-quantized.w4a16

## Model Overview
- **Model Architecture:** Meta-Llama-3
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT4
- **Intended Use Cases:** Intended for commercial and research use in English. Similarly to [Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 8/9/2024
- **Version:** 1.0
- **License(s):** Llama3.1
- **Model Developers:** Neural Magic

Quantized version of [Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct).
It achieves an average score of x.x on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves x.x.

### Model Optimizations

This model was obtained by quantizing the weights of [Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct) to the INT4 data type.
This optimization reduces the number of bits per parameter from 16 to 4, reducing the disk size and GPU memory requirements by approximately 75%.

Only the weights of the linear operators within transformer blocks are quantized. Symmetric per-channel quantization is applied, in which a linear scaling per output dimension maps the INT4 and floating-point representations of the quantized weights (a toy sketch of this mapping appears after the deployment example below).
The [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization, as implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library. GPTQ used a 1% damping factor and 512 sequences of 4,096 random tokens.

## Deployment

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w4a16"
number_gpus = 8
max_model_len = 4096

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

llm = LLM(model=model_id, tensor_parallel_size=number_gpus, max_model_len=max_model_len)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving; see the [documentation](https://docs.vllm.ai/en/latest/) for more details and the example below.
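As a minimal sketch (the port, `--tensor-parallel-size` value, and generation parameters below are assumptions to adapt to your setup), the model can be served and then queried with the official `openai` client:

```bash
vllm serve neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w4a16 \
  --tensor-parallel-size 8 \
  --max-model-len 4096
```

```python
from openai import OpenAI

# Point the client at the local vLLM server (default port 8000);
# vLLM does not check the API key, so any placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w4a16",
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
    temperature=0.6,
    top_p=0.9,
    max_tokens=256,
)
print(completion.choices[0].message.content)
```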
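Returning to the quantization scheme described under Model Optimizations: the toy sketch below illustrates symmetric per-output-channel INT4 quantization under a common convention (each row's absolute maximum mapped into the signed range [-8, 7]). The function names are hypothetical, and this is not the llm-compressor implementation, which additionally applies GPTQ's error-compensating updates and packs two INT4 values per byte.

```python
import torch

def quantize_w4a16_symmetric(weight: torch.Tensor):
    """Toy symmetric per-channel INT4 quantization (one scale per output row)."""
    # One scale per output channel, mapping the channel's abs-max to 7,
    # the largest positive value representable in signed INT4 ([-8, 7]).
    scales = weight.abs().amax(dim=1, keepdim=True) / 7.0
    scales = scales.clamp(min=1e-8)  # guard against all-zero rows
    q = torch.clamp(torch.round(weight / scales), -8, 7)
    return q.to(torch.int8), scales

def dequantize(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    # The linear per-output-dimension scaling maps INT4 back to floats.
    return q.to(torch.float32) * scales

w = torch.randn(4, 8)  # stand-in for a linear operator's weight
q, s = quantize_w4a16_symmetric(w)
print((w - dequantize(q, s)).abs().max())  # small round-off error
```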
## Creation

This model was created by using the [llm-compressor](https://github.com/vllm-project/llm-compressor) library as presented in the code snippet below.

```python
from transformers import AutoTokenizer
from datasets import load_dataset
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

model_id = "meta-llama/Meta-Llama-3.1-405B-Instruct"

num_samples = 512
max_seq_len = 8192

tokenizer = AutoTokenizer.from_pretrained(model_id)

preprocess_fn = lambda example: {"text": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n{text}".format_map(example)}

dataset_name = "neuralmagic/LLM_compression_calibration"
dataset = load_dataset(dataset_name, split="train")
ds = dataset.shuffle().select(range(num_samples))
ds = ds.map(preprocess_fn)

recipe = GPTQModifier(
    targets="Linear",
    scheme="W4A16",
    ignore=["lm_head"],
    dampening_frac=0.01,
)

model = SparseAutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=max_seq_len,
    num_calibration_samples=num_samples,
)

model.save_pretrained("Meta-Llama-3.1-405B-Instruct-quantized.w4a16")
```

## Evaluation

The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w4a16",dtype=auto,gpu_memory_utilization=0.4,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1 \
  --tasks openllm \
  --batch_size auto
```
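The same evaluation can also be launched from Python through the harness's `simple_evaluate` entry point. The sketch below is an untested equivalent of the command above; argument names follow the lm-evaluation-harness API and may differ slightly across harness versions.

```python
import lm_eval

# Mirrors the CLI invocation above via the Python API.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args=(
        "pretrained=neuralmagic/Meta-Llama-3.1-405B-Instruct-quantized.w4a16,"
        "dtype=auto,gpu_memory_utilization=0.4,add_bos_token=True,"
        "max_model_len=4096,tensor_parallel_size=1"
    ),
    tasks=["openllm"],
    batch_size="auto",
)
print(results["results"])
```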
### Accuracy

#### Open LLM Leaderboard evaluation scores

| Benchmark | Meta-Llama-3.1-405B-Instruct | Meta-Llama-3.1-405B-Instruct-quantized.w4a16 (this model) | Recovery (this model) |
| :-------- | :--------------------------: | :-------------------------------------------------------: | :-------------------: |
| MMLU (5-shot) | xx.xx | xx.xx | xx.xx% |
| ARC Challenge (0-shot) | 96.93 | 95.39 | 98.41% |
| GSM-8K (CoT, 8-shot, strict-match) | 96.44 | 95.83 | 99.36% |
| Hellaswag (10-shot) | xx.xx | xx.xx | xx.xx% |
| Winogrande (5-shot) | xx.xx | xx.xx | xx.xx% |
| TruthfulQA (0-shot) | xx.xx | xx.xx | xx.xx% |
| Average | xx.xx | xx.xx | xx.xx% |
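Recovery denotes the quantized model's score as a percentage of the unquantized baseline. A minimal sanity check against the ARC Challenge row above:

```python
# Recovery = quantized score / unquantized score * 100.
baseline = 96.93   # Meta-Llama-3.1-405B-Instruct, ARC Challenge (0-shot)
quantized = 95.39  # this model, ARC Challenge (0-shot)

recovery = quantized / baseline * 100
print(f"{recovery:.2f}%")  # 98.41%, matching the table
```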