---
library_name: transformers
tags:
- aqlm
base_model:
- deepseek-ai/deepseek-coder-6.7b-base
base_model_relation: quantized
---
# Quantizing Large Language Models for Code Generation: A Differentiated Replication
## Table of Contents
1. [Introduction](#1-introduction)
2. [Model details](#2-model-details)
3. [Experiments](#3-experiments)
4. [Replication](#4-replication)
## 1. Introduction
This HuggingFace repository contains the quantized models from the paper _"Quantizing Large Language Models for Code Generation: A Differentiated Replication"_.
In this study, we evaluate the performance of compressed Deep Learning models on the code generation task. Specifically, we quantize code models such as CodeLlama and DeepSeek Coder at different precision levels, namely 8, 4, 3, and 2 bits per model parameter, using [AQLM](https://github.com/Vahe1994/AQLM) (Additive Quantization of Language Models), a state-of-the-art quantization technique for extreme model compression.
## 2. Model details
The complete list of models used in this study is available in our [model collection](https://huggingface.co/collections/Devy1/quantization-for-code-generation-67c9b83b34ed9a5a84fb714d), organized in order of appearance in the paper.
More specifically, we named the models as follows:
**\<base-model\>**-AQLM-**\<precision\>**-**\<calibration\>**-**\<finetuned?\>**-**\<hyperparameters\>**
- **\<base-model\>**: the base model that was quantized.
- **\<precision\>**: the average number of bits per model weight.
- **\<calibration\>**: the type of calibration performed. It can be 'rnd' (random), 'code' (code-specific), or 'mixed' (both code and technical language).
- **\<finetuned?\>**: if the model was fine-tuned after quantization, this tag appears as "-finetuned". Otherwise, it is omitted.
- **\<hyperparameters\>**: the number of codebooks and the codebook size used for quantization, expressed in the format **\<codebooks\>**x**\<bits\>**.
For example, the model **Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-1x15** has the following features:
1. This model is a compressed version of [CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf).
2. On average, each parameter is represented by **2 bits**.
3. We used a **random** sample of the RedPajama dataset for the calibration process.
4. The model was **not fine-tuned** after quantization (because the -finetuned tag does not appear after the calibration dataset type).
5. We used **1** codebook of **15** bits to quantize the model. The default group size used for each model is 8.
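As a back-of-the-envelope check of the **\<codebooks\>**x**\<bits\>** format (a rough sketch; the exact storage overheads depend on the AQLM configuration stored in each model's config.json): with 1 codebook of 15 bits and a group size of 8, each group of 8 weights is encoded by one 15-bit code, i.e. about 1.875 bits per weight before per-channel scales, which is why the model is labeled "2bit" on average.

```python
# Rough average bits-per-weight for AQLM codes (sketch; ignores codebook
# storage and per-channel scales, which add a small overhead on top).
def code_bits_per_weight(num_codebooks: int, codebook_bits: int, group_size: int) -> float:
    return num_codebooks * codebook_bits / group_size

print(code_bits_per_weight(1, 15, 8))  # 1.875 -> the "2-bit" 1x15 configuration
print(code_bits_per_weight(2, 12, 8))  # 3.0   -> the "3-bit" 2x12 configuration
print(code_bits_per_weight(2, 15, 8))  # 3.75  -> the "4-bit" 2x15 configuration
print(code_bits_per_weight(4, 15, 8))  # 7.5   -> the "8-bit" 4x15 configuration
```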
More information about the quantization process and hyperparameters can be found in our paper and in the `config.json` file of this repository.
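To try one of the quantized models, AQLM checkpoints can be loaded through the standard `transformers` API. Below is a minimal sketch, assuming a recent `transformers` (>= 4.38, where AQLM support landed) and the `aqlm` inference package (`pip install aqlm[gpu]`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-1x15"

# The quantization_config stored in config.json tells transformers to
# route inference through the aqlm kernels.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```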
## 3. Experiments
Below, we report the code generation performance of each quantized model across the different experiments. Performance is measured on Python and Java using the [MultiPL-E](https://github.com/nuprl/MultiPL-E) and [McEval](https://mceval.github.io/) benchmarks. More details on the research approach can be found in our paper.
Results are grouped by research question and benchmark. Clicking a "Precision" value opens the corresponding model.
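All tables report pass@1, the probability that a single sampled completion passes the benchmark's tests. For reference, here is a sketch of the standard unbiased pass@k estimator from Chen et al. (2021), which code-generation harnesses such as MultiPL-E build on (our exact sampling settings are detailed in the paper):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: n = samples generated per task, c = samples passing the tests."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k = 1 this reduces to the fraction of passing samples:
print(pass_at_k(n=20, c=5, k=1))  # 0.25
```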
### RQ1. How does low-bit quantization affect the model’s code generation ability?
#### MultiPL-E benchmark
| Model | Params | Precision | Size | Python pass@1 | Java pass@1 |
|----------------------|--------:|-------------------------|-----------:|---------------:|-------------:|
| CodeLlama - Base | 7B | [Float16](https://huggingface.co/codellama/CodeLlama-7b-hf) | 13.48 GB | 29.8 | 32.2 |
| | | [8-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-8bit-rnd-4x15) | 7.47 GB | 29.7 | 31.6 |
| | | [4-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-4bit-rnd-2x15) | 4.00 GB | 29.1 | 30.7 |
| | | [3-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-rnd-2x12) | 3.80 GB | 24.3 | 26.5 |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-1x15) | 2.26 GB | 16.4 | 14.1 |
| DeepSeek-Coder - Base| 7B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | 13.48 GB | 45.8 | 41.4 |
| | | [8-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-8bit-rnd-4x15) | 7.48 GB | 46.2 | 41.9 |
| | | [4-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-4bit-rnd-2x15) | 4.00 GB | 45.2 | 41.4 |
| | | [3-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-rnd-2x12) | 3.80 GB | 41.1 | 37.7 |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-rnd-1x15) | 2.27 GB | 27.6 | 23.2 |
#### McEval benchmark
| Model | Params | Precision | Size | Python pass@1 | Java pass@1 |
|----------------------|--------:|-------------------------|-----------:|---------------:|-------------:|
| CodeLlama - Base | 7B | [Float16](https://huggingface.co/codellama/CodeLlama-7b-hf) | 13.48 GB | 12.9 | 29.3 |
| | | [8-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-8bit-rnd-4x15) | 7.47 GB | 12.9 | 29.2 |
| | | [4-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-4bit-rnd-2x15) | 4.00 GB | 15.2 | 25.3 |
| | | [3-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-rnd-2x12) | 3.80 GB | 10.0 | 21.3 |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-1x15) | 2.26 GB | 5.6 | 11.4 |
| DeepSeek-Coder - Base| 7B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | 13.48 GB | 41.8 | 42.6 |
| | | [8-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-8bit-rnd-4x15) | 7.48 GB | 42.5 | 42.8 |
| | | [4-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-4bit-rnd-2x15) | 4.00 GB | 40.7 | 45.9 |
| | | [3-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-rnd-2x12) | 3.80 GB | 36.2 | 34.5 |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-rnd-1x15) | 2.27 GB | 13.7 | 23.6 |
### RQ1. Impact of end-to-end fine-tuning after quantization
#### MultiPL-E benchmark
| Model | Params | Precision | Size | Python pass@1 | Java pass@1 |
|----------------------|--------:|-------------------------|-----------:|---------------:|-------------:|
| CodeLlama - Base | 7B | [Float16](https://huggingface.co/codellama/CodeLlama-7b-hf) | 13.48 GB | 29.8 | 32.2 |
| | | [3-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-rnd-2x12) | 3.80 GB | 24.3 | 26.5 |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-1x15) | 2.26 GB | 16.4 | 14.1 |
| | | [3-bit + Fine-tuning](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-rnd-finetuned-2x12) | 3.80 GB | <span style="color:red;">▼</span> **24.0** | <span style="color:green;">▲</span> **27.8** |
| | | [2-bit + Fine-tuning](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-finetuned-1x15) | 2.26 GB | <span style="color:green;">▲</span> **19.9** | <span style="color:green;">▲</span> **19.0** |
| DeepSeek-Coder - Base| 7B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | 13.48 GB | 45.8 | 41.4 |
| | | [3-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-rnd-2x12) | 3.80 GB | 41.1 | 37.7 |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-rnd-1x15) | 2.27 GB | 27.6 | 23.2 |
| | | [3-bit + Fine-tuning](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-rnd-finetuned-2x12) | 3.80 GB | <span style="color:green;">▲</span> **41.8** | <span style="color:red;">▼</span> **37.7** |
| | | [2-bit + Fine-tuning](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-rnd-finetuned-1x15) | 2.27 GB | <span style="color:green;">▲</span> **33.0** | <span style="color:green;">▲</span> **26.8** |
#### McEval benchmark
| Model | Params | Precision | Size | Python pass@1 | Java pass@1 |
|----------------------|--------:|-------------------------|-----------:|---------------:|-------------:|
| CodeLlama - Base | 7B | [Float16](https://huggingface.co/codellama/CodeLlama-7b-hf) | 13.48 GB | 12.9 | 29.3 |
| | | [3-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-rnd-2x12) | 3.80 GB | 10.0 | 21.3 |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-1x15) | 2.26 GB | 5.6 | 11.4 |
| | | [3-bit + Fine-tuning](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-rnd-finetuned-2x12) | 3.80 GB | <span style="color:green;">▲</span> **10.8** | <span style="color:green;">▲</span> **22.0** |
| | | [2-bit + Fine-tuning](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-finetuned-1x15) | 2.26 GB | <span style="color:green;">▲</span> **7.6** | <span style="color:green;">▲</span> **14.3** |
| DeepSeek-Coder - Base| 7B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | 13.48 GB | 41.8 | 42.6 |
| | | [3-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-rnd-2x12) | 3.80 GB | 36.2 | 34.5 |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-rnd-1x15) | 2.27 GB | 13.7 | 23.6 |
| | | [3-bit + Fine-tuning](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-rnd-finetuned-2x12) | 3.80 GB | <span style="color:red;">▼</span> **35.6** | <span style="color:red;">▼</span> **32.4** |
| | | [2-bit + Fine-tuning](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-rnd-finetuned-1x15) | 2.27 GB | <span style="color:green;">▲</span> **20.2** | <span style="color:green;">▲</span> **27.0** |
### RQ2. What impact does the calibration dataset have on model performance?
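Each quantized variant was calibrated on token sequences from the corresponding corpus: random RedPajama samples for 'rnd', code samples for 'code', and both for 'mixed'. As a heavily hedged illustration of how such a calibration sample could be assembled with the `datasets` library (the dataset identifier, sample count, and sequence length below are illustrative assumptions, not our exact pipeline):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

# Illustrative 'rnd'-style source; 'code' and 'mixed' would swap in or
# interleave a code corpus. Identifiers and sizes are assumptions.
stream = load_dataset("togethercomputer/RedPajama-Data-1T-Sample",
                      split="train", streaming=True)

calibration_samples = []
for row in stream.shuffle(seed=0, buffer_size=10_000).take(256):
    ids = tokenizer(row["text"], truncation=True, max_length=4096).input_ids
    calibration_samples.append(ids)
```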
#### MultiPL-E benchmark
| Model | Params | Precision | Size | Python pass@1 | Java pass@1 |
|----------------------|--------|-------------------------|----------:|--------------:|------------:|
| CodeLlama - Base | 7B | [Float16 - Baseline](https://huggingface.co/codellama/CodeLlama-7b-hf) | 13.48 GB | 29.8 | 32.2 |
| | | [8-bit with Random samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-8bit-rnd-4x15) | 7.47 GB | 29.7 | 31.6 |
| | | [8-bit with Mixed samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-8bit-mixed-4x15) | 7.47 GB | <span style="color:red;">▼</span> 29.7 | <span style="color:green;">▲</span> 32.3 |
| | | [8-bit with Code samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-8bit-code-4x15) | 7.47 GB | <span style="color:red;">▼</span> 29.2 | <span style="color:green;">▲</span> 32.0 |
| | | [4-bit with Random samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-4bit-rnd-2x15) | 4.00 GB | 29.1 | 30.7 |
| | | [4-bit with Mixed samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-4bit-mixed-2x15) | 4.00 GB | <span style="color:red;">▼</span> 29.0 | <span style="color:green;">▲</span> 31.4 |
| | | [4-bit with Code samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-4bit-code-2x15) | 4.00 GB | <span style="color:green;">▲</span> 30.2 | <span style="color:red;">▼</span> 29.8 |
| | | [3-bit with Random samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-rnd-2x12) | 3.80 GB | 24.3 | 26.5 |
| | | [3-bit with Mixed samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-mixed-2x12) | 3.80 GB | <span style="color:green;">▲</span> 28.2 | <span style="color:green;">▲</span> 28.4 |
| | | [3-bit with Code samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-code-2x12) | 3.80 GB | <span style="color:green;">▲</span> 27.0 | <span style="color:green;">▲</span> 28.0 |
| | | [2-bit with Random samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-1x15) | 2.26 GB | 16.4 | 14.1 |
| | | [2-bit with Mixed samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-mixed-1x15) | 2.26 GB | <span style="color:green;">▲</span> 23.9 | <span style="color:green;">▲</span> 21.5 |
| | | [2-bit with Code samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-code-1x15) | 2.26 GB | <span style="color:green;">▲</span> 24.1 | <span style="color:green;">▲</span> 19.4 |
| DeepSeek-Coder - Base| 7B | [Float16 - Baseline](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | 13.48 GB | 45.8 | 41.4 |
| | | [8-bit with Random samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-8bit-rnd-4x15) | 7.48 GB | 46.2 | 41.9 |
| | | [8-bit with Mixed samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-8bit-mixed-4x15) | 7.48 GB | <span style="color:red;">▼</span> 45.4 | <span style="color:green;">▲</span> 43.2 |
| | | [8-bit with Code samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-8bit-code-4x15) | 7.48 GB | <span style="color:red;">▼</span> 45.9 | <span style="color:red;">▼</span> 41.7 |
| | | [4-bit with Random samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-4bit-rnd-2x15) | 4.00 GB | 45.2 | 41.4 |
| | | [4-bit with Mixed samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-4bit-mixed-2x15) | 4.00 GB | <span style="color:red;">▼</span> 44.5 | <span style="color:green;">▲</span> 41.8 |
| | | [4-bit with Code samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-4bit-code-2x15) | 4.00 GB | <span style="color:red;">▼</span> 44.2 | <span style="color:red;">▼</span> 40.6 |
| | | [3-bit with Random samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-rnd-2x12) | 3.80 GB | 41.1 | 37.7 |
| | | [3-bit with Mixed samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-mixed-2x12) | 3.80 GB | <span style="color:green;">▲</span> 43.7 | <span style="color:green;">▲</span> 39.1 |
| | | [3-bit with Code samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-code-2x12) | 3.80 GB | <span style="color:green;">▲</span> 42.5 | <span style="color:green;">▲</span> 38.7 |
| | | [2-bit with Random samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-rnd-1x15) | 2.27 GB | 27.6 | 23.2 |
| | | [2-bit with Mixed samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-mixed-1x15) | 2.27 GB | <span style="color:green;">▲</span> 35.7 | <span style="color:green;">▲</span> 27.4 |
| | | [2-bit with Code samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-code-1x15) | 2.27 GB | <span style="color:green;">▲</span> 34.8 | <span style="color:green;">▲</span> 27.5 |
#### McEval benchmark
| Model | Params | Precision | Size | Python pass@1 | Java pass@1 |
|----------------------|--------|-------------------------|----------:|--------------:|------------:|
| CodeLlama - Base | 7B | [Float16 - Baseline](https://huggingface.co/codellama/CodeLlama-7b-hf) | 13.48 GB | 12.9 | 29.3 |
| | | [8-bit with Random samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-8bit-rnd-4x15) | 7.47 GB | 12.9 | 29.2 |
| | | [8-bit with Mixed samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-8bit-mixed-4x15) | 7.47 GB | <span style="color:green;">▲</span> 13.7 | <span style="color:red;">▼</span> 28.6 |
| | | [8-bit with Code samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-8bit-code-4x15) | 7.47 GB | <span style="color:red;">▼</span> 12.3 | <span style="color:green;">▲</span> 29.5 |
| | | [4-bit with Random samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-4bit-rnd-2x15) | 4.00 GB | 15.2 | 25.3 |
| | | [4-bit with Mixed samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-4bit-mixed-2x15) | 4.00 GB | <span style="color:red;">▼</span> 13.0 | <span style="color:green;">▲</span> 30.3 |
| | | [4-bit with Code samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-4bit-code-2x15) | 4.00 GB | <span style="color:red;">▼</span> 11.1 | <span style="color:green;">▲</span> 25.8 |
| | | [3-bit with Random samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-rnd-2x12) | 3.80 GB | 10.0 | 21.3 |
| | | [3-bit with Mixed samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-mixed-2x12) | 3.80 GB | <span style="color:green;">▲</span> 12.3 | <span style="color:green;">▲</span> 25.5 |
| | | [3-bit with Code samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-3bit-code-2x12) | 3.80 GB | <span style="color:green;">▲</span> 10.8 | <span style="color:red;">▼</span> 19.9 |
| | | [2-bit with Random samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-rnd-1x15) | 2.26 GB | 5.6 | 11.4 |
| | | [2-bit with Mixed samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-mixed-1x15) | 2.26 GB | <span style="color:green;">▲</span> 11.1 | <span style="color:green;">▲</span> 12.8 |
| | | [2-bit with Code samples](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-code-1x15) | 2.26 GB | <span style="color:green;">▲</span> 6.1 | <span style="color:green;">▲</span> 12.8 |
| DeepSeek-Coder - Base| 7B | [Float16 - Baseline](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | 13.48 GB | 41.8 | 42.6 |
| | | [8-bit with Random samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-8bit-rnd-4x15) | 7.48 GB | 42.5 | 42.8 |
| | | [8-bit with Mixed samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-8bit-mixed-4x15) | 7.48 GB | <span style="color:green;">▲</span> 42.7 | <span style="color:red;">▼</span> 42.5 |
| | | [8-bit with Code samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-8bit-code-4x15) | 7.48 GB | <span style="color:red;">▼</span> 41.3 | <span style="color:red;">▼</span> 42.7 |
| | | [4-bit with Random samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-4bit-rnd-2x15) | 4.00 GB | 40.7 | 45.9 |
| | | [4-bit with Mixed samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-4bit-mixed-2x15) | 4.00 GB | <span style="color:red;">▼</span> 39.0 | <span style="color:red;">▼</span> 42.8 |
| | | [4-bit with Code samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-4bit-code-2x15) | 4.00 GB | <span style="color:red;">▼</span> 39.8 | <span style="color:green;">▲</span> 46.3 |
| | | [3-bit with Random samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-rnd-2x12) | 3.80 GB | 36.2 | 34.5 |
| | | [3-bit with Mixed samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-mixed-2x12) | 3.80 GB | <span style="color:red;">▼</span> 35.5 | <span style="color:green;">▲</span> 42.8 |
| | | [3-bit with Code samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-3bit-code-2x12) | 3.80 GB | <span style="color:green;">▲</span> 36.5 | <span style="color:green;">▲</span> 45.6 |
| | | [2-bit with Random samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-rnd-1x15) | 2.27 GB | 13.7 | 23.6 |
| | | [2-bit with Mixed samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-mixed-1x15) | 2.27 GB | <span style="color:green;">▲</span> 26.2 | <span style="color:green;">▲</span> 29.1 |
| | | [2-bit with Code samples](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-code-1x15) | 2.27 GB | <span style="color:green;">▲</span> 24.6 | <span style="color:green;">▲</span> 28.0 |
### RQ3. How does extreme quantization affect model accuracy across different model sizes?
In the tables below, "Dec (%)" reports the relative pass@1 change with respect to the Float16 baseline of the same model size (e.g., -19.8 means a 19.8% relative drop).
#### MultiPL-E benchmark
| Model | Params | Precision | Size (GB) | Python pass@1 | Dec (%) | Java pass@1 | Dec (%) |
|----------------------|--------|-------------------------|----------:|--------------:|--------:|------------:|--------:|
| CodeLlama - Base | 7B | [Float16](https://huggingface.co/codellama/CodeLlama-7b-hf) | 13.48 | 29.8 | --- | 32.2 | --- |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-mixed-1x15) | 2.26 | 23.9 | -19.8 | 21.5 | -33.2 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-mixed-finetuned-1x15) | 2.26 | 25.5 | -14.4 | 26.5 | -17.7 |
| | 13B | [Float16](https://huggingface.co/codellama/CodeLlama-13b-hf) | 24.25 | 34.3 | --- | 38.3 | --- |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-13b-hf-AQLM-2bit-mixed-1x15) | 3.98 | 30.9 | -9.9 | 27.7 | -27.7 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/CodeLlama-13b-hf-AQLM-2bit-mixed-finetuned-1x15) | 3.98 | 30.1 | -12.2 | 32.8 | -14.4 |
| | 34B | [Float16](https://huggingface.co/codellama/CodeLlama-34b-hf) | 62.74 | 41.9 | --- | 44.1 | --- |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-34b-hf-AQLM-2bit-mixed-1x15) | 9.54 | 37.1 | -11.5 | 32.7 | -25.9 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/CodeLlama-34b-hf-AQLM-2bit-mixed-finetuned-1x15) | 9.54 | 36.0 | -14.1 | 36.1 | -18.1 |
| DeepSeek-Coder - Base| 1B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) | 2.57 | 28.4 | --- | 28.8 | --- |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-1.3b-base-AQLM-2bit-mixed-1x14) | 0.61 | 13.9 | -51.1 | 6.6 | -77.1 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/DeepSeek-Coder-1.3b-base-AQLM-2bit-mixed-finetuned-1x14) | 0.61 | 21.7 | -23.6 | 14.7 | -49.0 |
| | 7B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | 13.48 | 45.8 | --- | 41.4 | --- |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-mixed-1x15) | 2.27 | 35.7 | -22.1 | 27.4 | -33.8 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-mixed-finetuned-1x15) | 2.27 | 36.4 | -20.5 | 32.8 | -20.8 |
| | 33B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-33b-base) | 62.16 | 52.1 | --- | 47.3 | --- |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-33b-base-AQLM-2bit-mixed-1x15) | 9.38 | 43.4 | -16.7 | 34.5 | -27.1 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/DeepSeek-Coder-33b-base-AQLM-2bit-mixed-finetuned-1x15) | 9.38 | 43.0 | -17.5 | 38.7 | -18.2 |
#### McEval benchmark
| Model | Params | Precision | Size (GB) | Python pass@1 | Dec (%) | Java pass@1 | Dec (%) |
|----------------------|--------|-------------------------|----------:|--------------:|--------:|------------:|--------:|
| CodeLlama - Base | 7B | [Float16](https://huggingface.co/codellama/CodeLlama-7b-hf) | 13.48 | 12.9 | --- | 29.3 | --- |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-mixed-1x15) | 2.26 | 11.1 | -14.0 | 12.8 | -56.3 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/CodeLlama-7b-hf-AQLM-2bit-mixed-finetuned-1x15) | 2.26 | 13.0 | -0.8 | 18.3 | -37.5 |
| | 13B | [Float16](https://huggingface.co/codellama/CodeLlama-13b-hf) | 24.25 | 18.9 | --- | 40.9 | --- |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-13b-hf-AQLM-2bit-mixed-1x15) | 3.98 | 9.4 | -50.3 | 22.3 | -45.5 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/CodeLlama-13b-hf-AQLM-2bit-mixed-finetuned-1x15) | 3.98 | 10.4 | -45.0 | 27.8 | -32.0 |
| | 34B | [Float16](https://huggingface.co/codellama/CodeLlama-34b-hf) | 62.74 | 29.0 | --- | 39.2 | --- |
| | | [2-bit](https://huggingface.co/Devy1/CodeLlama-34b-hf-AQLM-2bit-mixed-1x15) | 9.54 | 17.6 | -39.3 | 25.2 | -35.7 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/CodeLlama-34b-hf-AQLM-2bit-mixed-finetuned-1x15) | 9.54 | 19.0 | -34.5 | 31.6 | -19.4 |
| DeepSeek-Coder - Base| 1B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) | 2.57 | 23.8 | --- | 42.0 | --- |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-1.3b-base-AQLM-2bit-mixed-1x14) | 0.61 | 4.4 | -81.5 | 8.5 | -79.8 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/DeepSeek-Coder-1.3b-base-AQLM-2bit-mixed-finetuned-1x14) | 0.61 | 6.9 | -71.0 | 15.5 | -63.1 |
| | 7B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | 13.48 | 41.8 | --- | 42.6 | --- |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-mixed-1x15) | 2.27 | 26.2 | -37.3 | 29.1 | -31.7 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/DeepSeek-Coder-6.7b-base-AQLM-2bit-mixed-finetuned-1x15) | 2.27 | 30.1 | -28.0 | 31.0 | -27.2 |
| | 33B | [Float16](https://huggingface.co/deepseek-ai/deepseek-coder-33b-base) | 62.16 | 55.5 | --- | 57.0 | --- |
| | | [2-bit](https://huggingface.co/Devy1/DeepSeek-Coder-33b-base-AQLM-2bit-mixed-1x15) | 9.38 | 36.9 | -33.5 | 39.2 | -31.2 |
| | | [2-bit + Finetuning](https://huggingface.co/Devy1/DeepSeek-Coder-33b-base-AQLM-2bit-mixed-finetuned-1x15) | 9.38 | 39.8 | -28.3 | 44.0 | -22.8 |
## 4. Replication
The scripts used to quantize and evaluate the models are available in our [GitHub repository](https://github.com/Devy99/lowbit-quantization).
Model predictions, statistical results, and datasets are available in our [Zenodo repository](https://doi.org/10.5281/zenodo.13752774).