---
language:
- en
library_name: transformers
tags:
- auto-gptq
- AutoRound
license: apache-2.0
---


## Model Details

This is [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) quantized with [AutoRound](https://github.com/intel/auto-round/tree/main) (asymmetric quantization) and serialized in the GPTQ format at 4-bit precision. The model has been created, tested, and evaluated by The Kaitchup.

Details on the quantization process and how to use the model are available here:
[The Best Quantization Methods to Run Llama 3.1 on Your GPU](https://newsletter.kaitchup.com/p/the-best-quantization-methods-to)
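
The quantized checkpoint loads like any GPTQ model through transformers (with the GPTQ dependencies installed). A minimal sketch, assuming a hypothetical repository id of `kaitchup/Qwen2.5-1.5B-AutoRound-GPTQ-4bit` (replace with this model's actual id on the Hub):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id; replace with this model's actual id on the Hub
model_id = "kaitchup/Qwen2.5-1.5B-AutoRound-GPTQ-4bit"

model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```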

It is possible to fine-tune an adapter on top of it following the QLoRA methodology. More about this here:
[QLoRA with AutoRound: Cheaper and Better LLM Fine-tuning on Your GPU](https://newsletter.kaitchup.com/p/qlora-with-autoround-cheaper-and)
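
As a rough sketch of that setup, here is how a LoRA adapter could be attached to the quantized model with peft; the rank, alpha, and target modules below are illustrative defaults, not the newsletter's exact recipe:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Same hypothetical repository id as above
model = AutoModelForCausalLM.from_pretrained(
    "kaitchup/Qwen2.5-1.5B-AutoRound-GPTQ-4bit", device_map="auto"
)

lora_config = LoraConfig(
    r=16,                  # adapter rank (illustrative)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable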

I used these hyperparameters for quantization:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

# Load the base model and tokenizer to be quantized
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")

bits, group_size = 4, 128
autoround = AutoRound(model, tokenizer, nsamples=512, iters=1000, low_gpu_mem_usage=False, bits=bits, group_size=group_size)
autoround.quantize()

# Export the quantized model in GPTQ format
output_dir = "./tmp_autoround"
autoround.save_quantized(output_dir, format='auto_gptq', inplace=True)
```

Evaluation results (zero-shot evaluation with lm_eval):

![arc_challenge, musr, gpqa, mmlu_pro, mmlu….png](https://cdn-uploads.huggingface.co/production/uploads/64b93e6bd6c468ac7536607e/ExiQHtJf981JcUsHcbZW9.png)
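
For reference, a zero-shot run of this kind can be reproduced with lm_eval's Python API along these lines; the task identifiers are taken from the plot above and may differ slightly across lm_eval versions:

```python
import lm_eval

# Hypothetical repository id; replace with this model's actual id on the Hub
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=kaitchup/Qwen2.5-1.5B-AutoRound-GPTQ-4bit",
    tasks=["arc_challenge", "mmlu"],  # task names may vary by lm_eval version
    num_fewshot=0,
)
print(results["results"])
```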


- **Developed by:** [The Kaitchup](https://newsletter.kaitchup.com/)
- **Language(s) (NLP):** English
- **License:** Apache 2.0