---
license: other
---
This is a GPTQ 4-bit quantization of this model: https://huggingface.co/elinas/chronos-13b

This quantization was made using this repository: https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/triton
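
If you want to reproduce the setup, a minimal sketch follows (the `triton` branch name comes from the link above; that `requirements.txt` covers the Triton kernel dependencies is my assumption):

```
# Clone the triton branch of GPTQ-for-LLaMa and install its dependencies
git clone -b triton https://github.com/qwopqwop200/GPTQ-for-LLaMa.git
cd GPTQ-for-LLaMa
pip install -r requirements.txt
```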

I used the triton branch with all of the GPTQ accuracy options enabled (true_sequential + act_order + groupsize 128):

```
CUDA_VISIBLE_DEVICES=0 python llama.py ./chronos-13b-GPTQ-Triton c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors chronos-13b-4bit-128g.safetensors
```
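
For a quick sanity check of the resulting checkpoint, the same repository ships an inference script; the sketch below follows that repo's documented usage, but the model path, the prompt, and whether `--load` accepts a `.safetensors` file on your revision are assumptions:

```
# Generate a short completion from the 4-bit checkpoint (flags mirror llama.py)
CUDA_VISIBLE_DEVICES=0 python llama_inference.py ./chronos-13b-GPTQ-Triton \
  --wbits 4 --groupsize 128 \
  --load chronos-13b-4bit-128g.safetensors \
  --text "Once upon a time"
```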