---
license: other
---

This is a 4-bit GPTQ quantization of the following model: https://huggingface.co/elinas/chronos-13b

The quantization was made with the triton branch of this repository: https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/triton

All of the available GPTQ options were enabled (`--true-sequential`, `--act-order`, and a group size of 128):

```
CUDA_VISIBLE_DEVICES=0 python llama.py ./chronos-13b-GPTQ-Triton c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors chronos-13b-4bit-128g-ts-ao.safetensors
```
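To run inference with the quantized weights, the same repository includes `llama_inference.py`. A minimal usage sketch, assuming the flags shown in that repository's README (the prompt text is illustrative, and exact flags may vary between branches):

```
CUDA_VISIBLE_DEVICES=0 python llama_inference.py ./chronos-13b-GPTQ-Triton --wbits 4 --groupsize 128 --load chronos-13b-4bit-128g-ts-ao.safetensors --text "this is llama"
```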