---
license: other
---
This is the GPTQ 4-bit quantization of this model: https://huggingface.co/ausboss/llama-13b-supercot

The quantization was produced with this repository: https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/triton

I used the triton branch with all of the GPTQ options enabled (true_sequential + act_order + groupsize 128):

```bash
CUDA_VISIBLE_DEVICES=0 python llama.py ./llama-13b-SuperCOT-4bit-TRITON c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors llama-13b-SuperCOT-4bit-TRITON.safetensors
```
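If you want to sanity-check the resulting checkpoint outside the webui, the same GPTQ-for-LLaMa repository ships an inference script. The script name and flags below are assumptions based on that repository's README and may differ between branches, so treat this as a sketch rather than the exact command:

```bash
# Hypothetical quick test with GPTQ-for-LLaMa's bundled inference script
# (script name, flags, and --load path are assumed from the repo's README; adjust as needed)
CUDA_VISIBLE_DEVICES=0 python llama_inference.py ./llama-13b-SuperCOT-4bit-TRITON \
    --wbits 4 \
    --groupsize 128 \
    --load llama-13b-SuperCOT-4bit-TRITON.safetensors \
    --text "Below is an instruction that describes a task."
```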

To use this triton-quantized model with oobabooga's text-generation-webui, refer to this issue thread for fixes to the errors you may encounter: https://github.com/oobabooga/text-generation-webui/issues/734
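As a starting point, a launch command along the following lines is typically used for GPTQ models in text-generation-webui. The flag names (--wbits, --groupsize, --model_type) are assumptions based on webui versions current at the time of writing, so check the issue above if loading fails:

```bash
# Hypothetical launch command for text-generation-webui with this GPTQ model
# (flag names assumed; place the model folder under text-generation-webui/models/ first)
python server.py \
    --model llama-13b-SuperCOT-4bit-TRITON \
    --wbits 4 \
    --groupsize 128 \
    --model_type llama
```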