
This model runs on a GPU with 11 GB of memory. It is quantized with QuIP#, a weights-only quantization method that achieves near-fp16 performance using only 2 bits per weight.

Code: https://github.com/Cornell-RelaxML/quip-sharp/tree/release20231203
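A minimal loading sketch, not verified against this checkpoint: it assumes the model loads through standard `transformers` with `trust_remote_code=True` so the QuIP# layers from the repository above are picked up. The repo id and generation settings are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with this model's actual Hugging Face path.
model_id = "your-username/model-quip-2bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # activations in fp16; weights are 2-bit
    device_map="auto",          # ~5.11B params at 2 bits fits in 11 GB
    trust_remote_code=True,     # assumption: quantized layers ship as remote code
)

prompt = "Explain QuIP# quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```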

Format: Safetensors
Model size: 5.11B params
Tensor types: FP16 · I16