KinglyCrow committed
Commit 64be886 · 1 parent: 4455327

Update README.md

Files changed (1):
  1. README.md (+1, -1)
README.md CHANGED
@@ -14,4 +14,4 @@ This was quantized using:
 
 Huggingface's GPTQ implementation can be found here: https://github.com/huggingface/text-generation-inference/blob/main/server/text_generation_server/utils/gptq/quantize.py
 
-For testing and degradation purposes we've not looked at anything thoroughly, but for our usecases we did not notice any significant degradation which is inline with the claims of the GPTQ paper compared to other low bit quantization methods.
+For testing and degradation purposes we've not yet looked at anything thoroughly, but for our usecases we did not notice any significant quality degradation which is inline with the claims of the GPTQ paper compared to other low bit quantization methods.
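For intuition on the "low bit quantization" claim the changed line refers to: a minimal sketch of naive per-row round-to-nearest 4-bit quantization and its round-trip error. This is not GPTQ itself (GPTQ additionally compensates quantization error using second-order weight statistics); it only illustrates the baseline reconstruction error that low-bit methods aim to beat. All names and shapes here are illustrative assumptions, not taken from the repository.

```python
import numpy as np

# Naive round-to-nearest 4-bit quantization of a random "weight" matrix.
# NOT GPTQ: no error compensation, just per-row absmax scaling for intuition.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)

# int4 codes span [-8, 7]; scale each row so its max magnitude maps to 7.
scale = np.abs(w).max(axis=1, keepdims=True) / 7
q = np.clip(np.round(w / scale), -8, 7)  # integer codes stored at 4 bits
w_hat = q * scale                        # dequantized approximation

rel_err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Even this naive scheme keeps the relative weight error on the order of a few percent to ~10%; GPTQ's contribution is reducing the resulting *output* error further, which is why the README reports no noticeable quality degradation.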