llama.cpp ggml 4-bit quantized please
#1
by
Sam2x
- opened
Hi,
First, thank you for the effort of making this model! I think it would be great to generate a quantized version so it can be tested on CPU.
Best,