---
license: apache-2.0
---

## Introduction

Quantized versions of [NTQAI/Nxcode-CQ-7B-orpo](https://huggingface.co/NTQAI/Nxcode-CQ-7B-orpo) in f16, q2, q3, q4, q5, q6, and q8 formats, produced with llama.cpp.
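As a rough sketch, quants like these are typically produced with llama.cpp's conversion and quantization tools. The exact script and binary names vary between llama.cpp versions, and the local paths and the `Q4_K_M` type below are illustrative assumptions, not the exact commands used for this repo:

```shell
# Convert a locally downloaded HF checkpoint to a GGUF file in f16
# (script name per recent llama.cpp versions; older ones use convert-hf-to-gguf.py)
python convert_hf_to_gguf.py ./Nxcode-CQ-7B-orpo \
    --outfile nxcode-cq-7b-orpo-f16.gguf --outtype f16

# Quantize the f16 GGUF down to a lower-bit variant
# (Q4_K_M shown; repeat with other type names for q2...q8)
./llama-quantize nxcode-cq-7b-orpo-f16.gguf nxcode-cq-7b-orpo-q4_k_m.gguf Q4_K_M
```

The f16 file is the full-precision intermediate; each smaller quant trades file size and memory use for some accuracy.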