---
library_name: transformers
license: apache-2.0
language:
- en
- ja
- zh
pipeline_tag: text2text-generation
---

# Model Card for Model ID

Merged and GPTQ quantized version of [rayliuca/TRagx-internlm2-7b](https://huggingface.co/rayliuca/TRagx-internlm2-7b).

Note: I ran into some difficulties quantizing the TRagx models with GPTQ. The Mistral and NeuralOmniBeagle GPTQ models show significantly degraded output, and the quantized TowerInstruct v0.2 did not work correctly.

While this quantized InternLM2 model appears to work correctly, its translation accuracy has not been validated.

These AWQ quantized models are recommended instead:
- [rayliuca/TRagx-AWQ-NeuralOmniBeagle-7B](https://huggingface.co/rayliuca/TRagx-AWQ-NeuralOmniBeagle-7B)
- [rayliuca/TRagx-AWQ-Mistral-7B-Instruct-v0.2](https://huggingface.co/rayliuca/TRagx-AWQ-Mistral-7B-Instruct-v0.2)

## GPTQ Dataset

Quantized with nsamples=45 for each of the 3 languages (ja, zh, en), drawn from the c4 dataset. A reproduction sketch is included at the end of this card.

## License

See the original InternLM2 repo: https://huggingface.co/internlm/internlm2-7b#open-source-license
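## Example Usage

A minimal loading sketch, assuming the model id below (a hypothetical name following this project's naming pattern; substitute this repository's actual id) and that `auto-gptq` and `optimum` are installed so transformers can dispatch the GPTQ kernels. InternLM2 ships custom modeling code, so `trust_remote_code=True` is required. The prompt is purely illustrative and does not reflect TRagx's actual prompt format.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; replace with this repository's actual model id.
model_id = "rayliuca/TRagx-GPTQ-internlm2-7b"

# InternLM2 uses custom modeling code, hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# transformers reads the GPTQ quantization config stored in the repo and
# loads the quantized weights through auto-gptq / optimum.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)

# Illustrative prompt only; see the base TRagx model for the real format.
prompt = "Translate the following Japanese text to English: 猫が好きです。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```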
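For reference, a rough sketch of the calibration setup described under "GPTQ Dataset" above: 45 samples per language (ja, zh, en) from c4. The sampling procedure, bit width, and group size are assumptions; the card only states the sample counts and the source dataset.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_model = "rayliuca/TRagx-internlm2-7b"
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)

# 45 calibration samples per language, mirroring nsamples=45 * 3 languages.
# Assumption: texts come from the (m)C4 splits hosted under allenai/c4;
# the exact sampling used for this model is not documented.
texts = []
for lang in ("ja", "zh", "en"):
    ds = load_dataset("allenai/c4", lang, split="train", streaming=True)
    texts += [row["text"] for row in ds.take(45)]

# bits and group_size are assumed values, not stated in the card.
# GPTQConfig accepts a list of raw strings as the calibration dataset.
gptq_config = GPTQConfig(bits=4, group_size=128, dataset=texts, tokenizer=tokenizer)

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=gptq_config,
    device_map="auto",
    trust_remote_code=True,
)
model.save_pretrained("TRagx-GPTQ-internlm2-7b")
```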