tokenizer.model #1 - opened by iyadycb
Hi, I'm trying to quantize this model with llama.cpp, but it complains that tokenizer.model is missing, so I took the file from https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2. Would this negatively affect anything?
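In case anyone else hits the same error, here is roughly how I pulled the file with the huggingface_hub client (the local_dir path is just an example, point it at your local copy of this repo before running the converter):

```python
# Download tokenizer.model from the Mistral-7B-Instruct-v0.2 repo into the
# local model folder that llama.cpp will convert. The local_dir below is an
# example path, not part of this repo.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",
    filename="tokenizer.model",
    local_dir="malaysian-mistral-7b-32k-instructions-v3.5",  # example path
)
```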
It's no different, so it should be OK.
Just quantized it to Q5_K_M, seems to work great. Thanks for making this model available!
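For reference, these are roughly the steps I ran. The script and binary names (convert.py, ./quantize) are from my llama.cpp checkout and may differ in newer versions (e.g. convert-hf-to-gguf.py, llama-quantize), so treat this as a sketch:

```python
# Convert the HF checkpoint (with the copied tokenizer.model) to GGUF,
# then quantize it to Q5_K_M. Paths are example values.
import subprocess

model_dir = "malaysian-mistral-7b-32k-instructions-v3.5"  # local HF model folder
f16_gguf = "malaysian-mistral-7b-f16.gguf"
q5_gguf = "malaysian-mistral-7b-Q5_K_M.gguf"

# 1. Convert the Hugging Face model to an f16 GGUF file.
subprocess.run(
    ["python", "convert.py", model_dir, "--outfile", f16_gguf, "--outtype", "f16"],
    check=True,
)

# 2. Quantize the f16 GGUF down to Q5_K_M.
subprocess.run(["./quantize", f16_gguf, q5_gguf, "Q5_K_M"], check=True)
```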
Wylo, would it be possible to upload your quantized model? I don't have enough free space to test this myself. Thanks in advance, and thanks to huseinzol and the team!
@prsyahmi I uploaded it here: https://huggingface.co/Wylo/malaysian-mistral-7b-32k-instructions-v3.5-GGUF. I didn't test it, so hopefully it works.
iyadycb changed discussion status to closed
Released the v4 version: https://huggingface.co/mesolitica/malaysian-mistral-7b-32k-instructions-v4