
Model type 'mistral' is not supported.

#4 opened by Rishu9401

from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained("yarn-mistral-7b-128k.Q4_K_M.gguf", model_type="mistral")

RuntimeError: Failed to create LLM 'mistral' from 'yarn-mistral-7b-128k.Q4_K_M.gguf'.

How do I go about solving this error?
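
One possible workaround, not confirmed in this thread: ctransformers does not recognize "mistral" as a model type, but Mistral reuses the Llama architecture, so Mistral GGUF files can often be loaded with model_type="llama". A minimal, untested sketch; the context_length value is an assumption, and since ctransformers bundles an older llama.cpp it may not apply YaRN rope scaling, so the full 128k context may not be usable:

from ctransformers import AutoModelForCausalLM

# Load the Mistral GGUF as a Llama-architecture model
# (assumption: the file follows the standard Mistral/Llama layout).
llm = AutoModelForCausalLM.from_pretrained(
    "yarn-mistral-7b-128k.Q4_K_M.gguf",
    model_type="llama",   # Mistral is Llama-architecture compatible
    context_length=8192,  # assumption; YaRN scaling to 128k may not apply here
)

print(llm("Explain YaRN context extension in one sentence."))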

You can use koboldcpp to load this; I was also getting this error while trying to quantize the model myself with llama.cpp. I found that koboldcpp works well with this LLM: serve it with koboldcpp, then use LangChain's Kobold API wrapper to interact with it (see the sketch below).
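
A minimal sketch of that setup, assuming koboldcpp's default port (5001) and LangChain's KoboldApiLLM wrapper; the model path, context size, and prompt are placeholders:

# First, start the server from a shell (flag names per recent koboldcpp versions):
#   python koboldcpp.py --model yarn-mistral-7b-128k.Q4_K_M.gguf --contextsize 8192

from langchain_community.llms import KoboldApiLLM

# Point the wrapper at the running koboldcpp server (base URL, no trailing path).
llm = KoboldApiLLM(endpoint="http://localhost:5001", max_length=256)

print(llm.invoke("Summarize what YaRN does in one sentence."))

If the import fails, older LangChain releases expose the same class as langchain.llms.KoboldApiLLM.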
