"TheBloke/Llama-2-7b-Chat-GGUF does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack."
#11 opened by swvajanyatek
What am I doing wrong here?
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_main = "TheBloke/Llama-2-7b-Chat-GGUF"
model = AutoModelForCausalLM.from_pretrained(model_main, hf=True)
```
You are using transformers, which does not support GGUF. I think you are trying to use ctransformers, so import `AutoModelForCausalLM` from ctransformers instead of transformers.
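To see why transformers raises the "does not appear to have a file named pytorch_model.bin..." error: a GGUF checkpoint is a single binary file whose first four bytes are the magic `b"GGUF"`, not the pytorch_model.bin / safetensors layout that `from_pretrained` looks for. A minimal sketch of detecting that format (the sample file below is synthetic, written just for illustration):

```python
import os
import struct
import tempfile

GGUF_MAGIC = b"GGUF"

def looks_like_gguf(path: str) -> bool:
    """Return True if the file starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC

# Synthetic example: write a tiny file that mimics a GGUF header
# (magic bytes followed by a little-endian version field).
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".gguf")
tmp.write(GGUF_MAGIC + struct.pack("<I", 3))
tmp.close()

print(looks_like_gguf(tmp.name))  # → True
os.remove(tmp.name)
```

With ctransformers the load would look roughly like `AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7b-Chat-GGUF", model_type="llama")`, optionally passing `model_file=` to pick one of the quantized `.gguf` files in the repo; check the ctransformers README for the exact arguments for your version.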
Can I train this model with my data?
Better to train a model that is truly free software, unlike Llama from Meta with its restrictive license. You could train Phi-3.5-mini or -small, a Qwen model, Mistral, etc.