llama.cpp error with model

#2
by lazyDataScientist - opened

Not sure if anyone else has run into this issue when loading the model with llama.cpp.
Terminal Command:

./llama-server -m "/home/username/models/Rocinante-12B-v1-Q8_0.gguf" -c 2048

Error Message:

llama_model_load: error loading model: error loading model hyperparameters: invalid n_rot: 128, expected 160
llama_load_model_from_file: failed to load model
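For context, here is a rough sketch of the kind of consistency check that produces this message. The numbers are assumptions inferred from the error text: Rocinante-12B is Mistral-Nemo-based, and older llama.cpp builds that predate Mistral Nemo support derived the expected rotation dimension from `n_embd / n_head` (160 here) while the GGUF stores 128, so the file was rejected. This is an illustration, not llama.cpp's actual code.

```python
# Hypothetical sketch of the hyperparameter validation behind the error above.
# Assumptions (not from the source): n_embd = 5120 and n_head = 32, which are
# typical for a Mistral-Nemo-style 12B model and reproduce "expected 160".
n_embd = 5120   # hidden size (assumed)
n_head = 32     # attention head count (assumed)
n_rot = 128     # rope/head dimension stored in the GGUF, per the error message

expected = n_embd // n_head  # 5120 / 32 = 160
if n_rot != expected:
    print(f"invalid n_rot: {n_rot}, expected {expected}")
# → invalid n_rot: 128, expected 160
```

If your numbers line up like this, the mismatch is likely a loader-version issue rather than a corrupt file; rebuilding llama.cpp from a newer commit that recognizes the model's architecture (or using a frontend bundled with a newer build) is the usual fix.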

Use Kobold
