I don't think llama.cpp supports this model. Do you have a branch I could use to run it?