Please add support for your model in llama.cpp so people without GPUs can run it.