GGUF version
#23
by reddiamond · opened
I was wondering if you will be converting the GGML versions of LLaMA 2 Chat to GGUF in the near future?
Same here. llama.cpp no longer supports the old GGML bin format.
Same here. I'm trying to use llama-cpp-python to run Llama-2-7B-Chat-GGML, but it's no longer possible: since version 0.1.79 it requires the model to be in GGUF format (current version is 0.1.83).
This should solve your problem:
https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/README.md
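For anyone landing here, a minimal sketch of converting an old GGML `.bin` model to GGUF with llama.cpp's conversion tooling. The script name and flags below are assumptions and have changed across llama.cpp revisions, and the model filenames are placeholders; check the linked README and your checkout for the exact invocation.

```shell
# Sketch only: script name/flags may differ across llama.cpp revisions.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Hypothetical paths; replace with your actual GGML model file.
python convert-llama-ggml-to-gguf.py \
  --input  llama-2-7b-chat.ggmlv3.q4_0.bin \
  --output llama-2-7b-chat.Q4_0.gguf
```

The resulting `.gguf` file can then be loaded by recent llama-cpp-python releases (0.1.79+), which no longer accept GGML models.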
reddiamond changed discussion status to closed