Use in Ollama / LM Studio etc.
#1 opened by Clausss
Is it possible to use this in Ollama / LM Studio / llama.cpp etc., or do I need the Python package?
Thanks for posting the issue. A community user has created an Ollama version of Bonito-v1 (with Mistral as the backbone). I will create one with the Llama-3.1 checkpoint if there is more interest.
nihalnayak changed discussion status to closed
I'm interested. Do you think a version that supports longer input text would be useful?
I’m interested in using a GGUF file with 4-bit quantization on this model with my RTX 4060 Ti (16GB).
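As a rough sanity check that a 4-bit GGUF of an 8B-parameter model fits in 16 GB of VRAM, here is a small back-of-the-envelope sketch. The figures are assumptions for illustration (~8.03B parameters for a Llama-3.1-8B-class model, ~4.8 bits/weight as a typical average for a Q4_K_M quant, plus rough allowances for KV cache and runtime overhead), not measured file sizes.

```python
# Back-of-the-envelope VRAM estimate for a 4-bit GGUF of an 8B model.
# All numbers below are illustrative assumptions, not exact figures.

def weights_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate size of the quantized weights in GiB."""
    return n_params * bits_per_weight / 8 / 1024**3

# ~8.03e9 parameters; Q4_K_M quants average roughly 4.8 bits/weight
# because some tensors are kept at higher precision.
weights = weights_size_gib(8.03e9, 4.8)   # roughly 4.5 GiB
kv_cache = 1.0                            # rough allowance for a few-thousand-token context
overhead = 0.5                            # compute buffers, CUDA context, etc.
total = weights + kv_cache + overhead

print(f"~{total:.1f} GiB estimated")      # well under a 16 GiB RTX 4060 Ti
```

By this estimate the quantized model leaves ample headroom on a 16 GB card, so a Q4_K_M GGUF should run fully GPU-offloaded.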