leo-mistral-hessianai-7b-chat for privateGPT
Hello,
I wonder if I can somehow use this model in my privateGPT setup at home.
Right now privateGPT uses mistral-7b-instruct-v0.2.Q4_K_M.gguf as a model.
Is it possible to use this version instead?
Kind regards, Dom
Yeah, should be possible! You can find a quantized GGUF version here: https://huggingface.co/TheBloke/Leo-Mistral-Hessianai-7B-Chat-GGUF
Hi Björn,
So I got the model and put it in the right folder in my Windows Ubuntu WSL environment via VS Code.
Right now I'm struggling with how to tell privateGPT to rerun poetry so it uses the new version.
I'm following this guide "https://medium.com/@docteur_rs/installing-privategpt-on-wsl-with-gpu-support-5798d763aa31" and it pulls the repo directly from Hugging Face.
How did you learn this AI "programming"? Any suggestions on where to start?
Thanks, Dom
I learned with lots of trial and error :)
I know nothing about the privateGPT project, but if you're having issues with installation, just post an issue on their GitHub and I'm sure someone there can help you out.
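That said, if privateGPT works like other llama.cpp-based setups, the model is probably referenced in a settings file that you edit and then re-run the setup step for. Something like this, as a rough sketch (the section and key names below are guesses based on common privateGPT configs, not verified against your version):

```yaml
# settings.yaml (guessed layout -- check your privateGPT repo's actual file)
local:
  # point these at the Leo model instead of mistral-7b-instruct-v0.2
  llm_hf_repo_id: TheBloke/Leo-Mistral-Hessianai-7B-Chat-GGUF
  llm_hf_model_file: leo-mistral-hessianai-7b-chat.Q4_K_M.gguf  # exact filename may differ -- check the repo's Files tab
```

After editing, re-running the project's setup step through poetry (whatever your version of the guide uses) should make it pick up the new model instead of re-downloading the old one.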