Can you please upload the Synatra-7B-v0.3-RP-AshhLimaRP-Mistral-7B model separately?
The original model in that repository is bundled in with the GGUF files, and I can't load just the original (unquantized) files.
What backend do you use for inference? If you are using Oobabooga's text-generation-webui, I think you should be able to specify the download path like this:
Herman555/Synatra-v0.3-RP-AshhLimaRP-Mistral-7B-GGUF:Synatra-7B-v0.3-RP-AshhLimaRP-Mistral-7B
Alternatively, you could download the model files manually and place them in the models directory of whatever backend you use.
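If it helps, the manual download can also be scripted with the `huggingface-cli` tool from the `huggingface_hub` package. This is just a sketch: the `--local-dir` path is an example, and you would point it at wherever your backend keeps its models.

```shell
# Install the Hugging Face CLI if you don't have it yet
pip install -U "huggingface_hub[cli]"

# Download the repo into a local folder; adjust --local-dir
# to your backend's models directory
huggingface-cli download Herman555/Synatra-v0.3-RP-AshhLimaRP-Mistral-7B-GGUF \
    --local-dir models/Synatra-v0.3-RP-AshhLimaRP-Mistral-7B-GGUF
```

You can also pass an individual filename after the repo ID to grab just one file instead of the whole repo.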
Sorry, I'm kind of new to this. I didn't realize that uploading the original model as a folder would mangle the file names, which is why you couldn't load it. I will fix this now.
Thank you so much. I just wanted to make an exl2 quantization of this model.