Low-quality responses compared to NeuralBeagle14 on KoboldCpp using Q8
I can't get this model to work as it should; for me, it's far from being as good as NeuralBeagle14 for conversation AND roleplay. Did anybody else notice that and manage to find a fix? I'd like to use the model's full potential...
Not the fix you're looking for, but can you try Monarch-7B? I made GGUFs for you: https://huggingface.co/mlabonne/Monarch-7B-GGUF/tree/main
In my tests, NeuralBeagle14 was terrible at conversation and RP but maybe it depends on the use case.
What's the prompt format for AlphaMonarch 7b?
Mistral Instruct.
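In case it helps, here's a minimal sketch of applying the Mistral Instruct template by hand (the function name is just for illustration): the user turn is wrapped in `[INST] ... [/INST]`, preceded by the BOS token `<s>`.

```python
def format_mistral_instruct(user_message: str) -> str:
    # "<s>" is the BOS token; the user turn goes inside [INST] ... [/INST].
    return f"<s>[INST] {user_message} [/INST]"

print(format_mistral_instruct("Hello, who are you?"))
# -> <s>[INST] Hello, who are you? [/INST]
```

Most frontends (KoboldCpp included) can apply this for you if you pick the Mistral preset instead of a generic one, which is worth checking if responses seem off.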