bartowski posted an update Oct 16
In regards to the latest mistral model and GGUFs for it:

Yes, they may be subpar and may require changes to llama.cpp to properly support the interleaved sliding-window attention
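For anyone unfamiliar with the term: interleaved sliding-window attention restricts some layers to attend only to a recent window of tokens, while other layers keep full causal attention. Here's a minimal sketch of the masking idea in NumPy — this is just an illustration of the concept, not llama.cpp's actual implementation, and the alternating layer pattern shown is hypothetical (the real pattern is model-specific):

```python
import numpy as np

def attention_mask(seq_len, window=None):
    """Boolean causal attention mask; if `window` is set, each query
    may only attend to the last `window` keys (sliding-window attention)."""
    q = np.arange(seq_len)[:, None]  # query positions
    k = np.arange(seq_len)[None, :]  # key positions
    mask = k <= q                    # causal: no attending to the future
    if window is not None:
        mask &= (q - k) < window     # restrict to the recent window
    return mask

# Interleaved pattern (illustrative): alternate full attention and a
# 4-token sliding window across layers.
layer_masks = [
    attention_mask(8, window=None if i % 2 == 0 else 4)
    for i in range(4)
]
```

A runtime that doesn't implement the window restriction will still generate tokens (it just attends more broadly than the model was trained for), which is one reason a conversion can "work" while still being subtly off.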

Yes, I got excited when a conversion worked and released them ASAP

That said, generation seems to work right now and seems to mimic the output from spaces that are running the original model

I have appended -TEST to the model names in an attempt to indicate that they are not final or perfect, but if people still feel misled and that it's not the right thing to do, please post (civilly) below your thoughts. I will strongly consider pulling the conversions if that's what people think is best. After all, that's what I'm here for, in service to you all!

I see nothing wrong with it. It's been clearly marked.

You marked it as TEST. Not sure why people are mad over it lol.


The TEST mark was added after the initial upload, once people pointed it out :) Glad it's a good label though.

As a roleplayer I'm very happy with the results so far (Q8 in ooba's TGWUI). Very impressive writing for a "stock" 8B model, and it has not refused anything I have thrown at it.

Could you check this one out? Found in the wild with an interesting claim.

https://huggingface.co/noneUsername/TouchNight-Ministral-8B-Instruct-2410-HF-W8A8-Dynamic-Per-Token

It is worth noting that, compared with the prince-canuma version, this version is smaller after quantization and its claimed accuracy is one percentage point higher.

I'd think adding 'TEST' was more than enough.