Mistral-Nemo-Instruct-2407-exl2
A friendly reminder: lower the max_seq_len setting in text-generation-webui; otherwise you may hit a CUDA out-of-memory error.
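To see why max_seq_len matters, here is a rough back-of-the-envelope KV-cache estimate. The layer count, KV-head count, and head dimension below are illustrative assumptions for a Mistral-Nemo-class model, not official specs; actual VRAM use also includes the weights and activation buffers.

```python
# Rough KV-cache size estimate: why a large max_seq_len can exhaust VRAM.
# n_layers/n_kv_heads/head_dim are assumed illustrative values, not official specs.
def kv_cache_bytes(seq_len, n_layers=40, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
    # 2x for keys and values; FP16 cache = 2 bytes per element.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

for ctx in (8192, 32768, 131072):
    gib = kv_cache_bytes(ctx) / 2**30
    print(f"max_seq_len={ctx:>6}: ~{gib:.2f} GiB KV cache")
```

Under these assumptions, going from an 8K to a 128K context grows the cache sixteen-fold, which is usually what tips a card that comfortably holds the 8.0bpw weights into an out-of-memory error.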
This is an 8.0bpw quantized version of mistralai/Mistral-Nemo-Instruct-2407, made with exllamav2.
This model is available under the Apache 2.0 License.
Join our Discord server here.
Eager to buy me a $2 cup of coffee or iced tea? Sure, here is the link: https://ko-fi.com/drnicefellow. Please add a note saying which one you'd like me to drink.