GGUF-Imatrix quantizations for l3utterfly/mistral-7b-v0.1-layla-v4.
All credit belongs to the author.
If you like these, also check out FantasiaFoundry's GGUF-Quantization-Script.
What does "Imatrix" mean?
It stands for Importance Matrix, a technique used to improve the quality of quantized models. [1]
The Imatrix is calculated from calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which reduces the loss of model quality, especially when the calibration data is diverse. [2]
For the --imatrix data, the included imatrix.dat was used.
Using llama.cpp-b2321:
Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)
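For reference, here is a minimal command-line sketch of that pipeline, assuming llama.cpp b2321 built locally; the model directory, output filenames, and calibration file below are placeholders (the actual imatrix.dat used for these quants is included in this repo):

```sh
# 1) Convert the HF model to an F16 GGUF.
python3 convert.py ./mistral-7b-v0.1-layla-v4 \
    --outtype f16 --outfile layla-v4-f16.gguf

# 2) Calculate the importance matrix from calibration data.
./imatrix -m layla-v4-f16.gguf -f calibration-data.txt -o imatrix.dat

# 3) Quantize using the importance matrix (IQ3_S shown as an example).
./quantize --imatrix imatrix.dat layla-v4-f16.gguf layla-v4-IQ3_S.gguf IQ3_S
```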
The new IQ3_S quant option has been shown to perform better than the old Q3_K_S, so I added it instead of the latter. It is only supported in koboldcpp-1.59.1 or higher.
If you want any specific quantization to be added, feel free to ask.
Model image:
Original model information:
Model Card
Model Description
Mistral 7B fine-tuned on the OpenHermes 2.5 dataset, optimised for multi-turn conversation and character impersonation.
The dataset has been pre-processed by doing the following:
- removed all refusals
- removed any mention of AI assistants
- split any multi-turn dialogue generated in the dataset into multi-turn conversation records
- added NSFW generated conversations from the Teatime dataset
- Developed by: l3utterfly
- Funded by: Layla Network
- Model type: Mistral
- Language(s) (NLP): English
- License: Apache-2.0
- Finetuned from model: Mistral 7B
Uses
Base model used by Layla - the offline personal assistant: https://www.layla-network.ai
Help & support: https://discord.gg/x546YJ6nYC
Prompt:
USER:
ASSISTANT:
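As an illustration, here is a minimal sketch of using that template with one of these quants through llama.cpp's main binary; the model filename and prompt are placeholders:

```sh
# Run a single completion with the USER:/ASSISTANT: template.
# -e interprets the \n escape in the prompt string.
./main -m layla-v4-IQ3_S.gguf -e -n 256 \
    -p "USER: Hi! Tell me about yourself.\nASSISTANT:"
```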