---
library_name: transformers
tags:
- mistral
- quantized
- text-generation-inference
- merge
pipeline_tag: text-generation
inference: false
license: cc-by-nc-4.0
---
# **GGUF-Imatrix quantizations for [SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE](https://huggingface.co/SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE/).**

# What does "Imatrix" mean?

It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.

The **Imatrix** is calculated from calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance. One of the benefits of using an Imatrix is that it can lead to better quantized-model quality, especially when the calibration data is diverse.

More information: [[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)

For the `--imatrix` data, `imatrix-Loyal-Toppy-Bruins-Maid-7B-DARE-F16.dat` was used.

`Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)`

Using [llama.cpp](https://github.com/ggerganov/llama.cpp/)-[b2280](https://github.com/ggerganov/llama.cpp/releases/tag/b2280).

The new **IQ3_S** quant option has been shown to perform better than the old Q3_K_S, so I added it instead of the latter. It is only supported in `koboldcpp-1.59.1` or higher.

*If you want any specific quantization to be added, feel free to ask.*

All credits belong to the [creator](https://huggingface.co/SanjiWatsuki/).

# Original model information:

![image/png](https://huggingface.co/SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE/resolve/main/bruins-maid.png)

## Description

This repository hosts FP16 files for **Loyal-Toppy-Bruins-Maid-7B**, a 7B model aimed at engaging RP with solid character card adherence while being a smart cookie at the same time.

Its foundation is [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha), notable for its performance in the LMSYS Chatbot Arena, even surpassing GPT-3.5-Turbo-1106. The model incorporates [rwitz/go-bruins-v2](https://huggingface.co/rwitz/go-bruins-v2), a [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) derivative with Alpaca RP data tuning.

The other foundational model is [chargoddard/loyal-piano-m7](https://huggingface.co/chargoddard/loyal-piano-m7), chosen for its strong RP performance and Alpaca-format training, with a diverse dataset that includes PIPPA, rpbuild, and LimaRP.

[Undi95/Toppy-M-7B](https://huggingface.co/Undi95/Toppy-M-7B), known for its creativity, brings in useful RP data from various sources. It ranks first among 7B models on [OpenRouter](https://openrouter.ai/rankings) for a good reason.

[NeverSleep/Noromaid-7b-v0.1.1](https://huggingface.co/NeverSleep/Noromaid-7b-v0.1.1), a Mistral finetune with unique RP data not present in the other models, was also added for its distinctive RP dataset and its reputation as a well-regarded RP model.

The models were merged using the DARE ties method, with a targeted absolute weight of 1.2 and high density (0.5-0.6), as discussed in the [MergeKit GitHub repo](https://github.com/cg123/mergekit/issues/26).

Currently, this model ranks at the top of my personal RP unit test benchmark and scored a very solid 20 on [lilblam's LLM Logic Test](https://docs.google.com/spreadsheets/d/1NgHDxbVWJFolq8bLvLkuPWKC7i_R6I6W/edit#gid=1278290632).
My first impressions of it for RPing are very good but, admittedly, this model came out of the oven today, so I haven't played with it too much 😊

### The sauce

```
models: # Top-Loyal-Bruins-Maid-DARE-7B_v2
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: rwitz/go-bruins-v2 # MetamathCybertronStarling base
    parameters:
      weight: 0.5
      density: 0.6
  - model: chargoddard/loyal-piano-m7 # Pull in some PIPPA/LimaRP/Orca/rpguild
    parameters:
      weight: 0.5
      density: 0.6
  - model: Undi95/Toppy-M-7B
    parameters:
      weight: 0.1
      density: 0.5
  - model: NeverSleep/Noromaid-7b-v0.1.1
    parameters:
      weight: 0.1
      density: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

## Prompt template: Custom format, or Alpaca

### Custom format:

I found the best SillyTavern results from using the Noromaid template.

SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).

Otherwise, I tried to ensure that all of the underlying merged models were Alpaca-favored.

### Alpaca:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```
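As a quick usage reference, below is a minimal sketch of loading one of these quants with the `llama-cpp-python` bindings and prompting it in the Alpaca format shown above. The GGUF filename, context size, and sampling settings are illustrative placeholders, not files or values shipped with this repo.

```python
# Minimal sketch, assuming `pip install llama-cpp-python`.
# The GGUF filename, context size, and sampling values below are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="Loyal-Toppy-Bruins-Maid-7B-DARE-IQ3_S-imat.gguf",  # hypothetical local filename
    n_ctx=4096,  # context window; adjust to your hardware
)

# Build an Alpaca-style prompt matching the template above.
instruction = "Write a short in-character greeting for a cheerful tavern keeper."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

output = llm(prompt, max_tokens=256, temperature=0.8, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```

The same files also load in `koboldcpp` (1.59.1 or newer if you pick the IQ3_S quant), as noted earlier in the card.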