GGUF-IQ-Imatrix quants for ABX-AI/Infinitely-Kunodiculous-9B.
I'd like to extend my gratitude to @Lewdiculous for inspiring me to start learning about merging and Imatrix quantization and for answering my questions, and to @Nitral-AI for help with merging questions as well.
Why Importance Matrix?
Based on my testing, an importance matrix noticeably improves the output quality of "IQ"-type quantizations, where the compression becomes quite heavy. The imatrix is computed in a calibration pass over a provided dataset, and testing has shown that semi-randomized calibration data can help preserve the more important segments as the compression is applied.
Related discussions on GitHub: [1] [2]
The imatrix.txt file that I used can be found here; its data is general and semi-random.
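For illustration, here is a minimal sketch of the usual llama.cpp workflow for producing such quants. The file names and the IQ3_XXS target are placeholders, not the exact commands used for this model:

# Calibration pass: compute the importance matrix from the dataset (paths are placeholders)
./imatrix -m model-f16.gguf -f imatrix.txt -o imatrix.dat

# Quantize to a heavily compressed IQ type, guided by the importance matrix
./quantize --imatrix imatrix.dat model-f16.gguf model-IQ3_XXS.gguf IQ3_XXS

The same imatrix.dat can generally be reused for each IQ variant of a model, since the calibration depends only on the full-precision weights and the dataset.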
Description
This model is intended for role-playing and storywriting purposes.
This is the very first merge I have ever tried. It seems to be working, or at the very least does not appear to be broken :) The main purpose of this model is to serve as an experiment and help me learn how to do merges and quants.
GGUF/IQ/Imatrix: https://huggingface.co/ABX-AI/Infinitely-Kunodiculous-9B-GGUF-IQ-Imatrix
Infinitely-Kunodiculous-9B
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the passthrough merge method, which concatenates the selected layer slices end-to-end rather than averaging weights; here, two 20-layer slices stack into a 40-layer, roughly 9B-parameter model.
Models Merged
The following models were included in the merge:
- Nitral-AI/Infinitely-Laydiculous-9B
- SanjiWatsuki/Kunoichi-DPO-v2-7B
Configuration
The following YAML configuration was used to produce this model:
slices:
  - sources:
      - model: Nitral-AI/Infinitely-Laydiculous-9B
        layer_range: [0, 20]
  - sources:
      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
        layer_range: [12, 32]
merge_method: passthrough
dtype: float16
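For reference, a configuration like the one above is typically applied with mergekit's command-line entry point; a minimal sketch, assuming the YAML is saved as config.yaml and the output directory name is a placeholder:

# Install mergekit, then run the merge described by the YAML above
pip install mergekit
mergekit-yaml config.yaml ./Infinitely-Kunodiculous-9B --copy-tokenizer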