---
base_model: grimjim/kukulemon-7B
library_name: transformers
quanted_by: grimjim
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# kukulemon-7B-8.0bpw-h8_exl2
This is an 8.0bpw h8 exl2 quant of [grimjim/kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B). In the underlying model, a merger of two similar models with strong reasoning (hopefully resulting in "dense" encoding of said reasoning) was itself merged with a model targeting roleplay.
I've tested this quant with ChatML prompts at temperature=1.1 and minP=0.03; the model itself also supports Alpaca format prompts. The model claims a context length of 32K, but in informal testing it lost coherence after 8K. For maximum coherence, I prefer to stick with 8.0bpw h8 exl2 or Q8_0 GGUF.
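As a quick illustration of those settings, here is a minimal loading-and-generation sketch using the exllamav2 library. It is not part of the original card: the local path, the prompt text, and the exact class/method signatures are assumptions that may need adjusting to your installed exllamav2 version.
```python
# Minimal sketch (assumptions noted above): load this exl2 quant with exllamav2
# and generate with the sampler settings mentioned in this card.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator, ExLlamaV2Sampler

model_dir = "kukulemon-7B-8.0bpw-h8_exl2"  # placeholder: local download of this repo

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # cache is allocated as layers are loaded
model.load_autosplit(cache)                # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

# Sampler settings used in the informal testing described above.
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 1.1
settings.min_p = 0.03

# ChatML-style prompt, as recommended above (content is just an example).
prompt = (
    "<|im_start|>system\nYou are a helpful roleplay assistant.<|im_end|>\n"
    "<|im_start|>user\nIntroduce yourself in two sentences.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

output = generator.generate(
    prompt=prompt,
    max_new_tokens=256,
    gen_settings=settings,
    add_bos=True,
)
print(output)
```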
Alternative downloads:
- [iMatrix GGUF quants courtesy of Lewdiculous](https://huggingface.co/Lewdiculous/kukulemon-7B-GGUF-IQ-Imatrix)
- [fp16 safetensors](https://huggingface.co/grimjim/kukulemon-7B)
- [GGUF quants](https://huggingface.co/grimjim/kukulemon-7B-GGUF)
The underlying model is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
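For intuition, SLERP interpolates along the great circle between the two models' (flattened) parameter tensors rather than along the straight line between them. The sketch below is purely illustrative and is not mergekit's implementation; the function name and the linear-interpolation fallback for near-parallel tensors are my own choices.
```python
# Illustrative SLERP between two weight tensors (NOT mergekit's code).
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate from tensor `a` (t=0) to tensor `b` (t=1)."""
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    # Angle between the two parameter vectors.
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return torch.lerp(a_flat, b_flat, t).reshape(a.shape).to(a.dtype)
    sin_omega = torch.sin(omega)
    coeff_a = torch.sin((1.0 - t) * omega) / sin_omega
    coeff_b = torch.sin(t * omega) / sin_omega
    return (coeff_a * a_flat + coeff_b * b_flat).reshape(a.shape).to(a.dtype)
```
In the configuration further below, the per-filter `t` lists sweep the interpolation weight over the layer stack for the attention and MLP tensors, with `t: 0.5` as the fallback for all other tensors.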
### Models Merged
The following models were included in the merge:
* [grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B](https://huggingface.co/grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B)
* [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B
        layer_range: [0, 32]
      - model: KatyTheCutie/LemonadeRP-4.5.3
        layer_range: [0, 32]
# or, the equivalent models: syntax:
# models:
merge_method: slerp
base_model: KatyTheCutie/LemonadeRP-4.5.3
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
```
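To reproduce the merge from that file, the config can be passed to mergekit either via its `mergekit-yaml` CLI or programmatically. Below is a hedged sketch of the programmatic route; the config filename, output path, and option values are placeholders, and the exact `MergeOptions` fields may vary by mergekit version.
```python
# Sketch (assumptions noted above): run the YAML config through mergekit's Python API.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("kukulemon-7B.yaml", "r", encoding="utf-8") as fp:  # the config above, saved locally
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./kukulemon-7B",           # placeholder: where the merged fp16 weights land
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # run tensor math on GPU if present
        copy_tokenizer=True,             # carry the base model's tokenizer into the output
        lazy_unpickle=True,              # reduce peak memory while reading shards
    ),
)
```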