PsyMedLewd. A TIES merge of two of my favourite models for sci-fi stories:

- [Undi95/MXLewd-L2-20B](https://huggingface.co/Undi95/MXLewd-L2-20B)
- [Undi95/PsyMedRP-v1-20B](https://huggingface.co/Undi95/PsyMedRP-v1-20B)

Good, but superseded by [Elfrino/PsyMedLewdREMM-20B-TIES-Q5_K_M-GGUF](https://huggingface.co/Elfrino/PsyMedLewdREMM-20B-TIES-Q5_K_M-GGUF).

Smart, with unique and solid prose and that lovely sprinkle of sci-fi terminology that PsyMedRP loves to throw in. Still in testing...

![Warning: Bad alien gangster ahead!](https://huggingface.co/Elfrino/PsyMedLewd-20B-TIES-Q5_K_M-GGUF/resolve/main/kingpin1.jpg)

RECOMMENDED SETTINGS FOR ALL PsyMedLewd VERSIONS (based on KoboldCPP):

- Temperature: 1.3
- Max Ctx. Tokens: 4096
- Top p Sampling: 0.99
- Repetition Penalty: 1.09
- Amount to Gen.: 512

Prompt template: Alpaca or ChatML

(A sketch of passing these settings through KoboldCPP's API is at the end of this card.)

---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Elfrino/PsyMedLewd_v4](https://huggingface.co/Elfrino/PsyMedLewd_v4) as the base.

### Models Merged

The following models were included in the merge:

* [Undi95/MXLewd-L2-20B](https://huggingface.co/Undi95/MXLewd-L2-20B)
* [Undi95/PsyMedRP-v1-20B](https://huggingface.co/Undi95/PsyMedRP-v1-20B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Undi95/PsyMedRP-v1-20B
    parameters:
      density: 0.6
      weight: 0.6
  - model: Undi95/MXLewd-L2-20B
    parameters:
      density: 0.4
      weight: 0.4
merge_method: ties
base_model: Elfrino/PsyMedLewd_v4
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
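To reproduce the merge, the usual route is mergekit's CLI (`mergekit-yaml config.yaml ./output-dir`). Below is a minimal Python sketch of the same step, assuming mergekit's documented Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`); the config path and output directory are placeholders.

```python
# Minimal sketch: run the TIES merge above from Python.
# Assumes the YAML config is saved as config.yaml; paths are placeholders.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./PsyMedLewd-20B-TIES",  # placeholder output directory
    options=MergeOptions(
        cuda=True,            # merge on GPU if one is available
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
    ),
)
```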
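And the promised sketch for the recommended settings: a minimal example of sending them to a running KoboldCPP instance through its KoboldAI-compatible generate endpoint (default port 5001). The prompt is a throwaway Alpaca-style example, and the field names follow the KoboldAI API, so double-check them against your KoboldCPP version.

```python
# Minimal sketch: send the recommended sampler settings to a local
# KoboldCPP server (default address; adjust if yours differs).
import requests

API_URL = "http://localhost:5001/api/v1/generate"

payload = {
    # Throwaway Alpaca-style prompt, per the recommended template.
    "prompt": "### Instruction:\nWrite a short sci-fi scene.\n\n### Response:\n",
    "max_context_length": 4096,  # Max Ctx. Tokens
    "max_length": 512,           # Amount to Gen.
    "temperature": 1.3,
    "top_p": 0.99,
    "rep_pen": 1.09,             # Repetition Penalty
}

resp = requests.post(API_URL, json=payload)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```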