---
base_model:
- aks1s/13Aks-18
- starnet/19star03
- OwOpeepeepoopoo/ZZZZZsubmission7
- OwOpeepeepoopoo/ZZZZZsubmission5
- starnet/15star03
- irusl/05Ir-4
library_name: transformers
tags:
- mergekit
- merge
---
# output_2

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [irusl/05Ir-4](https://huggingface.co/irusl/05Ir-4) as the base model.

### Models Merged

The following models were included in the merge:
* [aks1s/13Aks-18](https://huggingface.co/aks1s/13Aks-18)
* [starnet/19star03](https://huggingface.co/starnet/19star03)
* [OwOpeepeepoopoo/ZZZZZsubmission7](https://huggingface.co/OwOpeepeepoopoo/ZZZZZsubmission7)
* [OwOpeepeepoopoo/ZZZZZsubmission5](https://huggingface.co/OwOpeepeepoopoo/ZZZZZsubmission5)
* [starnet/15star03](https://huggingface.co/starnet/15star03)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: irusl/05Ir-4
    parameters:
      density: [1, 0.7, 0.1] # density gradient
      weight: 1.0
  - model: starnet/19star03
    parameters:
      density: 0.5
      weight: [0, 0.3, 0.7, 1] # weight gradient
  - model: starnet/15star03
    parameters:
      density: 0.33
      weight:
        - filter: mlp
          value: 0.5
        - value: 0
  - model: aks1s/13Aks-18
    parameters:
      density: 0.33
      weight:
        - filter: mlp
          value: 0.5
        - value: 0
  - model: OwOpeepeepoopoo/ZZZZZsubmission7
    parameters:
      density: 0.33
      weight:
        - filter: mlp
          value: 0.5
        - value: 0
  - model: OwOpeepeepoopoo/ZZZZZsubmission5
    parameters:
      density: 0.33
      weight:
        - filter: mlp
          value: 0.5
        - value: 0
merge_method: ties
base_model: irusl/05Ir-4
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
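In mergekit configurations, list-valued parameters such as `density: [1, 0.7, 0.1]` or `weight: [0, 0.3, 0.7, 1]` are gradients spread across the model's layers rather than single constants. The sketch below illustrates one plausible interpretation, piecewise-linear interpolation over evenly spaced anchor points; the exact interpolation scheme is an assumption here, and `gradient_value` is a hypothetical helper, not part of mergekit's API.

```python
def gradient_value(points, layer_idx, num_layers):
    """Interpolate a per-layer parameter value from a gradient list.

    `points` are anchor values assumed to be evenly spaced from the
    first layer to the last; intermediate layers get a piecewise-linear
    blend of the two nearest anchors.
    """
    if num_layers <= 1:
        return float(points[0])
    # Normalized depth of this layer in [0, 1].
    t = layer_idx / (num_layers - 1)
    # Map depth onto the anchor index range and interpolate.
    scaled = t * (len(points) - 1)
    lo = int(scaled)
    hi = min(lo + 1, len(points) - 1)
    frac = scaled - lo
    return points[lo] + (points[hi] - points[lo]) * frac


if __name__ == "__main__":
    # Density gradient from the config above: 1 at the first layer,
    # tapering to 0.1 at the last.
    for layer in (0, 15, 31):
        print(layer, round(gradient_value([1, 0.7, 0.1], layer, 32), 3))
```

Under this reading, the `irusl/05Ir-4` tensors are kept almost entirely in early layers and heavily sparsified near the output, while `starnet/19star03` contributes with increasing weight toward the final layers.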