---
base_model: []
tags:
- mergekit
- merge
---
# TestBunny-7B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

With the updated config file and rope_alpha=2.5, the model should handle contexts up to 16384 tokens (it formerly started devolving past 4k). It may be unstable beyond that; I have so far been unable to keep it coherent all the way out to 32k.

## Merge Details
### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with D:/ModelMerge/models/Mistral-7B-v0.1 as the base. Task arithmetic computes a task vector for each fine-tuned model (its weights minus the base weights) and adds the weighted vectors back onto the base.

### Models Merged

The following models were included in the merge:
* D:\ModelMerge\merges\TestBunny-7B\bunny-a
* D:/ModelMerge/models/LimaRP-Mistral-7B-v0.1

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: D:/ModelMerge/models/Mistral-7B-v0.1
dtype: float16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 32]
    model: D:\ModelMerge\merges\TestBunny-7B\bunny-a
    parameters:
      weight: 1.0
  - layer_range: [0, 32]
    model: D:/ModelMerge/models/LimaRP-Mistral-7B-v0.1
    parameters:
      weight: 0.08
  - layer_range: [0, 32]
    model: D:/ModelMerge/models/Mistral-7B-v0.1
```
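
## Usage

rope_alpha is a loader-side NTK-alpha setting (e.g., in exllama-based backends), not something baked into the merged weights. As a rough sketch, not part of the original recipe, a similar effect can be approximated in plain transformers by scaling rope_theta with the common NTK mapping theta' = theta * alpha^(dim / (dim - 2)); the local model path below is a placeholder.

```python
# Unofficial sketch: load the merge with transformers and approximate
# rope_alpha=2.5 by raising rope_theta (NTK-aware scaling). The alpha ->
# rope_theta mapping is an assumption, not something specified by this card.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_path = "TestBunny-7B"  # placeholder: point this at the merged weights

config = AutoConfig.from_pretrained(model_path)
head_dim = config.hidden_size // config.num_attention_heads  # 128 for Mistral-7B
alpha = 2.5
config.rope_theta = config.rope_theta * alpha ** (head_dim / (head_dim - 2))
config.max_position_embeddings = 16384  # the extended context claimed above

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, config=config, torch_dtype="auto"
)
```

Generation past 16k tokens is untested here; per the notes above, coherence degrades well before 32k.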