---
base_model:
- Nitral-AI/Poppy_Porpoise-0.72-L3-8B
- Sao10K/L3-8B-Stheno-v3.2
library_name: transformers
tags:
- mergekit
- merge
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# llama-3-sthenic-porpoise-v1-8B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

This model is a straightforward SLERP (spherical linear interpolation) merge of two popular models. Built with Meta Llama 3.

## Merge Details

### Merge Method

This model was merged using the SLERP merge method, with [Nitral-AI/Poppy_Porpoise-0.72-L3-8B](https://huggingface.co/Nitral-AI/Poppy_Porpoise-0.72-L3-8B) as the base model and a uniform interpolation factor of t = 0.5 across all 32 layers.

### Models Merged

The following models were included in the merge:
* [Nitral-AI/Poppy_Porpoise-0.72-L3-8B](https://huggingface.co/Nitral-AI/Poppy_Porpoise-0.72-L3-8B)
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Nitral-AI/Poppy_Porpoise-0.72-L3-8B
        layer_range: [0, 32]
      - model: Sao10K/L3-8B-Stheno-v3.2
        layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/Poppy_Porpoise-0.72-L3-8B
parameters:
  t:
    - value: 0.5
dtype: bfloat16
```
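Assuming the configuration above is saved locally as `config.yaml`, a merge like this can typically be reproduced with mergekit's `mergekit-yaml` CLI or through its Python API. The sketch below uses the Python API (`MergeConfiguration`, `MergeOptions`, `run_merge`) as documented in the mergekit README at the time of writing; the config filename and output directory are placeholders, and the exact option set may differ between mergekit versions.

```python
# Minimal sketch of reproducing the merge via mergekit's Python API.
# Assumes mergekit is installed (pip install mergekit) and that the YAML
# configuration above is saved as config.yaml; paths are placeholders.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML merge configuration into mergekit's config object.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the SLERP merge and write the merged model to the output directory.
run_merge(
    merge_config,
    out_path="./llama-3-sthenic-porpoise-v1-8B",  # placeholder output path
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The equivalent command-line invocation would be `mergekit-yaml config.yaml ./llama-3-sthenic-porpoise-v1-8B`.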