---
base_model:
- beomi/Llama-3-Open-Ko-8B
- Upstage/SOLAR-10.7B-Instruct-v1.0
library_name: transformers
tags:
- mergekit
- merge
---
# new-merged-model

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method, with [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) as the base model.

### Models Merged

The following models were included in the merge:

* [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B)
* [Upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/Upstage/SOLAR-10.7B-Instruct-v1.0)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
# Alternative configurations kept for reference (commented out; not used for this merge):
# slices:
#   - sources:
#       - model: beomi/Llama-3-Open-Ko-8B
#         layer_range: [0, 32]
#       - model: upstage/SOLAR-10.7B-Instruct-v1.0
#         layer_range: [0, 32]
# merge_method: slerp
# base_model: upstage/SOLAR-10.7B-Instruct-v1.0
# parameters:
#   t:
#     - filter: self_attn
#       value: [0, 0.5, 0.3, 0.7, 1]
#     - filter: mlp
#       value: [1, 0.5, 0.7, 0.3, 0]
#     - value: 0.5  # fallback for rest of tensors
# dtype: float16
# name: upstage-slerp
#
# models:
#   - model: upstage-slerp
#     parameters:
#       density: [1, 0.7, 0.1]  # density gradient
#       weight: 1.0
#   - model: chihoonlee10/T3Q-ko-solar-dpo-v6.0
#     parameters:
#       density: 0.33
#       weight:
#         - filter: mlp
#           value: 0.5
#         - value: 0
# merge_method: ties
# base_model: meta-llama/Meta-Llama-3-8B-Instruct
# parameters:
#   normalize: true
#   int8_mask: true
# dtype: float16
# name: llama3-slerp-ties

slices:
  - sources:
      - model: beomi/Llama-3-Open-Ko-8B
        layer_range: [0, 32]
      - model: Upstage/SOLAR-10.7B-Instruct-v1.0
        layer_range: [0, 32]
merge_method: slerp
base_model: beomi/Llama-3-Open-Ko-8B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```
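In this config, the `t` schedule controls the interpolation per filtered tensor group: roughly, values near 0 keep tensors closer to the base model (beomi/Llama-3-Open-Ko-8B) and values near 1 closer to the other model, spread as a gradient across the layer range. A config like this is typically run with mergekit's `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yaml ./new-merged-model`). The sketch below is a minimal, illustrative example of loading the result with `transformers`; it assumes the merged weights were written to a local `./new-merged-model` directory (a hypothetical path, not an official repo name), and uses `float16` to mirror the merge config's dtype.

```python
# Minimal sketch: load the merged checkpoint and generate text.
# Assumes the merge output was saved to ./new-merged-model (illustrative path).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./new-merged-model"  # hypothetical local output directory of mergekit-yaml

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # matches the dtype used in the merge config
    device_map="auto",
)

# Korean prompt, since the base model is a Korean Llama-3 variant
prompt = "안녕하세요, 간단히 자기소개를 해주세요."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```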