---
base_model:
- rirv938/gpt2_sequence_classification_base
- rirv938/reward_gpt2_preference_24m_e2
- ChaiML/reward_models_100_170000000_cp_332032
library_name: transformers
tags:
- mergekit
- merge
---

# gpt2_ties_merge_ab_with_classic_e2_d99

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [rirv938/gpt2_sequence_classification_base](https://huggingface.co/rirv938/gpt2_sequence_classification_base) as the base model.

### Models Merged

The following models were included in the merge:
* [rirv938/reward_gpt2_preference_24m_e2](https://huggingface.co/rirv938/reward_gpt2_preference_24m_e2)
* [ChaiML/reward_models_100_170000000_cp_332032](https://huggingface.co/ChaiML/reward_models_100_170000000_cp_332032)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: rirv938/gpt2_sequence_classification_base
    # no parameters necessary for base model
  - model: ChaiML/reward_models_100_170000000_cp_332032
    parameters:
      density: 0.9
      weight: 0.5
  - model: rirv938/reward_gpt2_preference_24m_e2
    parameters:
      density: 0.9
      weight: 0.5
merge_method: ties
base_model: rirv938/gpt2_sequence_classification_base
parameters:
  normalize: true
dtype: float16
```
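
### Usage

The merge can be reproduced by saving the configuration above to a file and running mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yaml ./merged`). Since the result is a GPT-2 model with a sequence-classification (reward) head, it can be loaded with the standard `transformers` API. The snippet below is a minimal sketch, not part of the original card: the repo id is an assumption matching this card's title, and the final line assumes a single-label reward head.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical repo id, assumed to match this card's title.
MODEL_ID = "rirv938/gpt2_ties_merge_ab_with_classic_e2_d99"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

# GPT-2 defines no pad token; reuse EOS so batched inputs can be padded.
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id

text = "User: Hi there!\nBot: Hello! How can I help you today?"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, num_labels)

# Assuming a single-label head, the raw logit serves as the reward score.
print(logits.squeeze().item())
```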