---
base_model:
- bunnycore/Gemma2-2B-Chaos
- bunnycore/Gemma2-2b-Evol-lora
- SaisExperiments/Gemma-2-2B-Stheno-Filtered
- bunnycore/Gemma2-2b-Evol-lora
- SaisExperiments/Gemma-2-2B-Opus-Instruct
- bunnycore/Gemma2-2b-Evol-lora
- MinnieMin/gemma-2-2b-it-ThinkLink
- bunnycore/Gemma2-2b-code-lora
- IlyaGusev/gemma-2-2b-it-abliterated
- bunnycore/Gemma2-2B-mixed
- bunnycore/Gemma2-2b-Evol-lora
- RyanYr/gemma-2-2b-it_CoT-it_SFT
- IlyaGusev/gemma-2-2b-it-abliterated
- bunnycore/Gemma2-2b-code-lora
library_name: transformers
tags:
- mergekit
- merge
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Gemma-2-2b-TitanFusion-GGUF

This is a quantized version of [bunnycore/Gemma-2-2b-TitanFusion](https://huggingface.co/bunnycore/Gemma-2-2b-TitanFusion), created using llama.cpp.

# Original Model Card

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [IlyaGusev/gemma-2-2b-it-abliterated](https://huggingface.co/IlyaGusev/gemma-2-2b-it-abliterated) as the base.
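Model Stock computes, per layer, an interpolation ratio between the base weights and the average of the fine-tuned weights, derived from the angles between the fine-tuned models' task vectors. The following is a minimal NumPy sketch of that idea for a single flattened weight tensor; it is an illustration of the paper's formula, not mergekit's actual implementation, and the function name is hypothetical.

```python
import numpy as np

def model_stock_merge(base, finetuned, eps=1e-8):
    """Sketch of Model Stock interpolation for one layer's flattened weights.

    base:      pretrained weight vector, shape (d,)
    finetuned: list of k fine-tuned weight vectors, each shape (d,)
    """
    k = len(finetuned)
    # Task vectors: each fine-tuned model's offset from the base.
    deltas = [w - base for w in finetuned]
    # Estimate cos(theta) as the mean pairwise cosine between task vectors.
    cosines = []
    for i in range(k):
        for j in range(i + 1, k):
            num = float(np.dot(deltas[i], deltas[j]))
            den = float(np.linalg.norm(deltas[i]) * np.linalg.norm(deltas[j])) + eps
            cosines.append(num / den)
    cos_t = float(np.mean(cosines))
    # Interpolation ratio from the paper: t = k*cos / ((k-1)*cos + 1).
    t = k * cos_t / ((k - 1) * cos_t + 1)
    w_avg = np.mean(finetuned, axis=0)
    # Merged weights lie between the fine-tuned average and the base.
    return t * w_avg + (1 - t) * base
```

When the fine-tuned models agree (cos θ → 1), t → 1 and the merge approaches the plain average; when their task vectors are orthogonal (cos θ → 0), t → 0 and the merge falls back to the base weights.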
### Models Merged

The following models were included in the merge:

* [bunnycore/Gemma2-2B-Chaos](https://huggingface.co/bunnycore/Gemma2-2B-Chaos) + [bunnycore/Gemma2-2b-Evol-lora](https://huggingface.co/bunnycore/Gemma2-2b-Evol-lora)
* [SaisExperiments/Gemma-2-2B-Stheno-Filtered](https://huggingface.co/SaisExperiments/Gemma-2-2B-Stheno-Filtered) + [bunnycore/Gemma2-2b-Evol-lora](https://huggingface.co/bunnycore/Gemma2-2b-Evol-lora)
* [SaisExperiments/Gemma-2-2B-Opus-Instruct](https://huggingface.co/SaisExperiments/Gemma-2-2B-Opus-Instruct) + [bunnycore/Gemma2-2b-Evol-lora](https://huggingface.co/bunnycore/Gemma2-2b-Evol-lora)
* [MinnieMin/gemma-2-2b-it-ThinkLink](https://huggingface.co/MinnieMin/gemma-2-2b-it-ThinkLink) + [bunnycore/Gemma2-2b-code-lora](https://huggingface.co/bunnycore/Gemma2-2b-code-lora)
* [bunnycore/Gemma2-2B-mixed](https://huggingface.co/bunnycore/Gemma2-2B-mixed) + [bunnycore/Gemma2-2b-Evol-lora](https://huggingface.co/bunnycore/Gemma2-2b-Evol-lora)
* [RyanYr/gemma-2-2b-it_CoT-it_SFT](https://huggingface.co/RyanYr/gemma-2-2b-it_CoT-it_SFT)
* [IlyaGusev/gemma-2-2b-it-abliterated](https://huggingface.co/IlyaGusev/gemma-2-2b-it-abliterated) + [bunnycore/Gemma2-2b-code-lora](https://huggingface.co/bunnycore/Gemma2-2b-code-lora)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: IlyaGusev/gemma-2-2b-it-abliterated+bunnycore/Gemma2-2b-code-lora
  - model: bunnycore/Gemma2-2B-mixed+bunnycore/Gemma2-2b-Evol-lora
  - model: bunnycore/Gemma2-2B-Chaos+bunnycore/Gemma2-2b-Evol-lora
  - model: SaisExperiments/Gemma-2-2B-Opus-Instruct+bunnycore/Gemma2-2b-Evol-lora
  - model: SaisExperiments/Gemma-2-2B-Stheno-Filtered+bunnycore/Gemma2-2b-Evol-lora
  - model: RyanYr/gemma-2-2b-it_CoT-it_SFT
  - model: MinnieMin/gemma-2-2b-it-ThinkLink+bunnycore/Gemma2-2b-code-lora
merge_method: model_stock
base_model: IlyaGusev/gemma-2-2b-it-abliterated
dtype: bfloat16
```
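In the configuration above, a `+` in a model spec joins a base checkpoint with a LoRA adapter that mergekit applies before merging. The following hypothetical helper (not part of mergekit's API) shows how that notation decomposes:

```python
def split_model_spec(spec: str):
    """Split a mergekit-style spec of the form 'model' or 'model+lora'.

    Returns (model_id, lora_id_or_None). Illustrative only; mergekit
    parses these specs internally.
    """
    model, sep, lora = spec.partition("+")
    return model, (lora if sep else None)

# Specs taken from the config above:
print(split_model_spec(
    "IlyaGusev/gemma-2-2b-it-abliterated+bunnycore/Gemma2-2b-code-lora"))
print(split_model_spec("RyanYr/gemma-2-2b-it_CoT-it_SFT"))
```

A config like this is typically applied with mergekit's `mergekit-yaml` command (e.g. `mergekit-yaml config.yaml ./output-model-directory`), which resolves each spec, applies any LoRA adapters, and runs the selected merge method.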