---
base_model:
- anthracite-org/magnum-v4-12b
- Nitral-AI/Captain_BMO-12B
library_name: transformers
tags:
- mergekit
- merge
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Captain_BMO-0.420-Magnum-12B-GGUF

This is a quantized version of [Nitral-AI/Captain_BMO-0.420-Magnum-12B](https://huggingface.co/Nitral-AI/Captain_BMO-0.420-Magnum-12B) created using llama.cpp. A loading sketch is included at the end of this card.

# Original Model Card

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method; a short interpolation sketch follows the configuration below.

### Models Merged

The following models were included in the merge:
* [anthracite-org/magnum-v4-12b](https://huggingface.co/anthracite-org/magnum-v4-12b)
* [Nitral-AI/Captain_BMO-12B](https://huggingface.co/Nitral-AI/Captain_BMO-12B)

### Configuration

The following YAML configuration was used to produce this model. The `t` parameter sets the interpolation weight toward `anthracite-org/magnum-v4-12b` (t = 0 keeps the base model's weights): the `self_attn` and `mlp` filters define per-layer gradients that mergekit interpolates across the 40 layers, and the standalone value `0.420` applies to all remaining tensors.

```yaml
slices:
- sources:
  - model: Nitral-AI/Captain_BMO-12B
    layer_range: [0, 40]
  - model: anthracite-org/magnum-v4-12b
    layer_range: [0, 40]
merge_method: slerp
base_model: Nitral-AI/Captain_BMO-12B
parameters:
  t:
  - filter: self_attn
    value: [0, 0.5, 0.3, 0.7, 1]
  - filter: mlp
    value: [1, 0.5, 0.7, 0.3, 0]
  - value: 0.420
dtype: bfloat16
```
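For intuition, here is a minimal sketch of spherical linear interpolation applied to a pair of weight tensors. This is an illustrative re-implementation, not mergekit's actual code; the `slerp` function name, the flatten-and-reshape approach, and the parallel-vector fallback threshold are all my own choices.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors of the same shape.

    Each tensor is treated as one high-dimensional vector; when the vectors
    are nearly parallel, plain linear interpolation is numerically safer.
    """
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(a_dir @ b_dir, -1.0, 1.0)
    omega = torch.acos(dot)                     # angle between the two weight vectors
    if omega.abs() < eps:                       # nearly parallel: fall back to LERP
        return (1.0 - t) * a + t * b
    sin_omega = torch.sin(omega)
    coeff_a = torch.sin((1.0 - t) * omega) / sin_omega
    coeff_b = torch.sin(t * omega) / sin_omega
    merged = coeff_a * a_flat + coeff_b * b_flat
    return merged.reshape(a.shape).to(a.dtype)

if __name__ == "__main__":
    a = torch.randn(4096, 4096)                 # stand-ins for base / donor weights
    b = torch.randn(4096, 4096)
    merged = slerp(0.420, a, b)                 # t = 0.420 is this card's fallback value
    print(merged.shape)
```

Given this card's config, the merge itself can typically be reproduced by saving the YAML above and running mergekit's `mergekit-yaml` entry point on it.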
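To try the quantized GGUF files, something like the following should work with `llama-cpp-python`. The quant filename below is hypothetical; check this repository's file list for the variants actually provided.

```python
from llama_cpp import Llama

# A minimal sketch, assuming llama-cpp-python is installed and that this repo
# ships a Q4_K_M quant; the filename is hypothetical and must match a real file.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/Captain_BMO-0.420-Magnum-12B-GGUF",
    filename="Captain_BMO-0.420-Magnum-12B.Q4_K_M.gguf",  # hypothetical quant level
    n_ctx=4096,
)

out = llm("Hello, BMO!", max_tokens=64)
print(out["choices"][0]["text"])
```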