---
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
base_model:
- sometimesanotion/Lamarck-14B-v0.1-experimental
- arcee-ai/Virtuoso-Small
- CultriX/SeQwence-14B-EvolMerge
- CultriX/Qwen2.5-14B-Wernicke
- huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
language:
- en
---

![Lamarck.webp](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.3-experimental/resolve/main/Lamarck.webp)

---

Lamarck-14B version 0.3 draws strongly on [arcee-ai/Virtuoso-Small](https://huggingface.co/arcee-ai/Virtuoso-Small) as a diffuse influence on its prose and reasoning. Arcee's pioneering use of distillation and innovative merge techniques creates a diverse knowledge pool for its models.

The overall strategy (a hedged config sketch follows this list):

- **Inclusion:** three model_stock merges, specialized for reasoning, instruction following, and prose quality respectively.
- **Refinement:** with Virtuoso as the base model, DELLA and SLERP merges of the model_stock merges, plus re-emphasis of particularly interesting ancestors.
- **Integration:** a SLERP merge of the instruction-following branch with the reason+prose branch.
- **Finalization:** a TIES merge adding a little abliteration, with a light touch from [huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2).

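To make the finalization step concrete, here is a minimal mergekit sketch of a TIES merge in this spirit. The intermediate model name `Lamarck-14B-v0.3-integration` is hypothetical, and the weight and density values are illustrative assumptions rather than the exact released recipe:

```yaml
# Hedged sketch of the finalization stage: TIES with a light abliteration touch.
# The base model name and all weight/density values are assumptions for illustration.
merge_method: ties
base_model: sometimesanotion/Lamarck-14B-v0.3-integration
models:
  - model: sometimesanotion/Lamarck-14B-v0.3-integration
    parameters:
      weight: 1.0
      density: 1.0
  - model: huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
    parameters:
      weight: 0.2   # assumed low weight, i.e. the "light touch"
      density: 0.4  # assumed sparse task vector for TIES
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```
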
### Ancestor Models

**Top influences:** These ancestors serve both as base models and as members of the model_stocks, and they are heavily re-emphasized in the DELLA and SLERP merges.

- **[arcee-ai/Virtuoso-Small](https://huggingface.co/arcee-ai/Virtuoso-Small)** - A brand-new model from Arcee, refined from the notable cross-architecture Llama-to-Qwen distillation [arcee-ai/SuperNova-Medius](https://huggingface.co/arcee-ai/SuperNova-Medius). The first two layers come almost exclusively from Virtuoso (see the sketch after this list). It has proven to be a well-rounded performer, and it contributes a noticeable boost to the model's prose quality.

- **[CultriX/SeQwence-14B-EvolMerge](https://huggingface.co/CultriX/SeQwence-14B-EvolMerge)** - A well-rounded model, with interesting gains in instruction following while remaining strong in reasoning.

- **[CultriX/Qwen2.5-14B-Wernicke](https://huggingface.co/CultriX/Qwen2.5-14B-Wernicke)** - A top performer on ARC and GPQA, Wernicke is re-emphasized in small but highly ranked portions of the model.

- **[huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2)** - Merged with higher weight and density, both for its re-instructing and its abliterating effect.

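A minimal sketch of the layer-wise emphasis noted for Virtuoso above: a mergekit SLERP config can keep the earliest layers close to the base model by anchoring the interpolation parameter `t` near 0 there (`t: 0` selects the base model, `t: 1` the other model, and a list of anchor values is interpolated across the layers). The second model name and the exact `t` curve are assumptions for illustration:

```yaml
# Hedged SLERP sketch. Qwen2.5-14B has 48 layers; the t anchor points below are
# interpolated across them, so t = 0.0 at the start keeps the first layers
# nearly pure Virtuoso. The second model is a hypothetical branch merge.
slices:
  - sources:
      - model: arcee-ai/Virtuoso-Small
        layer_range: [0, 48]
      - model: sometimesanotion/lamarck-reason-prose-branch
        layer_range: [0, 48]
merge_method: slerp
base_model: arcee-ai/Virtuoso-Small
parameters:
  t:
    - value: [0.0, 0.4, 0.5, 0.5, 0.4]
dtype: bfloat16
```
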
The model_stocks have a lower-weighted, diffuse influence. They include the following models (a config sketch follows these lists):

**Instruction**:

- [arcee-ai/Virtuoso-Small](https://huggingface.co/arcee-ai/Virtuoso-Small)
- [huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2)
- [sthenno-com/miscii-14b-1028](https://huggingface.co/sthenno-com/miscii-14b-1028)
- [tanliboy/lambda-qwen2.5-14b-dpo-test](https://huggingface.co/tanliboy/lambda-qwen2.5-14b-dpo-test)

**Reason**:

- [arcee-ai/Virtuoso-Small](https://huggingface.co/arcee-ai/Virtuoso-Small)
- [CultriX/Qwen2.5-14B-Wernicke](https://huggingface.co/CultriX/Qwen2.5-14B-Wernicke)
- [CultriX/SeQwence-14B-EvolMerge](https://huggingface.co/CultriX/SeQwence-14B-EvolMerge)
- [VAGOsolutions/SauerkrautLM-v2-14b-DPO](https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-DPO)

**Prose**:

- [allura-org/TQ2.5-14B-Sugarquill-v1](https://huggingface.co/allura-org/TQ2.5-14B-Sugarquill-v1)
- [arcee-ai/Virtuoso-Small](https://huggingface.co/arcee-ai/Virtuoso-Small)
- [EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2)
- [oxyapi/oxy-1-small](https://huggingface.co/oxyapi/oxy-1-small)
- [sthenno-com/miscii-14b-1028](https://huggingface.co/sthenno-com/miscii-14b-1028)
- [underwoods/medius-erebus-magnum-14b](https://huggingface.co/underwoods/medius-erebus-magnum-14b)

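For example, the prose stock could be expressed as a mergekit model_stock merge along these lines. This is a minimal sketch assuming Virtuoso-Small as the base (model_stock requires one); the actual published configuration may differ:

```yaml
# Hedged model_stock sketch for the prose branch; the base choice is an assumption.
merge_method: model_stock
base_model: arcee-ai/Virtuoso-Small
models:
  - model: allura-org/TQ2.5-14B-Sugarquill-v1
  - model: EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
  - model: oxyapi/oxy-1-small
  - model: sthenno-com/miscii-14b-1028
  - model: underwoods/medius-erebus-magnum-14b
dtype: bfloat16
```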