---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- arcee-ai/Virtuoso-Small
- sometimesanotion/Qwen2.5-14B-Qwenvergence-model_stock
metrics:
- accuracy
pipeline_tag: text-generation
---
![Lamarck.webp](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.4-Qwenvergence/resolve/main/Lamarck.webp)
---
Lamarck 14B v0.4 Qwenvergence is a big step up for Lamarck in terms of quality. It uses the same ingredients as previous Lamarck releases, but combines them more effectively. Reasoning is slightly improved over v0.3, while multi-language capability and prose are greatly improved.
## Merge Details
This model was initialized from a model_stock merge and refined from there. No fine-tuning was involved, and no models were used beyond those listed as the contents of Qwen2.5-14B-Qwenvergence-model_stock, apart from a very mild application of [huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2).
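The exact recipe for that mild application is not published on this card. Purely as an illustration, a "very mild application" in mergekit terms could be a low-weight ties merge over the model_stock base, along these lines (the intermediate name, weight, and density below are hypothetical, not the published values):
```yaml
# Hypothetical sketch only: the weight and density are illustrative, not the real recipe.
name: Qwenvergence-abliterated-touch  # hypothetical intermediate name
merge_method: ties
base_model: sometimesanotion/Qwen2.5-14B-Qwenvergence-model_stock
tokenizer_source: base
parameters:
  int8_mask: true
  normalize: true
models:
  - model: huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
    parameters:
      weight: 0.10   # a deliberately small weight for a "very mild" influence
      density: 0.30  # hypothetical: only a sparse subset of the delta is kept
dtype: bfloat16
```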
### Models Merged
**Top influences:** These ancestors are in the Qwenvergence model_stock, reinforced in later steps:
- **[arcee-ai/Virtuoso-Small](https://huggingface.co/arcee-ai/Virtuoso-Small)** - A brand-new model from Arcee, refined from the notable cross-architecture Llama-to-Qwen distillation [arcee-ai/SuperNova-Medius](https://huggingface.co/arcee-ai/SuperNova-Medius). The first two layers are nearly exclusively from Virtuoso (see the slerp sketch after this list). It has proven to be a well-rounded performer and contributes a noticeable boost to the model's prose quality.
- **[CultriX/SeQwence-14B-EvolMerge](http://huggingface.co/CultriX/SeQwence-14B-EvolMerge)** - A top contender on reasoning benchmarks.
- **[VAGOsolutions/SauerkrautLM-v2-14b-DPO](https://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-DPO)** - Its influence is understated, but it aids BBH and coding capability.
- **[v000000/Qwen2.5-Lumen-14B](https://huggingface.co/v000000/Qwen2.5-Lumen-14B)** - A leading influence for prose quality.
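The recipe for the intermediate `merges/Qwen2.5-14B-Qwenvergence-slerp` step referenced in the final configuration below is not published. Purely as an illustration of how the earliest layers can be kept nearly exclusively Virtuoso, a layer-wise slerp gradient in mergekit might look like this (the second model, layer ranges, and t values are hypothetical):
```yaml
# Hypothetical sketch, not the published recipe: a slerp gradient where
# t near 0 keeps the base model, so layer 0 stays almost entirely Virtuoso.
slices:
  - sources:
      - model: arcee-ai/Virtuoso-Small
        layer_range: [0, 48]
      - model: sometimesanotion/Qwen2.5-14B-Qwenvergence-model_stock
        layer_range: [0, 48]
merge_method: slerp
base_model: arcee-ai/Virtuoso-Small
parameters:
  t:
    - value: [0.0, 0.4, 0.5, 0.5, 0.5]  # hypothetical gradient: all-Virtuoso early, even blend later
dtype: bfloat16
```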
**Prose added:**
The prose quality has taken a leap, owing in no small part to the way [EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2), [sthenno-com/miscii-14b-1028](https://huggingface.co/sthenno-com/miscii-14b-1028), [oxyapi/oxy-1-small](https://huggingface.co/oxyapi/oxy-1-small), and [underwoods/medius-erebus-magnum-14b](https://huggingface.co/underwoods/medius-erebus-magnum-14b) were applied.
### Configuration
The following YAML configurations were used to initialize and finalize this model:
```yaml
name: Qwenvergence-model_stock
merge_method: model_stock
base_model: Qwen/Qwen2.5-14B
tokenizer_source: base
parameters:
  int8_mask: true
  normalize: true
  rescale: false
models:
  - model: allura-org/TQ2.5-14B-Sugarquill-v1
  - model: oxyapi/oxy-1-small
  - model: sthenno-com/miscii-14b-1028
  - model: underwoods/medius-erebus-magnum-14b
  - model: EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
  - model: CultriX/SeQwence-14B-EvolMerge
  - model: arcee-ai/Virtuoso-Small
  - model: VAGOsolutions/SauerkrautLM-v2-14b-DPO
  - model: v000000/Qwen2.5-Lumen-14B
dtype: bfloat16
out_dtype: bfloat16
---
# Experimental merge methods involving above models
name: Lamarck-14B-v0.4-Qwenvergence
merge_method: ties
base_model: sometimesanotion/lamarck-14b-base
tokenizer_source: base
parameters:
  density: 1.00
  weight: 1.00
  int8_mask: true
  normalize: true
  rescale: false
models:
  - model: merges/Qwen2.5-14B-Qwenvergence-slerp
    parameters:
      weight: 1.00
      density: 1.00
  - model: arcee-ai/Virtuoso-Small
    parameters:
      weight: 1.00
      density: 1.00
``` |
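Each YAML document above is a standalone mergekit configuration. Assuming a standard mergekit installation, a configuration like these is run with the `mergekit-yaml` command, e.g. `mergekit-yaml config.yaml ./output-model-directory`.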