---
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
base_model:
- sometimesanotion/Lamarck-14B-v0.1-experimental
- arcee-ai/Virtuoso-Small
- CultriX/SeQwence-14Bv1
- CultriX/SeQwence-14B-EvolMergev1
- huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
language:
- en
---
![Lamarck.webp](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.2-experimental/resolve/main/Lamarck.webp)
---

Lamarck-14B version 0.3 leans strongly on [arcee-ai/Virtuoso-Small](https://huggingface.co/arcee-ai/Virtuoso-Small) as a diffuse influence for prose and reasoning. Arcee's pioneering use of distillation and innovative merge techniques creates a diverse knowledge pool for its models.

The overall strategy:

- **Inclusion:** three model_stock merges, specialized in reasoning, instruction following, and prose quality.
- **Refinement:** with Virtuoso as the base model, DELLA and SLERP merges of the model_stock merges, plus additional re-emphasis of particularly interesting ancestors.
- **Integration:** a SLERP merge of the instruction-following and reason+prose branches.
- **Finalization:** for a little bit of abliteration, a light-touch TIES merge from [huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2](http://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2), as sketched below.

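As a rough illustration of the final step, here is a minimal mergekit-style TIES sketch. The branch path, weights, and densities are hypothetical placeholders rather than the actual recipe, and the base model choice is an assumption.

```yaml
# Hypothetical sketch of the finalization TIES step; all values are illustrative.
models:
  - model: merges/lamarck-integrated-slerp          # hypothetical path to the SLERP-integrated branch
    parameters:
      weight: 0.9     # assumed: the integrated branch carries most of the result
      density: 0.9
  - model: huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
    parameters:
      weight: 0.1     # "light touch" for the re-instructing and abliterating effect
      density: 0.4
merge_method: ties
base_model: arcee-ai/Virtuoso-Small                 # assumed base, per the refinement stage
dtype: bfloat16
```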
### Ancestor Models

**Top influences:** These ancestors serve as base models and appear in the model_stocks, but they are also heavily re-emphasized in the DELLA and SLERP merges.

- **[arcee-ai/Virtuoso-Small](https://huggingface.co/arcee-ai/Virtuoso-Small)** - A brand-new model from Arcee, refined from the notable cross-architecture Llama-to-Qwen distillation [arcee-ai/SuperNova-Medius](https://huggingface.co/arcee-ai/SuperNova-Medius). The first two layers come almost exclusively from Virtuoso. It has proven to be a well-rounded performer, and it contributes a noticeable boost to the model's prose quality.

- **[CultriX/SeQwence-14B-EvolMerge](http://huggingface.co/CultriX/SeQwence-14B-EvolMerge)** - A well-rounded model, with interesting gains in instruction following while remaining strong at reasoning.

- **[CultriX/Qwen2.5-14B-Wernicke](http://huggingface.co/CultriX/Qwen2.5-14B-Wernicke)** - A top performer on ARC and GPQA, Wernicke is re-emphasized in small but highly ranked portions of the model.

- **[huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2](http://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2)** - Merged with higher weight and density, both for its re-instructing and its abliterating effect.

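The kind of layer-wise emphasis described above, with the earliest layers staying almost entirely Virtuoso, can be expressed as a SLERP interpolation gradient in mergekit. A minimal sketch, assuming a 48-layer Qwen2.5-14B stack; the branch name and t values are hypothetical:

```yaml
# Hypothetical SLERP sketch; t = 0 keeps the base model (Virtuoso),
# and the gradient raises the other branch's influence in later layers.
slices:
  - sources:
      - model: arcee-ai/Virtuoso-Small
        layer_range: [0, 48]
      - model: merges/lamarck-reason-prose-della   # hypothetical DELLA-refined branch
        layer_range: [0, 48]
merge_method: slerp
base_model: arcee-ai/Virtuoso-Small
parameters:
  t: [0.0, 0.3, 0.5, 0.5, 0.3]   # illustrative gradient; first layers stay near Virtuoso
dtype: bfloat16
```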

The model stocks have a diffuse influence, and they include the following (a configuration sketch follows the lists):

**Instruction**:

- **[arcee-ai/Virtuoso-Small](http://huggingface.co/arcee-ai/Virtuoso-Small)**
- **[huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2](http://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2)**
- **[sthenno-com/miscii-14b-1028](http://huggingface.co/sthenno-com/miscii-14b-1028)**
- **[tanliboy/lambda-qwen2.5-14b-dpo-test](http://huggingface.co/tanliboy/lambda-qwen2.5-14b-dpo-test)**

**Reason**:

- **[arcee-ai/Virtuoso-Small](http://huggingface.co/arcee-ai/Virtuoso-Small)**
- **[CultriX/Qwen2.5-14B-Wernicke](http://huggingface.co/CultriX/Qwen2.5-14B-Wernicke)**
- **[CultriX/SeQwence-14B-EvolMerge](http://huggingface.co/CultriX/SeQwence-14B-EvolMerge)**
- **[VAGOsolutions/SauerkrautLM-v2-14b-DPO](http://huggingface.co/VAGOsolutions/SauerkrautLM-v2-14b-DPO)**

**Prose**:

- **[allura-org/TQ2.5-14B-Sugarquill-v1](http://huggingface.co/allura-org/TQ2.5-14B-Sugarquill-v1)**
- **[arcee-ai/Virtuoso-Small](http://huggingface.co/arcee-ai/Virtuoso-Small)**
- **[EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2](http://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2)**
- **[oxyapi/oxy-1-small](http://huggingface.co/oxyapi/oxy-1-small)**
- **[sthenno-com/miscii-14b-1028](http://huggingface.co/sthenno-com/miscii-14b-1028)**
- **[underwoods/medius-erebus-magnum-14b](http://huggingface.co/underwoods/medius-erebus-magnum-14b)**
