---
library_name: transformers
tags:
  - mergekit
  - merge
license: apache-2.0
base_model:
  - sometimesanotion/Lamarck-14B-v0.1-experimental
  - arcee-ai/Virtuoso-Small
  - CultriX/SeQwence-14B-EvolMerge
  - CultriX/Qwen2.5-14B-Wernicke
  - huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
language:
  - en
---

![Lamarck](Lamarck.webp)

Lamarck-14B version 0.3 is strongly based on arcee-ai/Virtuoso-Small, which serves as a diffuse influence on its prose and reasoning. Arcee's pioneering use of distillation and innovative merge techniques creates a diverse knowledge pool for its models.

The overall strategy:

- **Inclusion:** three model_stock merges, specialized for reasoning, instruction following, and prose quality.
- **Refinement:** with Virtuoso as the base model, DELLA and SLERP merges of the model_stock merges, plus additional re-emphasis of particularly interesting ancestors.
- **Integration:** a SLERP merge of the instruction-following and reason+prose branches.
- **Finalization:** a TIES merge with a light touch of abliteration from huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2. A hedged sketch of this step follows the list.
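As a rough illustration of the finalization step, a mergekit TIES configuration along the following lines could apply that light abliterated touch. The intermediate model name `Lamarck-14B-v0.3-integration` and every weight and density value here are assumptions for illustration, not the actual recipe.

```yaml
# Hedged sketch of the finalization TIES merge. The intermediate model
# name and all parameter values are illustrative assumptions.
merge_method: ties
base_model: sometimesanotion/Lamarck-14B-v0.3-integration  # hypothetical SLERP integration output
models:
  - model: sometimesanotion/Lamarck-14B-v0.3-integration
    parameters:
      weight: 1.0
      density: 1.0
  - model: huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
    parameters:
      weight: 0.1   # the "light touch" of abliteration
      density: 0.4
dtype: bfloat16
```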

## Ancestor Models

**Top influences:** These ancestors serve as base models and are present in the model_stocks, but they are heavily re-emphasized in the DELLA and SLERP merges.
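To illustrate what such re-emphasis could look like, here is a hedged mergekit DELLA sketch with Virtuoso-Small as the base and two strong ancestors weighted back in; the ancestor selection and all parameter values are assumptions, not the published recipe.

```yaml
# Hedged DELLA re-emphasis sketch: deltas from strong ancestors are
# adaptively pruned by magnitude and merged over the base.
# All values are illustrative assumptions.
merge_method: della
base_model: arcee-ai/Virtuoso-Small
models:
  - model: CultriX/SeQwence-14B-EvolMerge
    parameters:
      weight: 0.5
      density: 0.7
  - model: CultriX/Qwen2.5-14B-Wernicke
    parameters:
      weight: 0.5
      density: 0.7
parameters:
  epsilon: 0.05  # variance around density for DELLA's adaptive pruning
  lambda: 1.0    # scaling applied to the merged deltas
dtype: bfloat16
```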

The model stocks carry a lower-weighted, diffuse influence, and they include the following categories; a hedged sketch of one such stock follows the list.

**Instruction:**

**Reason:**

**Prose:**
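For concreteness, a single specialized stock could be assembled as below. Which ancestors belong to which stock is an assumption here; note that model_stock takes no per-model weights, which is what keeps these merges' influence diffuse.

```yaml
# Hedged sketch of one specialized model_stock (here, "reason").
# The ancestor grouping is an assumption. model_stock derives its
# interpolation weights from the geometry of the listed models
# relative to the base, so each contributes a diffuse influence.
merge_method: model_stock
base_model: arcee-ai/Virtuoso-Small
models:
  - model: CultriX/SeQwence-14B-EvolMerge
  - model: CultriX/Qwen2.5-14B-Wernicke
dtype: bfloat16
```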