---
license: cc-by-nc-4.0
base_model:
  - crestf411/MN-Slush
  - IntervitensInc/Mistral-Nemo-Base-2407-chatml
  - DoppelReflEx/MN-12B-Mimicore-GreenSnake
  - cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
library_name: transformers
tags:
  - mergekit
  - merge
---

# What is this?

This model was previously named WhiteSnake-V2, but its eval scores were not good, so I decided to rename it. It is very good at creative writing, RP, and ERP, but not good at math.

Its main goal was to beat the original WhiteSnake in both evals and real-world use. It doesn't do anything spectacular, but it is decent.

GGUF quants, many thanks to mradermacher: https://huggingface.co/mradermacher/MN-12B-Mimicore-WhiteSnake-v2-Experiment-4-GGUF

My own Q6_K quant: https://huggingface.co/DoppelReflEx/MN-12B-WolFrame-Q6_K-GGUF
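Since the card declares `library_name: transformers`, the safetensors weights should load with the standard transformers API. Below is a minimal sketch, assuming the tokenizer inherits a ChatML chat template from the `Mistral-Nemo-Base-2407-chatml` base (the prompt and sampling settings are illustrative, not recommendations):

```python
# Minimal sketch: load the merged model with transformers.
# Assumes a GPU with enough VRAM for a 12B model in bf16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "DoppelReflEx/MN-12B-WolFrame"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# tokenizer_source is the ChatML base, so ChatML formatting via the
# tokenizer's chat template should apply.
messages = [{"role": "user", "content": "Write a short scene set in a tavern."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```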

## Merge Details

### Models Merged

This is a DARE-TIES merge, with IntervitensInc/Mistral-Nemo-Base-2407-chatml as the base. The following models were included in the merge:

- cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
- DoppelReflEx/MN-12B-Mimicore-GreenSnake
- crestf411/MN-Slush

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: cgato/Nemo-12b-Humanize-KTO-Experimental-Latest
    parameters:
      density: 0.9
      weight: 1
  - model: DoppelReflEx/MN-12B-Mimicore-GreenSnake
    parameters:
      density: 0.6
      weight: 0.8
  - model: crestf411/MN-Slush
    parameters:
      density: 0.7
      weight: 0.5
merge_method: dare_ties
base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
tokenizer_source: base
```
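In a `dare_ties` merge, `density` is roughly the fraction of each model's delta parameters retained after DARE's random dropping, and `weight` scales that model's contribution before TIES-style sign resolution merges the deltas into the base. A sketch of re-running the merge from the YAML above using mergekit's documented Python entry points is shown below; the file name `wolframe.yml` is hypothetical, and the `mergekit-yaml` CLI pointed at the same file is equivalent:

```python
# Hedged sketch, not the author's exact invocation: re-run the merge
# from the YAML config above via mergekit's Python API.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# "wolframe.yml" is a hypothetical file holding the YAML config above.
with open("wolframe.yml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path="./MN-12B-WolFrame",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if available
        copy_tokenizer=True,             # honors tokenizer_source: base
    ),
)
```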