mistral 7b mega merge

This model was merged using https://github.com/martyn/safetensors-merge-supermario (a DARE-style drop-and-rescale merge) with hyperparameters p=0.12 and lambda=2.1.
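For readers unfamiliar with the method, the sketch below illustrates the drop-and-rescale idea behind DARE-style merging. It assumes that p is the per-element drop probability applied to each fine-tune's delta against the base and that lambda rescales the surviving deltas; the function names and the accumulation loop are illustrative only and are not the actual hf_merge.py implementation.

```python
# Minimal sketch of a DARE-style drop-and-rescale merge (assumptions noted above;
# not the actual hf_merge.py code).
import torch

def sparsify_delta(delta: torch.Tensor, p: float = 0.12, lam: float = 2.1) -> torch.Tensor:
    """Drop each element of the delta with probability p, rescale the rest by lambda."""
    keep = (torch.rand_like(delta) >= p).to(delta.dtype)
    return lam * keep * delta

def merge_all(base_state: dict, finetuned_states: list,
              p: float = 0.12, lam: float = 2.1) -> dict:
    """Add each fine-tune's sparsified delta (vs. the base) onto a copy of the base weights."""
    merged = {k: v.clone() for k, v in base_state.items()}
    for state in finetuned_states:
        for name, base_tensor in base_state.items():
            if name in state:
                merged[name] = merged[name] + sparsify_delta(state[name] - base_tensor, p, lam)
    return merged
```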

The following models were merged; the first entry is the base model:

mistralai/Mistral-7B-Instruct-v0.2
uukuguy/speechless-code-mistral-7b-v1.0
AIDC-ai-business/Marcoroni-7B-v3
Weyaxi/Seraph-7B
rwitz/dec10
Intel/neural-chat-7b-v3-3
rwitz/go-bruins-v2

To merge your own models:

python hf_merge.py to_merge_7b.txt mistral_7b_0.2_merge -p 0.12 -lambda 2.1
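The to_merge_7b.txt file presumably lists one Hugging Face repo id per line, with the base model first, matching the list above; the exact file format expected by hf_merge.py is an assumption here.

```text
mistralai/Mistral-7B-Instruct-v0.2
uukuguy/speechless-code-mistral-7b-v1.0
AIDC-ai-business/Marcoroni-7B-v3
Weyaxi/Seraph-7B
rwitz/dec10
Intel/neural-chat-7b-v3-3
rwitz/go-bruins-v2
```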