
# WestOrcaNeuralMarco-DPO-v2-DARETIES-7B

WestOrcaNeuralMarco-DPO-v2-DARETIES-7B is a merge of the following models using LazyMergekit:

* [senseable/Westlake-7B-v2](https://huggingface.co/senseable/Westlake-7B-v2)
* [decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP](https://huggingface.co/decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP)
* [mlabonne/NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B)

## 🧩 Configuration

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # No parameters necessary for base model
  - model: senseable/Westlake-7B-v2
    parameters:
      density: 0.73
      weight: 0.4
  - model: decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
    parameters:
      density: 0.55
      weight: 0.3
  - model: mlabonne/NeuralMarcoro14-7B
    parameters:
      density: 0.45
      weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
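
For reference, here is a minimal sketch of reproducing a merge like this with mergekit's Python API. It assumes the YAML configuration above is saved as `config.yaml`; the output path and option values are illustrative, not necessarily the ones used for this model.

```python
# Minimal sketch of running the DARE-TIES merge with mergekit's Python API.
# Assumes the YAML configuration above is saved as config.yaml; the output
# path and option values below are illustrative.
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the merge configuration from the YAML file.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge and write the merged weights to the output directory.
run_merge(
    merge_config,
    out_path="./WestOrcaNeuralMarco-DPO-v2-DARETIES-7B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```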

Credit to Maxime Labonne and his excellent [blog](https://mlabonne.github.io/blog/).

This repository, [jsfs11/WestOrcaNeuralMarco-DPO-v2-DARETIES-7B-GGUF](https://huggingface.co/jsfs11/WestOrcaNeuralMarco-DPO-v2-DARETIES-7B-GGUF), provides GGUF quantizations of the merged model (7.24B parameters, llama architecture) in 5-bit, 6-bit, and 8-bit variants.
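
Below is a minimal sketch of running one of the quantized files with llama-cpp-python. The GGUF filename is an assumption; check the repository's file listing for the actual names of the 5-, 6-, and 8-bit files.

```python
# Minimal sketch using llama-cpp-python; the GGUF filename below is an
# assumption -- check the repository's file listing for the actual name.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized file from the GGUF repository.
model_path = hf_hub_download(
    repo_id="jsfs11/WestOrcaNeuralMarco-DPO-v2-DARETIES-7B-GGUF",
    filename="westorcaneuralmarco-dpo-v2-dareties-7b.Q5_K_M.gguf",  # hypothetical filename
)

# Load the model and run a simple completion.
llm = Llama(model_path=model_path, n_ctx=4096)
output = llm("What is a large language model?", max_tokens=256)
print(output["choices"][0]["text"])
```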
