Description

This repo contains the bf16 weights of Lonepino-11B. Just a normal model.
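A minimal loading sketch using the transformers library (the repo id is taken from this model card; this is not an official snippet from the author):

```python
# Minimal sketch: load Lonepino-11B in bf16 with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beberik/Lonepino-11B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are shipped in bf16
    device_map="auto",           # requires accelerate
)
```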

Models used

The secret sauce: two intermediate 11B stacks are built first with passthrough merges, each taking layers 0-23 of one 7B Mistral model and layers 8-31 of another (48 layers, ~10.7B parameters), then the two stacks are blended with SLERP.

neural-maid-11B:

```yaml
slices:
  - sources:
    - model: Intel/neural-chat-7b-v3-3-Slerp
      layer_range: [0, 24]
  - sources:
    - model: NeverSleep/Noromaid-7b-v0.2
      layer_range: [8, 32]

merge_method: passthrough
dtype: bfloat16
```

loyal-PiVoT-11B:

```yaml
slices:
  - sources:
    - model: chargoddard/loyal-piano-m7-cdpo
      layer_range: [0, 24]
  - sources:
    - model: maywell/PiVoT-0.1-Starling-LM-RP
      layer_range: [8, 32]

merge_method: passthrough
dtype: bfloat16
```

Lonepino-11B:

```yaml
slices:
  - sources:
      - model: "./neural-maid-11B"
        layer_range: [0, 48]
      - model: "./loyal-PiVoT-11B"
        layer_range: [0, 48]
merge_method: slerp
base_model: "./neural-maid-11B"
parameters:
  t:
    - value: 0.4
dtype: bfloat16
```
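For intuition, SLERP interpolates each pair of weight tensors along the arc between them, so t = 0.4 keeps the result slightly closer to the base model, neural-maid-11B. A rough per-tensor sketch of the operation (mergekit's actual implementation differs in details):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    omega = torch.acos((a_n * b_n).sum().clamp(-1.0, 1.0))  # angle between tensors
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```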

Prompt template

Alpaca. Or ChatML. Or any format you like.
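For reference, the standard Alpaca format:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

and ChatML:

```
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```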

=w=

I used mergekit for all the merges described here.
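As a sketch, each config above can be run through mergekit's CLI (the config filenames here are illustrative):

```
pip install mergekit
mergekit-yaml neural-maid-11B.yml ./neural-maid-11B --cuda
mergekit-yaml loyal-PiVoT-11B.yml ./loyal-PiVoT-11B --cuda
mergekit-yaml lonepino-11B.yml ./Lonepino-11B --cuda
```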

Thanks to Undi95 for the original 11B Mistral merge recipe.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 70.10 |
| AI2 Reasoning Challenge (25-Shot) | 68.26 |
| HellaSwag (10-Shot) | 84.57 |
| MMLU (5-Shot) | 63.76 |
| TruthfulQA (0-shot) | 63.45 |
| Winogrande (5-shot) | 78.93 |
| GSM8k (5-shot) | 61.64 |