---
license: cc-by-nc-4.0
---


# Seraph-7B

This is the model card for Seraph-7B, a SLERP merge of Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp and Q-bert/MetaMath-Cybertron-Starling, created with mergekit (see the YAML config below).
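A minimal loading sketch with 🤗 Transformers, assuming the repo id is `Weyaxi/Seraph-7B` (matching this card's title) and a recent transformers release:

```python
# Minimal loading sketch; the repo id is assumed from this card's title.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/Seraph-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16 (see config below)
    device_map="auto",
)
```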

## Prompt Templates

You can use these prompt templates, but I recommend using ChatML.

ChatML:

```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
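As a sketch of using this template, reusing `tokenizer` and `model` from the loading snippet above; `format_chatml` is a hypothetical helper, not part of the model:

```python
# Illustrative ChatML prompt assembly, mirroring the template above.
def format_chatml(system: str, user: str) -> str:
    # The assistant turn is left open so the model writes the reply.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = format_chatml("You are a helpful assistant.", "Explain SLERP merging in one sentence.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```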

System, User, Assistant Alpaca Style:

```
### System:
{system}
### User:
{user}
### Assistant:
```
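And the same kind of illustrative helper for the Alpaca-style template:

```python
# Illustrative helper mirroring the Alpaca-style template above.
def format_alpaca(system: str, user: str) -> str:
    return f"### System:\n{system}\n### User:\n{user}\n### Assistant:\n"
```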

## YAML Config

```yaml
slices:
  - sources:
      - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
        layer_range: [0, 32]
      - model: Q-bert/MetaMath-Cybertron-Starling
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
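For completeness, a sketch of reproducing the merge from this config via mergekit's Python API; it assumes a recent mergekit exposing `MergeConfiguration`, `run_merge`, and `MergeOptions`, and the file and output paths are illustrative. The `mergekit-yaml` CLI (`mergekit-yaml seraph-7b.yml ./Seraph-7B`) is the more common route.

```python
# Sketch: run the merge from the YAML config above.
# Assumes a recent mergekit with this Python API; paths are illustrative.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("seraph-7b.yml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Seraph-7B",  # where the merged weights are written
    options=MergeOptions(copy_tokenizer=True),
)
```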

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric               | Value |
|----------------------|-------|
| Avg.                 | 71.86 |
| ARC (25-shot)        | 67.83 |
| HellaSwag (10-shot)  | 86.22 |
| MMLU (5-shot)        | 65.07 |
| TruthfulQA (0-shot)  | 59.49 |
| Winogrande (5-shot)  | 80.66 |
| GSM8K (5-shot)       | 71.87 |