
Finch 7B Merge

A SLERP merge of two powerful 7B language models


Description

Finch is a 7B language model created by merging macadeliccc/WestLake-7B-v2-laser-truthy-dpo and SanjiWatsuki/Kunoichi-DPO-v2-7B using the SLERP method.

Quantized Models

Quantized versions of Finch are available.

Recommended Settings

For best results, use the ChatML format with the following sampler settings:

- Temperature: 1.2
- Min P: 0.2
- Smoothing Factor: 0.2
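As a rough sketch of how these settings translate to a plain `transformers` generation call, assuming a recent `transformers` release with `min_p` support and that this repository's tokenizer ships a ChatML chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "antiven0m/finch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Build a ChatML prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "Write a short poem about a finch."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Temperature and Min P map directly onto sampler arguments.
# Smoothing Factor is a quadratic-sampling control from frontends like
# SillyTavern and has no standard `transformers` equivalent, so it is omitted.
output = model.generate(
    input_ids,
    do_sample=True,
    temperature=1.2,
    min_p=0.2,
    max_new_tokens=256,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```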

Mergekit Configuration

```yaml
base_model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
dtype: float16
merge_method: slerp
parameters:
  t:
    - filter: self_attn
      value: [0.0, 0.5, 0.3, 0.7, 1.0]
    - filter: mlp
      value: [1.0, 0.5, 0.7, 0.3, 0.0]
    - value: 0.5
slices:
  - sources:
      - layer_range: [0, 32]
        model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
      - layer_range: [0, 32]
        model: SanjiWatsuki/Kunoichi-DPO-v2-7B
```
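Here, `t` is the interpolation weight (0 keeps the WestLake base weights, 1 takes Kunoichi's); the five-element lists are anchor points that mergekit spreads across the 32 layers, so self-attention and MLP tensors follow opposite blending curves, with 0.5 as the default for all other tensors. For intuition, a minimal sketch of the SLERP operation itself (illustrative only, not mergekit's actual implementation):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between tensors a (t=0) and b (t=1)."""
    a_f, b_f = a.flatten().float(), b.flatten().float()
    a_n = a_f / (a_f.norm() + eps)  # unit directions
    b_n = b_f / (b_f.norm() + eps)
    cos_omega = torch.clamp(a_n @ b_n, -1.0, 1.0)
    omega = torch.arccos(cos_omega)  # angle between the two weight tensors
    so = torch.sin(omega)
    if so.item() < eps:
        # Nearly parallel tensors: fall back to linear interpolation.
        out = (1.0 - t) * a_f + t * b_f
    else:
        out = (torch.sin((1.0 - t) * omega) / so) * a_f \
            + (torch.sin(t * omega) / so) * b_f
    return out.reshape(a.shape).to(a.dtype)
```

Given the YAML above saved as `config.yml`, the merge should be reproducible with mergekit's `mergekit-yaml config.yml ./finch` command.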

Evaluation Results

Finch's performance on the Open LLM Leaderboard:

| Metric | Value |
|--------|------:|
| Avg. | 73.78 |
| AI2 Reasoning Challenge (25-Shot) | 71.59 |
| HellaSwag (10-Shot) | 87.87 |
| MMLU (5-Shot) | 64.81 |
| TruthfulQA (0-shot) | 67.96 |
| Winogrande (5-shot) | 84.14 |
| GSM8k (5-shot) | 66.34 |

Detailed results: https://huggingface.co/datasets/open-llm-leaderboard/details_antiven0m__finch
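The details repository stores each evaluated task as a separate dataset config. A minimal sketch using the `datasets` library (the config name below is illustrative; list the available ones first):

```python
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_antiven0m__finch"

# Each task (ARC, HellaSwag, MMLU, ...) is a separate config; list them first.
print(get_dataset_config_names(repo))

# Then load one task's per-sample results from the most recent run.
details = load_dataset(repo, "harness_gsm8k_5", split="latest")
print(details[0])
```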
