
Mistral-Small-Drummer-22B

mistralai/Mistral-Small-Instruct-2409 fine-tuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.

Method

ORPO-tuned on 2x A40 GPUs via RunPod for 1 epoch, with the following hyperparameters (a minimal training sketch follows the list):

learning_rate=4e-6,
lr_scheduler_type="linear",
beta=0.1,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
gradient_accumulation_steps=8,
optim="paged_adamw_8bit",
num_train_epochs=1,
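
As a point of reference, here is a minimal sketch of how these settings could be wired into TRL's ORPOTrainer. The actual training script is not included in this card, so the output directory, dtype handling, and single-dataset loading below are assumptions, not the published recipe.

```python
# Minimal sketch, assuming TRL's ORPOTrainer; paths and dataset handling are illustrative.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "mistralai/Mistral-Small-Instruct-2409"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.bfloat16, device_map="auto"
)

# The card lists two DPO-style datasets; only one is loaded here for brevity.
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

args = ORPOConfig(
    learning_rate=4e-6,
    lr_scheduler_type="linear",
    beta=0.1,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,
    optim="paged_adamw_8bit",
    num_train_epochs=1,
    output_dir="./mistral-small-drummer-22b",  # hypothetical path
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,  # recent trl versions name this argument processing_class
)
trainer.train()
```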

The dataset was prepared using the Mistral-Small instruct format; one possible preprocessing step is sketched below.
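
The exact preprocessing script is not published, so the following is only a plausible sketch: it assumes the DPO datasets expose prompt/chosen/rejected columns and uses the tokenizer's built-in chat template to render the prompt in Mistral's [INST] format.

```python
# Sketch, assuming a DPO-style dataset with "prompt", "chosen", "rejected" columns.
def format_for_mistral(example, tokenizer):
    # Render the user turn with Mistral-Small's chat template: <s>[INST] ... [/INST]
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": example["prompt"]}],
        tokenize=False,
        add_generation_prompt=True,
    )
    return {"prompt": prompt, "chosen": example["chosen"], "rejected": example["rejected"]}

dataset = dataset.map(lambda ex: format_for_mistral(ex, tokenizer))
```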

Reference: Fine-tune Llama 3 with ORPO

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric               Value
Avg.                 29.45
IFEval (0-shot)      63.31
BBH (3-shot)         40.12
MATH Lvl 5 (4-shot)  16.69
GPQA (0-shot)        12.42
MuSR (0-shot)         9.80
MMLU-PRO (5-shot)    34.39
Model size: 22.2B params (Safetensors)
Tensor type: BF16

Model tree for nbeerbower/Mistral-Small-Drummer-22B

Finetuned (22): this model
Merges: 3 models
Quantizations: 9 models

Datasets used to train nbeerbower/Mistral-Small-Drummer-22B: jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo
