
EXL2 quant (measurement.json in the main branch).

Check the repository's revisions for the individual quants.
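Below is a minimal sketch (not from the card) of pulling one quant revision with huggingface_hub; the revision and local directory names are placeholders.

```python
# Hypothetical example: download a specific quant revision of this EXL2 repo.
# The main branch carries measurement.json; swap in the branch of the quant
# (bpw variant) you actually want.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="NewEden/nbeerbower_Mistral-Small-Drummer-22B-exl2",
    revision="main",                           # placeholder -- pick a quant branch
    local_dir="Mistral-Small-Drummer-22B-exl2",
)
```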



Mistral-Small-Drummer-22B

A finetune of mistralai/Mistral-Small-Instruct-2409 on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.

Method

ORPO-tuned for 1 epoch on 2x A40 GPUs via RunPod.

learning_rate=4e-6,
lr_scheduler_type="linear",
beta=0.1,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
gradient_accumulation_steps=8,
optim="paged_adamw_8bit",
num_train_epochs=1,

The dataset was prepared using the Mistral-Small Instruct format.
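For concreteness, here is a hedged sketch of how the settings above could be wired together with trl's ORPOTrainer. The dataset split and column names (train, prompt/chosen/rejected), the output_dir, and the exact trainer keyword arguments are assumptions, not details from the card.

```python
# Sketch of the ORPO run described above, under the stated assumptions.
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "mistralai/Mistral-Small-Instruct-2409"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="bfloat16", device_map="auto")

# Hyperparameters as listed above.
orpo_config = ORPOConfig(
    learning_rate=4e-6,
    lr_scheduler_type="linear",
    beta=0.1,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,
    optim="paged_adamw_8bit",
    num_train_epochs=1,
    output_dir="Mistral-Small-Drummer-22B",  # assumed output path
)

def to_mistral_format(row):
    # Wrap the prompt in the Mistral-Small Instruct chat template; chosen and
    # rejected stay as plain completions, which is what ORPOTrainer expects.
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": row["prompt"]}],
        tokenize=False,
        add_generation_prompt=True,
    )
    return {"prompt": prompt, "chosen": row["chosen"], "rejected": row["rejected"]}

def prep(name):
    # Assumes a "train" split with prompt/chosen/rejected columns.
    ds = load_dataset(name, split="train")
    return ds.map(to_mistral_format, remove_columns=ds.column_names)

dataset = concatenate_datasets([
    prep("jondurbin/gutenberg-dpo-v0.1"),
    prep("nbeerbower/gutenberg2-dpo"),
])

trainer = ORPOTrainer(
    model=model,
    args=orpo_config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # newer trl releases call this processing_class
)
trainer.train()
```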

See also: Fine-tune Llama 3 with ORPO.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 29.45 |
| IFEval (0-shot)     | 63.31 |
| BBH (3-shot)        | 40.12 |
| MATH Lvl 5 (4-shot) | 16.69 |
| GPQA (0-shot)       | 12.42 |
| MuSR (0-shot)       |  9.80 |
| MMLU-PRO (5-shot)   | 34.39 |
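To reproduce these numbers locally, something along the lines of the sketch below with lm-evaluation-harness should work; the repo id, task names, dtype, and batch size are assumptions and may need adjusting for your harness version.

```python
# Hedged sketch: run the Open LLM Leaderboard v2 task groups locally with
# lm-evaluation-harness. Task names follow recent harness releases and may
# differ by version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=nbeerbower/Mistral-Small-Drummer-22B,dtype=bfloat16",
    tasks=[
        "leaderboard_ifeval",
        "leaderboard_bbh",
        "leaderboard_math_hard",
        "leaderboard_gpqa",
        "leaderboard_musr",
        "leaderboard_mmlu_pro",
    ],
    batch_size=1,
)
print(results["results"])
```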