A DPO finetune of abacusai/MM-Orc-Vic-bagel-34b-c1000 (https://huggingface.co/abacusai/MM-Orc-Vic-bagel-34b-c1000) on the Bagel DPO dataset.
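
The card does not include the training script, but a preference-tuning run of this kind can be sketched with TRL's DPOTrainer. The snippet below is an assumption about the setup, not the authors' actual code: the dataset id, hyperparameters, and precision are placeholders.

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "abacusai/MM-Orc-Vic-bagel-34b-c1000"  # base model named above

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Placeholder id: the card only says "the Bagel DPO dataset". A DPO dataset
# needs "prompt", "chosen", and "rejected" columns.
pairs = load_dataset("your-org/bagel-dpo-pairs", split="train")

args = DPOConfig(
    output_dir="mm-ov-bagel-dpo-34b",
    beta=0.1,                      # strength of the KL penalty toward the reference model
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=5e-7,
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,                # TRL clones the policy to serve as the frozen reference
    args=args,
    train_dataset=pairs,
    processing_class=tokenizer,    # older TRL releases take tokenizer= instead
)
trainer.train()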

Evaluation Results

Benchmarks reported: Average, ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, GSM8K
Model details: 34.4B parameters, FP16 weights, Safetensors format
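
Given the 34.4B-parameter FP16 weights, the checkpoint can be loaded with the transformers library as sketched below. This is a minimal assumed usage example, not taken from the card; the prompt is a placeholder, as no chat template is documented here.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abacusai/MM-OV-bagel-DPO-34b-c1000-250"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # matches the FP16 tensor type listed above
    device_map="auto",           # shard across available GPUs
)

# Placeholder prompt; adjust to whatever prompt format the base model expects.
prompt = "Explain direct preference optimization in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))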

Dataset used to train abacusai/MM-OV-bagel-DPO-34b-c1000-250: the Bagel DPO dataset.