Update @ 2024.03.13

T3Q-Mistral-Orca-Math-DPO

This model is a DPO (Direct Preference Optimization) fine-tuned version of liminerity/M7-7b.

Model Developers: Chihoon Lee (chlee10), T3Q
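
A minimal inference sketch, not part of the original card: it assumes the model is hosted on the Hugging Face Hub as chihoonlee10/T3Q-Mistral-Orca-Math-DPO and loads with the standard transformers causal-LM API; the prompt below is an arbitrary example.

```python
# Hedged usage sketch: standard transformers causal-LM loading and generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chihoonlee10/T3Q-Mistral-Orca-Math-DPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # weights are published in FP16
    device_map="auto",
)

# Arbitrary example prompt; the card does not specify a prompt template.
prompt = "Solve step by step: what is 12 * 17?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```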

Model size: 7.24B params (Safetensors, FP16)

Finetuned from: liminerity/M7-7b
