
## Introduction

The model is trained with Masked Thought Fine-Tuning (MFT), a simple variant of standard Supervised Fine-Tuning (SFT): during training, a fraction of the input tokens inside the reasoning steps is randomly masked, while the training targets stay unchanged. You can refer to our code and paper below.
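For intuition, here is a minimal sketch of the masking step. It assumes MFT replaces reasoning-span tokens in the input with a mask token at some probability while leaving the labels untouched; the masking ratio `p`, the span boundaries, and the function name are illustrative placeholders, not the exact setup used for this model.

```python
import random

def mask_thought_tokens(input_ids, thought_start, thought_end, mask_id, p=0.2):
    """Randomly replace reasoning-span tokens in the *input* with a mask token.

    Labels are left unchanged, so the loss is still computed against the
    original tokens: the model must predict them without seeing all of them.
    p=0.2 is an illustrative masking ratio, not the one used for this model.
    """
    masked = list(input_ids)
    for i in range(thought_start, thought_end):
        if random.random() < p:
            masked[i] = mask_id
    return masked
```

In practice a step like this would run on the fly in the data collator, so each epoch sees a different mask pattern over the same examples.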

## Links

## Results

We evaluate the model with the hybrid decoding scripts provided in the MAmmoTH repository, which attempt a program-of-thought solution first and fall back to chain-of-thought; a sketch of that fallback logic follows the table.

| Model | GSM8K | MATH |
| --- | --- | --- |
| adalaw/MAmmoTH-7B-Mistral-MFT | 77.10 | 41.2 |
| TIGER-Lab/MAmmoTH-7B-Mistral-SFT | 75.00 | 40.0 |
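As a rough illustration of hybrid decoding (based on the MAmmoTH paper's description, not this repository's exact scripts): generate a program-of-thought answer, execute it, and fall back to chain-of-thought if execution fails. Here `generate` is a hypothetical stand-in for the model's generation call, and the convention that programs store their result in an `ans` variable is an assumption.

```python
import re

def run_program(code: str):
    """Execute a generated Python program and return its `ans` variable
    (assumed PoT convention; real evaluation scripts sandbox this step)."""
    scope = {}
    exec(code, scope)
    return scope["ans"]

def hybrid_decode(generate, question: str):
    """Try program-of-thought first; fall back to chain-of-thought if the
    program fails. `generate(question, mode)` is a hypothetical wrapper
    around the model's generation call."""
    try:
        return run_program(generate(question, mode="pot"))
    except Exception:
        cot = generate(question, mode="cot")
        match = re.search(r"answer is\s*\$?([-\d.,/]+)", cot, re.IGNORECASE)
        return match.group(1) if match else cot
```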