## Introduction
This model is trained with Masked Thought Fine-Tuning (MFT), a simple variant of standard Supervised Fine-Tuning (SFT) in which a fraction of the tokens in the chain-of-thought portion of each training example is randomly masked. For details, see our code and paper linked below.
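The core data transformation can be sketched as follows. This is a minimal illustrative sketch, not the exact training implementation: the mask token string, the mask ratio, and the function name are assumptions for demonstration purposes.

```python
import random

MASK_TOKEN = "[MASK]"  # placeholder; the actual implementation may use a different special token


def mask_thought_tokens(thought_tokens, mask_ratio=0.4, seed=0):
    """Randomly replace a fraction of the thought (reasoning) tokens with a
    mask token before fine-tuning. The training target remains the original,
    unmasked sequence; only the input side of the thought is perturbed, which
    is the key difference from standard SFT. The mask_ratio value here is an
    illustrative choice, not the paper's hyperparameter."""
    rng = random.Random(seed)
    return [
        MASK_TOKEN if rng.random() < mask_ratio else tok
        for tok in thought_tokens
    ]


# Example: perturb part of a chain-of-thought before it is fed back as input.
thought = "Natalia sold 48 clips in April and half as many in May".split()
print(mask_thought_tokens(thought, mask_ratio=0.4, seed=0))
```

The masked sequence keeps its original length and token positions, so the loss can still be computed against the unmasked target token-by-token.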
## Links
## Results

We evaluate the model with the test scripts provided in the MetaMath repository. Scores are accuracy (%).

| Model | GSM8K | MATH |
|---|---|---|
| adalaw/MetaMath-Mistral-7B-MFT | 79.90 | 29.0 |
| meta-math/MetaMath-Mistral-7B-SFT | 77.70 | 28.2 |