---
license: apache-2.0
---
See our paper at: https://arxiv.org/abs/2310.05506
## Model Details
MuggleMATH is fully fine-tuned on the AugGSM8K and AugMATH datasets and is based on the LLaMA-2 models.
## Model Usage
Prompting template:

```
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response:"
```

We recommend using vLLM to accelerate inference.
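The snippet below is a minimal sketch of batched inference with vLLM's offline `LLM` API using the prompting template above. The model path and the example question are placeholders (assumptions, not part of this card); point `model=` at whichever MuggleMATH checkpoint you have downloaded.

```python
from vllm import LLM, SamplingParams

# Prompting template from this card.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

# Example GSM8K-style question (illustrative only).
questions = [
    "Natalia sold clips to 48 of her friends in April, and then she sold half "
    "as many clips in May. How many clips did Natalia sell altogether in April and May?",
]
prompts = [PROMPT_TEMPLATE.format(instruction=q) for q in questions]

# Placeholder path: replace with the local or Hub location of a MuggleMATH checkpoint.
llm = LLM(model="path/to/MuggleMATH-7B")
sampling_params = SamplingParams(temperature=0.0, max_tokens=512)

# Generate completions for all prompts in one batched call.
for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)
```

Greedy decoding (`temperature=0.0`) is a common choice for math-reasoning evaluation; adjust the sampling parameters as needed.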
## Experiment
Accuracy (%) on the GSM8K and MATH test sets:

| Model | GSM8K | MATH |
|---|---|---|
| MuggleMATH-7B | 69.8 | 25.8 |
| MuggleMATH-13B | 74.3 | 30.7 |
| MuggleMATH-70B | 82.5 | 42.1 |
## Citation
```bibtex
@misc{li2023query,
      title={Query and Response Augmentation Cannot Help Out-of-domain Math Reasoning Generalization},
      author={Chengpeng Li and Zheng Yuan and Hongyi Yuan and Guanting Dong and Keming Lu and Jiancan Wu and Chuanqi Tan and Xiang Wang and Chang Zhou},
      journal={arXiv preprint arXiv:2310.05506},
      year={2023}
}
```