
ReasonEval-34B Model Card

Model Description

ReasonEval-34B is a 34B-parameter decoder-only language model fine-tuned from llemma_34b. Given a mathematical problem and a solution, ReasonEval-34B assesses the problem-solving process step by step from the following perspectives:

  • Validity: The step contains no mistakes in calculation or logic.
  • Redundancy: The step lacks utility in solving the problem but is still valid.

With ReasonEval, you can

  • ๐Ÿ“ quantify the quality of reasoning steps free of human or close-source models.

  • 🤖 identify potentially invalid or redundant steps in solutions, even those that reach the correct final answer.

  • ๐Ÿ› ๏ธ select high-quality training data for downstream tasks (e.g., fine-tuning).

Model Details

For detailed instructions on how to use the ReasonEval-34B model, visit our GitHub repository at https://github.com/GAIR-NLP/ReasonEval.
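
As a rough illustration, the sketch below only prepares inputs for scoring: it loads the tokenizer and joins a question with its numbered solution steps into a single prompt. The Hugging Face repo id, the prompt layout, and the step delimiter are assumptions here; the exact scoring class and call are provided in the GitHub repository.

# Hypothetical usage sketch. The real scoring class, prompt format, and model id
# are defined in https://github.com/GAIR-NLP/ReasonEval; the names below are
# illustrative assumptions, not the exact API.
from transformers import AutoTokenizer

model_id = "GAIR/ReasonEval-34B"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)

question = "What is 1 + 2 + 3?"
steps = [
    "1 + 2 = 3.",
    "3 + 3 = 6.",
    "So the answer is 6.",
]

# ReasonEval assesses each step, so the question and the numbered steps are
# combined into one input; the exact delimiters are defined in the repository.
prompt = question + "\n" + "\n".join(
    f"Step {i}: {s}" for i, s in enumerate(steps, start=1)
)
inputs = tokenizer(prompt, return_tensors="pt")

# The repository's custom model class consumes these inputs and returns a
# validity score and a redundancy score for every step; see the GitHub README
# for the exact loading and scoring calls.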

How to Cite

@article{xia2024evaluating,
        title={Evaluating Mathematical Reasoning Beyond Accuracy}, 
        author={Xia, Shijie and Li, Xuefeng and Liu, Yixin and Wu, Tongshuang and Liu, Pengfei},
        journal={arXiv preprint arXiv:2404.05692},
        year={2024},
}