Rethinking Autoformalization
Models for "Rethinking and Improving Autoformalization: Towards a Faithful Metric and a Dependency Retrieval-based Approach"
Please refer to the GitHub repo and Paper for more details.
Retrieval performance of the dependency retrievers (DR) compared with a BM25 baseline:

| Bench | Format | Method | Recall@5 | Precision@5 |
|---|---|---|---|---|
| ProofNet | F | BM25 | 0.16% | 0.11% |
| ProofNet | F | DR | 35.52% | 22.89% |
| ProofNet | F+IF | BM25 | 0.00% | 0.00% |
| ProofNet | F+IF | DR | 32.47% | 20.32% |
| Con-NF | F | BM25 | 4.41% | 2.37% |
| Con-NF | F | DR | 24.32% | 14.05% |
| Con-NF | F+IF | BM25 | 9.86% | 6.95% |
| Con-NF | F+IF | DR | 27.91% | 17.57% |

F: inputs formatted with formal declarations only; F+IF: inputs formatted with both formal declarations and informal descriptions.
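Recall@5 and Precision@5 above follow the standard top-k retrieval definitions. The sketch below is a minimal, generic illustration of how such scores could be computed from a ranked list of retrieved dependencies and a gold dependency set; the declaration names are made up and this is not the paper's evaluation script.

```python
# Hedged sketch: standard Recall@k / Precision@k over a ranked dependency list.
# The example data below is illustrative, not taken from the paper or benchmarks.

def recall_precision_at_k(retrieved: list[str], gold: set[str], k: int = 5) -> tuple[float, float]:
    """Compute Recall@k and Precision@k for a single query."""
    top_k = retrieved[:k]
    hits = sum(1 for dep in top_k if dep in gold)
    recall = hits / len(gold) if gold else 0.0
    precision = hits / k
    return recall, precision

# Toy example with made-up Mathlib-style declaration names.
retrieved = ["Nat.add_comm", "Nat.succ_le_succ", "List.length_append", "Nat.le_refl", "Nat.add_assoc"]
gold = {"Nat.add_comm", "Nat.add_assoc"}
r, p = recall_precision_at_k(retrieved, gold, k=5)
print(f"Recall@5 = {r:.2%}, Precision@5 = {p:.2%}")  # Recall@5 = 100.00%, Precision@5 = 40.00%
```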
- purewhite42/dependency_retriever_f: dense dependency retriever whose inputs are formatted using only formal declarations (F); SFT-finetuned from BAAI/bge-m3.
- purewhite42/dependency_retriever_f_if: dense dependency retriever whose inputs are formatted using both formal declarations and informal descriptions (F+IF); SFT-finetuned from BAAI/bge-m3.
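A minimal usage sketch follows, assuming the retrievers can be loaded through the FlagEmbedding BGE-M3 interface (they are fine-tuned from BAAI/bge-m3). The query string, candidate declarations, and input formatting are illustrative assumptions, not the repo's official retrieval pipeline.

```python
# Hedged sketch: dense dependency retrieval with a BGE-M3-style encoder.
# Assumption: purewhite42/dependency_retriever_f loads via FlagEmbedding's BGEM3FlagModel;
# the formatting of the formal declarations below is illustrative, not the official template.
import numpy as np
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("purewhite42/dependency_retriever_f", use_fp16=True)

# Query: a formal statement to be checked/autoformalized; corpus: candidate Mathlib declarations.
query = "theorem add_comm' (a b : ℕ) : a + b = b + a"
corpus = [
    "theorem Nat.add_comm (n m : ℕ) : n + m = m + n",
    "theorem Nat.mul_comm (n m : ℕ) : n * m = m * n",
    "theorem List.length_append (l₁ l₂ : List α) : (l₁ ++ l₂).length = l₁.length + l₂.length",
]

q_emb = model.encode([query])["dense_vecs"]   # shape (1, dim)
c_emb = model.encode(corpus)["dense_vecs"]    # shape (len(corpus), dim)

scores = (q_emb @ c_emb.T)[0]                 # dense inner-product similarity
top5 = np.argsort(-scores)[:5]
for i in top5:
    print(f"{scores[i]:.4f}  {corpus[i]}")
```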
If you find our work useful in your research, please cite:
@inproceedings{liu2025rethinking,
  title={Rethinking and Improving Autoformalization: Towards a Faithful Metric and a Dependency Retrieval-based Approach},
  author={Qi Liu and Xinhao Zheng and Xudong Lu and Qinxiang Cao and Junchi Yan},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=hUb2At2DsQ}
}
This project is released under the Apache 2.0 license. Please see the LICENSE file for more information.
Feel free to discuss the paper, data, or code with us via GitHub issues or email!