This is a CodeLlama-7b model fine-tuned on the [LeanDojo Benchmark](https://github.com/lean-dojo/LeanDojo).

Its performance is comparable to that reported in [Temperature-scaled large language models for Lean proofstep prediction](https://mathai2023.github.io/papers/25.pdf), achieving around 57.7 pass@1 accuracy on the "random" split of the LeanDojo Benchmark.

Training examples are formatted as follows:
``` | |
GOAL $(tactic state) PROOFSTEP $(tactic) | |
``` | |
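
For example, a single training line might look like the following (an illustrative example with a hypothetical tactic state and tactic, not an actual line from the dataset):

```
GOAL n : ℕ ⊢ n + 0 = n PROOFSTEP simp
```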
For inference, prompt the model with the following line and let it generate the tactic as the continuation:
``` | |
GOAL $(tactic state) PROOFSTEP | |
``` |
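
Below is a minimal sketch of running this prompt with Hugging Face `transformers`, assuming the checkpoint loads as a standard causal LM. The repository id is a placeholder, and the tactic state and generation settings are illustrative rather than those used for the reported results.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with the actual Hugging Face repo id of this checkpoint.
model_id = "<this-model-repo-id>"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Tactic state as shown by Lean / LeanDojo (illustrative example).
tactic_state = "n : ℕ ⊢ n + 0 = n"
prompt = f"GOAL {tactic_state} PROOFSTEP"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# The continuation after "PROOFSTEP" is the predicted tactic.
predicted_tactic = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(predicted_tactic)
```

Sampling several candidate tactics (e.g. with `do_sample=True` and a nonzero temperature) and filtering them through LeanDojo is a common way to use such a model inside a proof search loop.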