---
license: bsd-3-clause
language:
- en
tags:
- croissant
size_categories:
- n<1K
configs:
  - config_name: default
    data_files:
      - split: test
        path: codeprops_bench_ps.jsonl
---
# Getting Started
First install [Lean 4](https://leanprover-community.github.io/get_started.html). Then clone this repo:

`git clone --recurse-submodules https://huggingface.co/datasets/elohn/miniCodeProps` 

The outer LeanSrc folder is a [Lean Project](https://leanprover-community.github.io/install/project.html). After following the Lean 4 documentation's instructions for working on an existing Lean project, you can open that folder directly in VSCode and check that the proofs in `LeanSrc/Sorts.lean` type check.
The main miniCodeProps folder handles extracting the benchmark and computing baselines. If anything fails when building Lean or running `lake exe cache get` from within LeanSrc, the [Zulip Chat](https://leanprover.zulipchat.com/) is the best resource for troubleshooting.

After cloning the repo, you will need to install [Lean REPL](https://github.com/leanprover-community/repl). By default, our scripts expect the `repl` folder to be directly inside the miniCodeProps folder. Run `lake build` from within the `repl` folder.

The `extract.py` script is used only to create the JSON-formatted benchmark.
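
If you only want the benchmark data without the Lean toolchain, the entries in `codeprops_bench_ps.jsonl` can be inspected directly. Below is a minimal sketch assuming standard JSON Lines formatting; the exact field names are produced by `extract.py`, so the snippet just reports whatever keys each entry carries. The `datasets` library should also be able to load the `test` split declared in the card header, e.g. `load_dataset("elohn/miniCodeProps", split="test")`.

```python
import json

# Read the benchmark: each non-empty line of the .jsonl file is one JSON object.
with open("codeprops_bench_ps.jsonl") as f:
    entries = [json.loads(line) for line in f if line.strip()]

print(f"{len(entries)} benchmark entries")
print("fields:", sorted(entries[0].keys()))  # field names are defined by extract.py
```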

The `baseline.py` script contains the code we used to obtain our baseline results. It shows how to interact with Lean REPL programmatically, although some interactions are still somewhat buggy: occasionally the repl sends an extra newline or an oddly formatted message, and our script handles this by restarting the repl.
We ran our baselines using [LLMStep](https://github.com/wellecks/llmstep), but the code also provides a natural place to plug in your own function for generating tactics given the goal and file context (see `get_tactics_llmstep` in `baseline.py`). We [modified the LLMStep server](https://github.com/evanlohn/llmstep) to return the average log-probability of each suggestion, which we use to implement best-first search.
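
For orientation, the snippet below sketches the kind of programmatic interaction `baseline.py` performs with Lean REPL: commands are written to the repl's stdin as JSON and responses are read back as JSON. This is an illustrative sketch, not the code in `baseline.py`; the exact launch command and message framing can differ between repl versions, which is one source of the flakiness mentioned above.

```python
import json
import subprocess

# Launch Lean REPL from the `repl` folder inside miniCodeProps (built via `lake build`).
proc = subprocess.Popen(
    ["lake", "exe", "repl"],
    cwd="repl",
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

def send(command: dict) -> dict:
    """Send one JSON command and read back one JSON response block.
    Assumes commands and responses are separated by blank lines."""
    proc.stdin.write(json.dumps(command) + "\n\n")
    proc.stdin.flush()
    lines = []
    while True:
        line = proc.stdout.readline()
        if not line:
            raise RuntimeError("Lean REPL exited unexpectedly")
        if line.strip():
            lines.append(line)
        elif lines:  # a blank line after output ends the response block
            break
    return json.loads("".join(lines))

# Elaborate a declaration in a fresh environment; the response includes an `env` id
# that later commands can build on.
resp = send({"cmd": "theorem one_add_one : 1 + 1 = 2 := by rfl"})
print(resp)  # when the output is malformed, baseline.py simply restarts the repl
```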

# Reproducing Baselines

First, ensure that you have installed Lean and Lean REPL as described above. Before running `baseline.py` with any arguments, set your OS at the top of `utils.py`; at the moment we support interacting with Lean on macOS and Ubuntu 20.04.

## Next-Step Baselines
Our experiments were run on an A100 GPU. Smaller GPUs may not be able to run Llemma7B, but will likely work with Pythia and ntp-context.

Clone [our fork of LLMStep](https://github.com/evanlohn/llmstep). After following the LLMStep setup instructions, start the server for the model you want to evaluate:
- For Pythia2.8B, run `python3 python/server_vllm.py` (or, if CPU-bound, run `python3 python/server.py`)
- For Llemma7B, run `python3 python/server_llemma.py`
- For ntp-context-1.3B, run `python3 python/server_context.py`

In another terminal, run `python baseline.py --bench_type nextstep`
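
Conceptually, the next-step baseline performs a best-first search: each LLMStep suggestion comes with an average log-probability, and the search always expands the open proof state with the best cumulative score. The sketch below is a simplified illustration of that idea, not the actual implementation; the real search loop, REPL bookkeeping, and the `get_tactics_llmstep` interface live in `baseline.py`.

```python
import heapq

def best_first_search(initial_state, get_tactics, apply_tactic, is_solved, max_expansions=100):
    """Best-first search over proof states, scored by the cumulative
    average log-probability of the tactics used to reach them (higher is better).

    get_tactics(state)       -> list of (tactic, avg_logprob) suggestions
    apply_tactic(state, tac) -> new proof state, or None if the tactic fails
    is_solved(state)         -> True when no goals remain
    """
    # heapq is a min-heap, so scores are negated to pop the best state first.
    frontier = [(0.0, 0, initial_state, [])]  # (neg_score, tiebreak, state, proof)
    tiebreak = 0
    for _ in range(max_expansions):
        if not frontier:
            break
        neg_score, _, state, proof = heapq.heappop(frontier)
        if is_solved(state):
            return proof
        for tactic, avg_logprob in get_tactics(state):
            new_state = apply_tactic(state, tactic)
            if new_state is None:
                continue  # tactic failed to elaborate; discard this branch
            tiebreak += 1
            heapq.heappush(
                frontier,
                (neg_score - avg_logprob, tiebreak, new_state, proof + [tactic]),
            )
    return None  # no proof found within the budget
```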

## Full-Proof Baseline
Run `export OPENAI_API_KEY=<your key here>`.
Then, simply run 
`python3 baseline.py`
You can also specify which OpenAI LLM to use for proof generation via
`python3 baseline.py --gpt_model <your model name>`
although our tests only used gpt-4-turbo.
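For example, to use the same model as in our experiments:

`python3 baseline.py --gpt_model gpt-4-turbo`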