
Usage:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Prompt template: the model expects a Lean 4 proof state in the chat format
# below and responds with reasoning followed by a predicted next tactic.
question_template = "<|im_start|>user\nMy LEAN 4 state is:\n```{state}```\nPlease write down the reasoning that leads to the possible next tactic and then predict the tactic to help me prove the theorem.<|im_end|>\n<|im_start|>assistant\n"

model_name = "ScalableMath/Lean-STaR-plus"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Pretty-printed Lean 4 goal state (a Bernoulli-style inequality).
state = "x : \u211d\nn : \u2115\nh\u2080 : -1 < x\nh\u2081 : 0 < n\n\u22a2 1 + \u2191n * x \u2264 (1 + x) ^ n"
question = question_template.format(state=state)

# Tokenize the prompt and generate the reasoning plus the next tactic.
input_tensor = torch.tensor([tokenizer.encode(question)])
outputs = model.generate(input_tensor.to(model.device), max_new_tokens=500)

result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
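
If you only need the predicted tactic, for example to feed it back into a Lean 4 proof-search loop, you can split the decoded text on the `# Next Tactic` header that the completion uses (see the example results below). The `extract_tactic` helper here is a hypothetical convenience, not part of this repo; it is a minimal sketch that assumes the completion follows that header layout.

```python
# Hypothetical helper (not part of this repo): pull out just the predicted
# tactic, assuming the completion uses the "# Next Tactic" header shown in
# the example results below.
def extract_tactic(decoded: str) -> str:
    marker = "# Next Tactic"
    if marker in decoded:
        return decoded.split(marker, 1)[1].strip()
    return decoded.strip()

print(extract_tactic(result))
```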

Example Results:

```
# State

x : ℝ
n : ℕ
h₀ : -1 < x
h₁ : 0 < n
⊢ 1 + ↑n * x ≤ (1 + x) ^ n

# Reasoning

To prove the inequality involving the binomial expansion of `(1 + x)^n`, we start by considering the binomial expansion of `1 + x` raised to the power `n`. This expansion will allow us to compare the left-hand side and the right-hand side of the inequality.

# Next Tactic

have h₂ : x = -1 + (x + 1) := by simp
```
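
The state above is a Bernoulli-style inequality. The sketch below shows, under stated assumptions, where the predicted tactic would sit in a Lean 4 proof: the theorem name `bernoulli_like` is invented for illustration, `import Mathlib` is assumed for the real-number and cast notation, and the remaining goal is left as `sorry` because later steps would come from re-querying the model on the updated state.

```lean
import Mathlib

-- Hypothetical theorem statement reconstructed from the state shown above;
-- only the first tactic line is the model's prediction.
theorem bernoulli_like (x : ℝ) (n : ℕ) (h₀ : -1 < x) (h₁ : 0 < n) :
    1 + ↑n * x ≤ (1 + x) ^ n := by
  have h₂ : x = -1 + (x + 1) := by simp
  sorry -- subsequent tactics would be generated by feeding the new state back to the model
```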