lfqian committed
Commit 124b4e4 · verified · 1 Parent(s): 5aa9495

Update src/about.py

Files changed (1)
  1. src/about.py +2 -3
src/about.py CHANGED
@@ -31,9 +31,8 @@ The Fino1 Leaderboard evaluates the performance of various LLMs, including gener
31   # Which evaluations are you running? how can people reproduce what you have?
32   LLM_BENCHMARKS_TEXT = f"""
33   ## How it works
34 -
35 - ## Reproducibility
36 - To reproduce our results, here is the commands you can run:
37
38   """
39
 
 
31   # Which evaluations are you running? how can people reproduce what you have?
32   LLM_BENCHMARKS_TEXT = f"""
33   ## How it works
34 + We used the framework from https://github.com/lfqian/FinBen to do the inference.
35 + And evaluation method from https://github.com/yale-nlp/DocMath-Eval are used to evaluate the performance of all models.
36
37   """
38
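For readability, the state of the updated constant in `src/about.py` after this commit can be reconstructed from the context and `+` lines of the hunk above. This is a sketch assembled only from the diff shown here; the rest of the file is not reproduced.

```python
# Reconstruction of LLM_BENCHMARKS_TEXT as of commit 124b4e4,
# assembled from the context and "+" lines of the diff above.

# Which evaluations are you running? how can people reproduce what you have?
LLM_BENCHMARKS_TEXT = f"""
## How it works
We used the framework from https://github.com/lfqian/FinBen to do the inference.
And evaluation method from https://github.com/yale-nlp/DocMath-Eval are used to evaluate the performance of all models.

"""
```

Net effect of the commit: the empty "Reproducibility" stub is removed and the "How it works" section now names the inference and evaluation frameworks.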