Haihao committed
Commit 3cdf958
1 Parent(s): 12d1200

Update src/display/about.py

Files changed (1)
  1. src/display/about.py +5 -6
src/display/about.py CHANGED
@@ -38,7 +38,7 @@ We chose these benchmarks as they test a variety of reasoning and general knowle
 
 ---------------------------
 
-## REPRODUCIBILITY
+## RERODUCIBILITY
 To reproduce our results, here are the commands you can run, using [v0.4.2](https://github.com/EleutherAI/lm-evaluation-harness/tree/v0.4.2) of the Eleuther AI Harness:
 ```
 python main.py --model=hf-causal-experimental
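The command above is truncated at the hunk boundary and still uses the harness's pre-0.4 `main.py` entry point, even though the link targets v0.4.2, which exposes an `lm_eval` CLI and a Python API. A minimal sketch of a v0.4.2-style call, assuming the `hf` backend; the model id, task, and few-shot count are illustrative placeholders, not the leaderboard's exact settings:

```
# Minimal sketch against lm-evaluation-harness v0.4.2's Python API.
# Model id, task, and num_fewshot are placeholders, not the
# leaderboard's actual configuration.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # HF transformers backend in v0.4.x
    model_args="pretrained=facebook/opt-125m,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=8,
)
print(results["results"])  # per-task metrics
```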
@@ -71,7 +71,7 @@ Side note on the baseline scores:
 
 ---------------------------
 
-## RESSOURCES
+## RESOURCES
 
 ### Quantization
 To get more information about quantization, see:
@@ -79,10 +79,9 @@ To get more information about quantization, see:
 - 4 bits: [blog post](https://huggingface.co/blog/4bit-transformers-bitsandbytes), [paper](https://arxiv.org/abs/2305.14314)
 
 ### Other cool leaderboards:
+- [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
 - [LLM safety](https://huggingface.co/spaces/AI-Secure/llm-trustworthy-leaderboard)
 - [LLM performance](https://huggingface.co/spaces/optimum/llm-perf-leaderboard)
-- [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
-
 
 """
 
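As background for the 4-bit link in this hunk, a minimal sketch of loading a model with bitsandbytes NF4 quantization through transformers, following the linked blog post; the model id is a placeholder, and this is not the leaderboard's own evaluation path:

```
# Sketch: 4-bit NF4 loading via bitsandbytes, per the linked blog post.
# The model id is a placeholder.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4 (QLoRA paper)
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for compute
)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",
    quantization_config=bnb_config,
    device_map="auto",
)
```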
@@ -90,7 +89,7 @@ FAQ_TEXT = """
 
 ## SUBMISSIONS
 My model requires `trust_remote_code=True`, can I submit it?
-- *Yes, we want to support the newest models.*
+- *Yes, the leaderboard supports the model with `trust_remote_code=True` well.*
 
 How can I follow when my model is launched?
 - *You can look for its request file [here](https://huggingface.co/datasets/Intel/ld_requests) and follow the status evolution, or directly in the queues above the submit form.*
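For readers unsure what the flag in this FAQ entry implies, a short sketch of what `trust_remote_code=True` opts into at load time; the repo id is hypothetical:

```
# Sketch: `trust_remote_code=True` lets transformers execute the
# modeling code shipped inside the Hub repo. Repo id is hypothetical.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "some-org/custom-model",  # hypothetical repo with custom modeling code
    trust_remote_code=True,   # required for custom architectures
)
```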
@@ -182,7 +181,7 @@ The compute dtype will pass to `lm-eval` for the inference. Currently, we suppor
 CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
 CITATION_BUTTON_TEXT = r"""
 @software{auto-round,
-title = Intel® AutoRound,
+title = AutoRound,
 publisher = {Intel},
 url = {https://github.com/intel/auto-round}
 }
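One note on the snippet being edited: BibTeX field values need braces or quotes, so a bare `title = AutoRound,` will not compile. A well-formed version of the same entry might read:

```
@software{auto-round,
  title     = {AutoRound},
  publisher = {Intel},
  url       = {https://github.com/intel/auto-round}
}
```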
 