pingnieuk committed on
Commit fb9c5c3
1 Parent(s): 761cd5d

add datasets df

Files changed (1)
  1. src/display/about.py +13 -96
src/display/about.py CHANGED
@@ -15,6 +15,8 @@ As large language models (LLMs) get better at creating believable texts, address
 
  # How it works
  📈 We evaluate the models on 19 hallucination benchmarks, spanning open-ended to closed-ended generation, using the <a href="https://github.com/EleutherAI/lm-evaluation-harness" target="_blank">Eleuther AI Language Model Evaluation Harness</a>, a unified framework to test generative language models on a large number of different evaluation tasks.
+ """
+ LLM_BENCHMARKS_DETAILS = f"""
 
  ### Question Answering
  - <a href="https://aclanthology.org/P19-1612/" target="_blank">NQ Open</a> - an open-domain question-answering dataset whose questions can be answered using the contents of English Wikipedia. 64-shot setup.
@@ -54,9 +56,11 @@ As large language models (LLMs) get better at creating believable texts, address
  # Reproducibility
  To reproduce our results, here are the commands you can run, using [this script](https://huggingface.co/spaces/hallucinations-leaderboard/leaderboard/blob/main/backend-cli.py): `python backend-cli.py`.
 
- Alternatively, if you're interested in evaluating a specific task with a particular model, you can use the [EleutherAI Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness/):
- `python main.py --model=hf-causal-experimental --model_args="pretrained=<your_model>,parallelize=True,revision=<your_model_revision>"`
- ` --tasks=<task_list> --num_fewshot=<n_few_shot> --batch_size=auto --output_path=<output_path>` (Note that you may need to add tasks from [here](https://huggingface.co/spaces/hallucinations-leaderboard/leaderboard/tree/main/src/backend/tasks) to [this folder](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463/lm_eval/tasks))
+ Alternatively, if you're interested in evaluating a specific task with a particular model, you can use [this version](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463) of the Eleuther AI Harness:
+ `python main.py --model=hf-causal-experimental --model_args="pretrained=<your_model>,revision=<your_model_revision>"`
+ ` --tasks=<task_list> --num_fewshot=<n_few_shot> --batch_size=1 --output_path=<output_path>` (Note that you may need to add tasks from [here](https://huggingface.co/spaces/hallucinations-leaderboard/leaderboard/tree/main/src/backend/tasks) to [this folder](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463/lm_eval/tasks).)
+ 
+ The total batch size we get for models which fit on one A100 node is 8 (8 GPUs * 1). If you don't use parallelism, adapt your batch size to fit. You can expect results to vary slightly across batch sizes because of padding.
 
  The tasks and few shots parameters are:
 
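The same run can also be launched through the harness's Python API rather than the CLI. Below is a minimal sketch, assuming the pinned harness revision above (the v0.3.x line, where `lm_eval.evaluator.simple_evaluate` accepts these arguments); the model, task, and `limit` values are illustrative.

```python
# Minimal sketch of the CLI call above, via the harness's Python API.
# Assumes the pinned lm-evaluation-harness revision (v0.3.x line);
# model, task, and limit values are illustrative.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal-experimental",
    model_args="pretrained=gpt2,revision=main",  # swap in your model and revision
    tasks=["nq_open"],  # leaderboard tasks may first need copying into lm_eval/tasks (see note above)
    num_fewshot=64,     # NQ Open uses a 64-shot setup
    batch_size=1,
    limit=10,           # small cap for a quick smoke test; drop it for full runs
)
print(results["results"])
```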
@@ -86,86 +90,24 @@ For all these evaluations, a higher score is a better score.
  - {ModelType.PT.to_str(" : ")} model: new, base models, trained on a given corpus
  - {ModelType.FT.to_str(" : ")} model: pretrained models fine-tuned on more data
  Specific fine-tune subcategories (more adapted to chat):
- - {ModelType.IFT.to_str(" : ")} model: instruction fine-tunes, which are model fine-tuned specifically on datasets of task instruction
- - {ModelType.RL.to_str(" : ")} model: reinforcement fine-tunes, which usually change the model loss a bit with an added policy.
+ - {ModelType.IFT.to_str(" : ")} model: instruction fine-tunes, which are models fine-tuned specifically on datasets of task instructions
+ - {ModelType.RL.to_str(" : ")} model: reinforcement fine-tunes, which usually change the model loss a bit with an added policy.
  If there is no icon, we have not uploaded the information on the model yet; feel free to open an issue with the model information!
  """
 
  FAQ_TEXT = """
  ---------------------------
  # FAQ
- Below are some common questions - if this FAQ does not answer your question, feel free to create a new issue, and we'll take care of it as soon as we can!
-
  ## 1) Submitting a model
- My model requires `trust_remote_code=True`, can I submit it?
- - *We only support models that have been integrated into a stable version of the `transformers` library for automatic submission, as we don't want to run possibly unsafe code on our cluster.*
- What about models of type X?
- - *We only support models that have been integrated into a stable version of the `transformers` library for automatic submission.*
- How can I follow the status of my model after submitting it?
- - *You can look for its request file [here](https://huggingface.co/datasets/hallucinations-leaderboard/requests) and follow the status evolution, or look directly in the queues above the submit form.*
- My model disappeared from all the queues, what happened?
- - *A model disappearing from all the queues usually means that there has been a failure. You can check if that is the case by looking for your model [here](https://huggingface.co/datasets/hallucinations-leaderboard/requests).*
- What causes an evaluation failure?
- - *Most of the failures we get come from problems in the submissions (corrupted files, config problems, wrong parameters selected for eval, ...), so we'd be grateful if you first make sure you have followed the steps in `About`. However, from time to time, we have failures on our side (hardware/node failures, problems with an update of our backend, connectivity problems ending up in the results not being saved, ...).*
- How can I report an evaluation failure?
- - *As we store the logs for all models, feel free to create an issue, **where you link to the requests file of your model** (look for it [here](https://huggingface.co/datasets/hallucinations-leaderboard/requests/tree/main)), so we can investigate! If the model failed due to a problem on our side, we'll relaunch it right away.*
- *Note: please do not re-upload your model under a different name; it will not help.*
-
+ XXX
  ## 2) Model results
- What kind of information can I find?
- - *Let's imagine you are interested in the Yi-34B results. You have access to 3 different information categories:*
- - *The [request file](https://huggingface.co/datasets/hallucinations-leaderboard/requests/blob/main/01-ai/Yi-34B_eval_request_False_bfloat16_Original.json): it gives you information about the status of the evaluation*
- - *The [aggregated results folder](https://huggingface.co/datasets/hallucinations-leaderboard/results/tree/main/01-ai/Yi-34B): it gives you aggregated scores, per experimental run*
- Why do models appear several times in the leaderboard?
- - *We run evaluations with user-selected precision and model commit. Sometimes, users submit specific models at different commits and at different precisions (for example, in float16 and 4bit to see how quantization affects performance). You should be able to verify this by displaying the `precision` and `model sha` columns in the display. If, however, you see models appearing several times with the same precision and commit hash, this is not normal.*
- What is this concept of "flagging"?
- - *This mechanism allows users to report models that have unfair performance on the leaderboard. This covers several categories: exceedingly good results on the leaderboard because the model was (maybe accidentally) trained on the evaluation data, models that are copies of other models not properly attributed, etc.*
- My model has been flagged improperly, what can I do?
- - *Every flagged model has a discussion associated with it - feel free to plead your case there, and we'll see what to do together with the community.*
-
+ XXX
  ## 3) Editing a submission
- I upgraded my model and want to re-submit it, how can I do that?
- - *Please open an issue with the precise name of your model, and we'll remove your model from the leaderboard so you can resubmit. You can also resubmit directly with the new commit hash!*
-
- ## 4) Other
- Why don't you display closed-source model scores?
- - *This is a leaderboard for Open models, both for philosophical reasons (openness is cool) and for practical reasons: we want to ensure that the results we display are accurate and reproducible, but 1) commercial closed models can change their API, thus rendering any scoring at a given time incorrect, and 2) we re-run everything on our cluster to ensure all models are run on the same setup, which you can't do for these models.*
- I have an issue accessing the leaderboard through the Gradio API
- - *Since this is not the recommended way to access the leaderboard, we won't provide support for this, but you can look at tools provided by the community for inspiration!*
+ XXX
  """
 
  EVALUATION_QUEUE_TEXT = """
- # Evaluation Queue for the Hallucinations Leaderboard
- Models added here will be automatically evaluated on the EIDF cluster.
-
- ## First steps before submitting a model
- ### 1) Make sure you can load your model and tokenizer using AutoClasses:
- ```python
- from transformers import AutoConfig, AutoModel, AutoTokenizer
- revision = "main"  # or the specific commit hash you want evaluated
- config = AutoConfig.from_pretrained("your model name", revision=revision)
- model = AutoModel.from_pretrained("your model name", revision=revision)
- tokenizer = AutoTokenizer.from_pretrained("your model name", revision=revision)
- ```
- If this step fails, follow the error messages to debug your model before submitting it. It's likely your model has been improperly uploaded.
- Note: make sure your model is public!
- Note: if your model needs `trust_remote_code=True`, we do not support this option yet, but we are working on adding it - stay posted!
-
- ### 2) Convert your model weights to [safetensors](https://huggingface.co/docs/safetensors/index)
- It's a new format for storing weights which is safer and faster to load and use. It will also allow us to add the number of parameters of your model to the `Extended Viewer`!
-
- ### 3) Make sure your model has an open license!
- This is a leaderboard for Open LLMs, and we'd love for as many people as possible to know they can use your model 🤗
-
- ### 4) Fill out your model card
- When we add extra information about models to the leaderboard, it will be automatically taken from the model card.
-
- ### 5) Select the correct precision
- Not all models are converted properly from `float16` to `bfloat16`, and selecting the wrong precision can sometimes cause evaluation errors (as loading a `bf16` model in `fp16` can sometimes generate NaNs, depending on the weight range).
-
- ## In case of model failure
- If your model is displayed in the `FAILED` category, its execution stopped.
- Make sure you have followed the above steps first.
- If everything is done, check that you can launch the EleutherAI Harness on your model locally, using the command in the About tab under "Reproducibility" with all arguments specified (you can add `--limit` to limit the number of examples per task).
+ XXX
  """
 
  CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
@@ -177,29 +119,4 @@ CITATION_BUTTON_TEXT = r"""
  publisher = {Hugging Face},
  howpublished = "\url{https://huggingface.co/spaces/hallucinations-leaderboard/leaderboard}"
  }
-
- @software{eval-harness,
-   author = {Gao, Leo and
-             Tow, Jonathan and
-             Biderman, Stella and
-             Black, Sid and
-             DiPofi, Anthony and
-             Foster, Charles and
-             Golding, Laurence and
-             Hsu, Jeffrey and
-             McDonell, Kyle and
-             Muennighoff, Niklas and
-             Phang, Jason and
-             Reynolds, Laria and
-             Tang, Eric and
-             Thite, Anish and
-             Wang, Ben and
-             Wang, Kevin and
-             Zou, Andy},
-   title = {A framework for few-shot language model evaluation},
-   month = sep,
-   year = 2021,
-   publisher = {Zenodo},
-   version = {v0.0.1},
-   doi = {10.
  """
  """