lvkaokao committed on
Commit
8c85cda
1 Parent(s): 4686e27

update description.

Files changed (1)
  1. src/display/about.py +5 -3
src/display/about.py CHANGED
@@ -99,7 +99,7 @@ My model disappeared from all the queues, what happened?
  - *A model disappearing from all the queues usually means that there has been a failure. You can check if that is the case by looking for your model [here](https://huggingface.co/datasets/Intel/ld_requests).*
 
  What causes an evaluation failure?
- - *Most of the failures we get come from problems in the submissions (corrupted files, config problems, wrong parameters selected for eval ...), so we'll be grateful if you first make sure you have followed the steps in `About`. And some quantized models can't be evaluated with some issues (runtime error which requires manual checking (like [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ)...), so you can push your questions to the [Community Pages](https://huggingface.co/spaces/Intel/low_bit_open_llm_leaderboard/discussions). However, from time to time, we have failures on our side (hardware/node failures, problem with an update of our backend, connectivity problem ending up in the results not being saved, ...).*
+ - *Most of the failures we get come from problems in the submissions (corrupted files, config problems, wrong parameters selected for eval, ...), and some quantized models cannot be evaluated because of issues that need manual checking (for example, a runtime error with [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ)). So we'd be grateful if you first make sure you have followed the steps in `About`; if the problem is still not resolved, please post your question on the [Community Pages](https://huggingface.co/spaces/Intel/low_bit_open_llm_leaderboard/discussions). However, from time to time, we have failures on our side (hardware/node failures, problems with an update of our backend, connectivity problems that end up with the results not being saved, ...).*
 
  How can I report an evaluation failure?
  - *As we store the logs for all models, feel free to create an issue, **where you link to the requests file of your model** (look for it [here](https://huggingface.co/datasets/Intel/ld_requests)), so we can investigate! If the model failed due to a problem on our side, we'll relaunch it right away!*
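For reference, a minimal sketch of how you could locate your request file in that dataset; the per-model file naming inside `Intel/ld_requests` is an assumption here, so adjust the filter to what you actually see:

```python
# Sketch: list the files of the requests dataset and keep the ones that
# mention your model name. The per-model file naming is assumed, not
# guaranteed by the leaderboard.
from huggingface_hub import HfApi

def find_request_files(model_id: str, requests_repo: str = "Intel/ld_requests") -> list[str]:
    files = HfApi().list_repo_files(repo_id=requests_repo, repo_type="dataset")
    model_name = model_id.split("/")[-1]
    return [f for f in files if model_name in f and f.endswith(".json")]

if __name__ == "__main__":
    # Hypothetical example; replace with the model you actually submitted.
    print(find_request_files("TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"))
```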
@@ -118,7 +118,7 @@ Why do models appear several times in the leaderboard?
  - *We run evaluations with user selected precision and model commit. Sometimes, users submit specific models at different commits and at different precisions (for example, in float16 and 4bit to see how quantization affects performance). You should be able to verify this by displaying the `precision` and `model sha` columns in the display. If, however, you see models appearing several times with the same precision and hash commit, this is not normal.*
 
  Why are the llama series models marked with *?
- - *We are evaluating llama.cpp models with `lm-eval` [code](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/models/gguf.py) , and we find that some tasks results are abnormal even though we make some modifications. So we mark them and will verify them further.
+ - *We evaluate llama.cpp models with the `lm-eval` GGUF [code](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/models/gguf.py), and we find that the results on some tasks are abnormal even after making some modifications. So we mark these models and will verify them further.*
 
  ---------------------------
 
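As a rough illustration of that setup (not the leaderboard's exact invocation), the GGUF backend talks to a running llama.cpp server, so a local reproduction could look like the sketch below; the server URL, the `gguf` model type, and the task choice are assumptions about your local environment and installed `lm-eval` version:

```python
# Hedged sketch: run lm-evaluation-harness against a llama.cpp model through
# its GGUF backend (lm_eval/models/gguf.py). Assumes a llama.cpp server is
# already serving the Q4_0.gguf weights at base_url, and that the installed
# lm-eval version exposes simple_evaluate and the "gguf" model type.
import json

import lm_eval

results = lm_eval.simple_evaluate(
    model="gguf",
    model_args="base_url=http://localhost:8000",
    tasks=["hellaswag"],   # example task only, not the leaderboard's full suite
    num_fewshot=0,
)
print(json.dumps(results["results"], indent=2, default=str))
```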
 
@@ -160,7 +160,9 @@ Note: make sure your model is public!
  Note: if your model needs `use_remote_code=True`, we do not support this option yet but we are working on adding it, stay posted!
 
  ### 2) Confirm your model weights format!
- Your model weights format should be [safetensors](https://huggingface.co/docs/safetensors/index), and if your model is llama.cpp, it should be end of `Q4_0.gguf` because we only support this currently.
+ Your model weights should be in the [safetensors](https://huggingface.co/docs/safetensors/index) format: it is a newer format for storing weights that is safer and faster to load and use, and it lets us easily compute the number of parameters and the memory size of your model for the Extended Viewer!
+
+ If your model is a llama.cpp model, its weights file should end with `Q4_0.gguf`, because this is the only format we currently support.
 
  ### 3) Confirm your model config!
  If your model is one of the following quantization types: `AutoRound`, `GPTQ`, `AWQ`, `bitsandbytes`, there should be a `quantization_config` in your model config, like [this](https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ/blob/main/config.json#L28).
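A quick pre-submission check along these lines can confirm the repo actually exposes safetensors or `Q4_0.gguf` weights; this is a sketch, not part of the leaderboard code, and the helper name is ours:

```python
# Sketch (not leaderboard code): check that a public Hub repo exposes weights
# in a supported format before submitting it.
from huggingface_hub import HfApi

def check_weights_format(model_id: str) -> str:
    files = HfApi().list_repo_files(repo_id=model_id)
    if any(f.endswith(".safetensors") for f in files):
        return "ok: safetensors weights found"
    if any(f.endswith("Q4_0.gguf") for f in files):
        return "ok: llama.cpp weights found in the supported Q4_0.gguf format"
    if any(f.endswith(".gguf") for f in files):
        return "warning: GGUF weights found, but only Q4_0.gguf is supported"
    return "warning: no safetensors or Q4_0.gguf weights found"

if __name__ == "__main__":
    print(check_weights_format("TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ"))
```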
 
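Similarly, you can verify the `quantization_config` requirement locally before submitting; this is a sketch assuming your config loads with `AutoConfig` and without remote code:

```python
# Sketch: confirm that a quantized model's config.json carries a
# quantization_config entry. Assumes the config loads without remote code.
from transformers import AutoConfig

def has_quantization_config(model_id: str) -> bool:
    config = AutoConfig.from_pretrained(model_id)
    quant = getattr(config, "quantization_config", None)
    if quant is None:
        print(f"{model_id}: no quantization_config found in the model config")
        return False
    print(f"{model_id}: quantization_config = {quant}")
    return True

if __name__ == "__main__":
    has_quantization_config("TheBloke/SOLAR-10.7B-Instruct-v1.0-GPTQ")
```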