Model failed eval - why?

#53
by CorticalStack - opened

Hi,

I submitted my model for eval, and it looks like it failed. For the first submission I used the bfloat16 setting:
https://huggingface.co/datasets/open-llm-leaderboard/requests/blob/main/CorticalStack/OpenHermes-Mistral-7B-GPTQ_eval_request_False_bfloat16_Original.json

I thought this was the issue, so I resubmitted with float16:
https://huggingface.co/datasets/open-llm-leaderboard/requests/blob/main/CorticalStack/OpenHermes-Mistral-7B-GPTQ_eval_request_False_float16_Original.json

Still failed. I have no idea how to root-cause the problem or find further logging detail. Can anyone help, please?
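
(For anyone else hitting this: the request files linked above can be downloaded and inspected directly. A minimal sketch, assuming huggingface_hub is installed and that the request JSON carries status/precision fields:)

import json
from huggingface_hub import hf_hub_download

# Pull down the float16 request file linked above from the requests dataset.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/requests",
    repo_type="dataset",
    filename="CorticalStack/OpenHermes-Mistral-7B-GPTQ_eval_request_False_float16_Original.json",
)
with open(path) as f:
    request = json.load(f)

# "status" and "precision" are assumed field names based on the request format.
print(request.get("status"), request.get("precision"))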

Going forward, can these submission runs be cleared so I can submit for eval again once the problem is known?

Many thanks!

@clefourrier any ideas from the logs available to HF staff but not to me?

Open LLM Leaderboard Archive org

Hi! Please open an issue on the Open LLM Leaderboard next time; we very rarely check the discussions in these repos.

Thanks for pointing us to the request files!
There seems to be a dtype problem in your model; it fails with the error:

File "...lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: expected mat1 and mat2 to have the same dtype, but got: float != c10::BFloat16
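
That error is a plain dtype mismatch inside a linear layer: the activations reach F.linear as float32 while the weights are bfloat16. A minimal sketch that reproduces it in isolation:

import torch
import torch.nn.functional as F

x = torch.randn(2, 4)                        # float32 activations
w = torch.randn(3, 4, dtype=torch.bfloat16)  # bfloat16 weights
F.linear(x, w)  # RuntimeError: expected mat1 and mat2 to have the same dtype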

Are you sure your quantization is correct?
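
One way to check locally is to load the checkpoint the way the harness would and inspect which dtypes actually end up in the model. A minimal sketch, assuming a transformers install with optimum and auto-gptq that can load the GPTQ checkpoint:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CorticalStack/OpenHermes-Mistral-7B-GPTQ"
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# A GPTQ checkpoint will contain packed integer weights as well; the point
# is that all floating-point tensors should share a single dtype.
print({p.dtype for p in model.parameters()})

# One forward pass should reproduce the RuntimeError if dtypes are mixed.
tok = AutoTokenizer.from_pretrained(model_id)
inputs = tok("Hello", return_tensors="pt").to(model.device)
with torch.no_grad():
    model(**inputs)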

clefourrier changed discussion status to closed
