Can I get the error log for my failed model?

#27
by heavytail - opened

Thanks for the great leaderboard.

My model followed the guideline properly,
and I tested it with a local lm-evaluation-harness environment using the weights downloaded from huggingface_hub.
It runs fine locally, but it keeps failing on the leaderboard.

Could you please share the error log for my model?
I apologize for the inconvenience.

MODEL NAME: heavytail/kullm-polyglot-12.8b-S

upstage org

Hello,

Apologies for the delayed response.
I will forward your request to our PIC.

Regards

Hello,

Here is the error log:

ValueError: The current architecture does not support Flash Attention 2.0. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new

I will modify the evaluation so that models that do not support Flash Attention are evaluated without it.
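For anyone hitting the same error locally, one way to handle it is to try loading with Flash Attention 2 and fall back to the default attention implementation when the architecture is unsupported. This is only a sketch of the fallback idea, not the leaderboard's actual code; `load_with_fallback` is a hypothetical helper, while `attn_implementation` is the real keyword accepted by `from_pretrained` in recent transformers versions.

```python
def load_with_fallback(load_fn, model_name):
    """Try Flash Attention 2 first; fall back to the default ("eager")
    attention if the architecture does not support it."""
    try:
        return load_fn(model_name, attn_implementation="flash_attention_2")
    except ValueError:
        # transformers raises ValueError for unsupported architectures:
        # "The current architecture does not support Flash Attention 2.0."
        return load_fn(model_name, attn_implementation="eager")
```

With transformers installed, this could be used as `load_with_fallback(AutoModelForCausalLM.from_pretrained, "heavytail/kullm-polyglot-12.8b-S")`.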

Thanks

@choco9966 Thank you so much for your kind assistance! That fully answers my question.

heavytail changed discussion status to closed