New GTE model, Apply for refreshing the results

#126
by thenlper - opened

We submitted a new model, Alibaba-NLP/gte-Qwen2-7B-instruct, trained on top of the latest open-source Qwen2-7B LLM. Could you please refresh the space?

Thanks!

@tomaarsen @Muennighoff

Massive Text Embedding Benchmark org

Updated; congrats - very impressive 🙌🙌🙌

Massive Text Embedding Benchmark org

Big congratulations! I quite enjoy the Qwen2 line of models, and their power is once again visible here in the large jump in performance without any change in training strategy between 1.5 and 2: 67.34 -> 70.24 for English and 69.56 -> 72.05 for Chinese (with similarly large jumps for each of the tasks, e.g. Retrieval).
I also quite enjoy that I can use it out of the box with Sentence Transformers/LangChain/LlamaIndex etc., well done!

  • Tom Aarsen


Hi, the embedding dimension of gte-Qwen2-7B-instruct is 3584, but it is shown as 4096 on MTEB/C-MTEB. Where can I modify it?

Massive Text Embedding Benchmark org

It is fetched from here: https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct/blob/main/1_Pooling/config.json#L2
If you fix it there, it will be fixed on the leaderboard as well.
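For reference, the field at that link is `word_embedding_dimension` in the model's `1_Pooling/config.json`. A corrected (and trimmed) version would look roughly like this; the `pooling_mode_lasttoken` flag shown alongside it is an assumption based on the standard Sentence Transformers pooling config layout, not taken from the thread:

```json
{
    "word_embedding_dimension": 3584,
    "pooling_mode_lasttoken": true
}
```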
