The Russian-focused embedders' exploration: ruMTEB benchmark and Russian embedding model design
Abstract
Embedding models play a crucial role in Natural Language Processing (NLP) by producing the text embeddings used in tasks such as information retrieval and semantic textual similarity assessment. This paper focuses on embedding models for the Russian language. It introduces ru-en-RoSBERTa, a new Russian-focused embedding model, and ruMTEB, a Russian extension of the Massive Text Embedding Benchmark (MTEB). The benchmark spans seven task categories, including semantic textual similarity, text classification, reranking, and retrieval. We also evaluate a representative set of Russian and multilingual models on the proposed benchmark. The results show that the new model achieves performance on par with state-of-the-art models for Russian. We release the ru-en-RoSBERTa model, and ruMTEB comes with open-source code, integration into the original MTEB framework, and a public leaderboard.
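As a rough illustration of how such a model is typically used, the sketch below loads ru-en-RoSBERTa through the standard sentence-transformers interface, compares Russian sentences by cosine similarity, and then evaluates the model on Russian tasks with the open-source `mteb` package. The Hub identifier `ai-forever/ru-en-RoSBERTa`, the language code `"rus"`, and the output folder are assumptions rather than details from the abstract; the model card may additionally require task-specific input prefixes.

```python
# A minimal sketch, not the authors' evaluation code. Assumptions:
# the model is published on the Hugging Face Hub as "ai-forever/ru-en-RoSBERTa"
# and exposes the standard sentence-transformers interface.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ai-forever/ru-en-RoSBERTa")  # assumed Hub ID

sentences = [
    "Москва - столица России.",  # "Moscow is the capital of Russia."
    "Столица России - Москва.",  # "The capital of Russia is Moscow."
    "Сегодня хорошая погода.",   # "The weather is nice today."
]

# Encode with L2 normalization so dot products equal cosine similarities.
embeddings = model.encode(sentences, normalize_embeddings=True)

# The paraphrase pair (rows 0 and 1) should score near 1.0; the unrelated
# sentence (row 2) should score noticeably lower against both.
print(embeddings @ embeddings.T)

# Benchmark evaluation with the `mteb` package (recent-API style); selecting
# tasks by the language code "rus" is an assumption about how the ruMTEB
# tasks are registered in the framework.
import mteb

tasks = mteb.get_tasks(languages=["rus"])
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results/ru-en-RoSBERTa")
```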
Community
Related papers recommended by the Semantic Scholar API:
- mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval (2024)
- NV-Retriever: Improving text embedding models with effective hard-negative mining (2024)
- Improving Text Embeddings for Smaller Language Models Using Contrastive Fine-tuning (2024)
- D2LLM: Decomposed and Distilled Large Language Models for Semantic Search (2024)
- UMBCLU at SemEval-2024 Task 1: Semantic Textual Relatedness with and without machine translation (2024)