---
benchmark: mteb
type: evaluation
submission_name: MTEB
---

Previously, it was possible to submit model results to MTEB by adding them to the model metadata. This is no longer an option, as we want to ensure high-quality metadata.

This repository contains the results of the embedding benchmarks evaluated using the mteb package.

## Reference

- 🦾 Leaderboard: an up-to-date leaderboard of embedding models
- 📚 mteb: guides and instructions on how to use mteb, including running evaluations, submitting scores, etc.
- 🙋 Questions: questions about the results
- 🙋 Issues: issues or bugs you have found