That didn't take long! Nomic AI has finetuned the new ModernBERT-base encoder model into a strong embedding model for search, classification, clustering, and more!
Details:
- Based on ModernBERT-base with 149M parameters.
- Outperforms both nomic-embed-text-v1 and nomic-embed-text-v1.5 on MTEB!
- Immediate FA2 and unpadding support for super efficient inference.
- Trained with Matryoshka support, i.e. 2 valid output dimensionalities: 768 and 256.
- Maximum sequence length of 8192 tokens!
- Trained in 2 stages: unsupervised contrastive data -> high quality labeled datasets.
- Integrated in Sentence Transformers, Transformers, LangChain, LlamaIndex, Haystack, etc.
- Apache 2.0 licensed: fully commercially permissible.
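Matryoshka support means the 768-dim embeddings can be cut down to their first 256 dimensions (and re-normalized) with only a small quality loss. A minimal NumPy sketch of that truncation step, using random unit vectors as stand-in embeddings (the model name and encode call are not shown here, so nothing below is specific to Nomic's API):

```python
import numpy as np

def truncate_and_normalize(emb: np.ndarray, dim: int) -> np.ndarray:
    """Matryoshka truncation: keep the first `dim` components, then re-normalize to unit length."""
    truncated = emb[..., :dim]
    return truncated / np.linalg.norm(truncated, axis=-1, keepdims=True)

# Stand-in for two 768-dim embeddings from the model (random unit vectors for illustration)
rng = np.random.default_rng(0)
full = rng.normal(size=(2, 768))
full /= np.linalg.norm(full, axis=-1, keepdims=True)

small = truncate_and_normalize(full, 256)
print(small.shape)  # (2, 256)
# Norms are back to 1.0, so cosine similarity is still a plain dot product
print(np.linalg.norm(small, axis=-1))
```

In practice you'd let the library do this for you; Sentence Transformers, for instance, accepts a `truncate_dim` argument when loading a model, so the 256-dim variant needs no manual post-processing.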