SeanLee97 and KennethEnevoldsen committed
Commit
c12f40f
1 Parent(s): 7e639ca

minor fix (#7)


- minor fix (26d43d3669b23f8287ab79ff8a5a26f6d9dfc571)


Co-authored-by: Kenneth C. Enevoldsen <KennethEnevoldsen@users.noreply.huggingface.co>

Files changed (1): README.md (+1 −1)
README.md CHANGED

@@ -2618,7 +2618,7 @@ library_name: sentence-transformers
 
  This is our [2DMSE](https://arxiv.org/abs/2402.14776) sentence embedding model. It supports the adaptive transformer layer and embedding size. Find out more in our [blog post](https://mixedbread.ai/blog/mxbai-embed-2d-large-v1).
 
- TLDR: TLDR: 2D-🪆 allows you to shrink the model and the embeddings layer. Shrinking only the embeddings model yields competetive results to other models like [nomics embeddings model](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5). Shrinking the model to ~50% maintains upto 85% of the performance without further training.
+ TLDR: 2D-🪆 allows you to shrink the model and the embeddings layer. Shrinking only the embeddings model yields competetive results to other models like [nomics embeddings model](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5). Shrinking the model to ~50% maintains upto 85% of the performance without further training.
 
  ## Quickstart
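The embedding-shrinking step the README text describes (truncating a Matryoshka-style embedding to a smaller dimension, then re-normalizing) can be sketched independently of the model itself. The sketch below uses NumPy with dummy vectors standing in for the model's full-size output; the function name `shrink_embeddings` and the 1024/512 dimensions are illustrative assumptions, not the model's API.

```python
import numpy as np

def shrink_embeddings(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Truncate Matryoshka-style embeddings to the first `dim` components
    and L2-normalize the result so cosine similarity stays well-defined."""
    truncated = embeddings[:, :dim]
    norms = np.linalg.norm(truncated, axis=1, keepdims=True)
    return truncated / norms

# Dummy full-size embeddings standing in for real model output.
full = np.random.default_rng(0).normal(size=(2, 1024))
small = shrink_embeddings(full, 512)
print(small.shape)        # (2, 512)
print(np.linalg.norm(small, axis=1))  # each row has unit norm
```

Because Matryoshka-trained models front-load the most informative components, this truncation loses relatively little retrieval quality compared with slicing a conventionally trained embedding.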