GRAG-EMBEDDING-MODELS Collection

These models are trained on avemio/GRAG-EMBEDDING-TRIPLES-HESSIAN-AI with roughly 300k triple samples.
This is a sentence-transformers model trained on the avemio/GRAG-EMBEDDING-TRIPLES-HESSIAN-AI dataset with roughly 300k triple samples. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. After fine-tuning, it was merged with the base model BAAI/bge-m3 to maintain performance on other languages; a sketch of this kind of weight merge follows the architecture summary below.
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
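The merge itself is not documented in detail here, so the following is only a minimal sketch of a plain linear weight average between the base checkpoint and a fine-tuned checkpoint. The 50/50 mixing ratio and the choice of avemio/GRAG-BGE-M3-TRIPLES-HESSIAN-AI as the fine-tuned side are assumptions for illustration:

```python
# Minimal sketch of a linear weight-space merge (assumed 50/50 ratio; the
# actual merge recipe used for this collection is not specified here).
from sentence_transformers import SentenceTransformer

base = SentenceTransformer("BAAI/bge-m3")
finetuned = SentenceTransformer("avemio/GRAG-BGE-M3-TRIPLES-HESSIAN-AI")

base_sd = base.state_dict()
ft_sd = finetuned.state_dict()

merged_sd = {}
for name, param in base_sd.items():
    if name in ft_sd and param.dtype.is_floating_point:
        # Interpolate every floating-point tensor between the two checkpoints.
        merged_sd[name] = 0.5 * param + 0.5 * ft_sd[name]
    else:
        # Keep non-float buffers (e.g. position ids) from the base model.
        merged_sd[name] = param

base.load_state_dict(merged_sd)
base.save("merged-bge-m3")
```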
The first table compares the base model (BGE-M3), the fine-tuned model (GRAG-BGE), and the merged model (Merged-BGE) on the evaluation tasks:

| TASK | BGE-M3 | GRAG-BGE | Merged-BGE | GRAG vs. BGE | Merged vs. BGE |
|---|---|---|---|---|---|
| AmazonCounterfactualClassification | 0.6908 | 0.5449 | 0.7111 | -14.59% | 2.03% |
| AmazonReviewsClassification | 0.4634 | 0.2745 | 0.4571 | -18.89% | -0.63% |
| FalseFriendsGermanEnglish | 0.5343 | 0.4777 | 0.5338 | -5.67% | -0.05% |
| GermanQuAD-Retrieval | 0.9444 | 0.8714 | 0.9311 | -7.30% | -1.33% |
| GermanSTSBenchmark | 0.8079 | 0.7921 | 0.8218 | -1.58% | 1.39% |
| MassiveIntentClassification | 0.6575 | 0.4884 | 0.6522 | -16.90% | -0.52% |
| MassiveScenarioClassification | 0.7355 | 0.5837 | 0.7381 | -15.19% | 0.25% |
| GermanDPR | 0.8265 | 0.7210 | 0.8159 | -10.54% | -1.06% |
| MTOPDomainClassification | 0.9121 | 0.7450 | 0.9139 | -16.71% | 0.17% |
| MTOPIntentClassification | 0.6808 | 0.4516 | 0.6684 | -22.92% | -1.25% |
| PawsXPairClassification | 0.5678 | 0.5077 | 0.5710 | -6.01% | 0.33% |
The second table adds the model that was additionally merged with a Snowflake Arctic embedding model (Merged-Snowflake):

| TASK | BGE-M3 | Merged-BGE | Merged-Snowflake | Merged-BGE vs. BGE | Merged-Snowflake vs. BGE | Merged-Snowflake vs. Merged-BGE |
|---|---|---|---|---|---|---|
| AmazonCounterfactualClassification | 0.6908 | 0.7111 | 0.7152 | 2.94% | 3.53% | 0.58% |
| AmazonReviewsClassification | 0.4634 | 0.4571 | 0.4577 | -1.36% | -1.23% | 0.13% |
| FalseFriendsGermanEnglish | 0.5343 | 0.5338 | 0.5378 | -0.09% | 0.66% | 0.75% |
| GermanQuAD-Retrieval | 0.9444 | 0.9311 | 0.9456 | -1.41% | 0.13% | 1.56% |
| GermanSTSBenchmark | 0.8079 | 0.8218 | 0.8558 | 1.72% | 5.93% | 4.14% |
| MassiveIntentClassification | 0.6575 | 0.6522 | 0.6826 | -0.81% | 3.82% | 4.66% |
| MassiveScenarioClassification | 0.7355 | 0.7381 | 0.7494 | 0.35% | 1.89% | 1.53% |
| GermanDPR | 0.8265 | 0.8159 | 0.8330 | -1.28% | 0.79% | 2.10% |
| MTOPDomainClassification | 0.9121 | 0.9139 | 0.9259 | 0.20% | 1.52% | 1.31% |
| MTOPIntentClassification | 0.6808 | 0.6684 | 0.7143 | -1.82% | 4.91% | 6.87% |
| PawsXPairClassification | 0.5678 | 0.5710 | 0.5803 | 0.56% | 2.18% | 1.63% |
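Note that the delta columns in the two tables are computed differently: the first table reports absolute differences in percentage points, while the second reports changes relative to the BGE-M3 score. Using the AmazonCounterfactualClassification row as a check:

```python
bge, merged = 0.6908, 0.7111  # AmazonCounterfactualClassification scores

# First table: absolute difference, expressed in percentage points.
print(round((merged - bge) * 100, 2))        # 2.03

# Second table: difference relative to the BGE-M3 baseline.
print(round((merged - bge) / bge * 100, 2))  # 2.94
```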
Accuracy is calculated by checking whether the relevant context is the highest-ranked embedding within the whole context array; a minimal sketch of this metric follows the table below. See the Eval-Dataset and Evaluation Code here
| Model Name | Accuracy |
|---|---|
| bge-m3 | 0.8806 |
| UAE-Large-V1 | 0.8393 |
| GRAG-BGE-M3-TRIPLES-HESSIAN-AI | 0.8857 |
| GRAG-BGE-M3-TRIPLES-MERGED-HESSIAN-AI | 0.8866 |
| GRAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI | 0.8866 |
| GRAG-UAE-LARGE-V1-TRIPLES-HESSIAN-AI | 0.8763 |
| GRAG-UAE-LARGE-V1-TRIPLES-MERGED-HESSIAN-AI | 0.8771 |
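The linked evaluation code is authoritative; as a stand-in, here is a minimal sketch of that hit@1-style accuracy, where the sample data and variable names are illustrative assumptions:

```python
# Sketch of the accuracy metric described above: a sample counts as correct
# only if the relevant context gets the highest similarity among all
# candidates. The sample below is made up for illustration.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("avemio/GRAG-BGE-M3-TRIPLES-MERGED-HESSIAN-AI")

samples = [
    # (query, index of the relevant context, candidate contexts)
    ("Wie hoch ist die Zugspitze?",
     0,
     ["Die Zugspitze ist mit 2962 m der höchste Berg Deutschlands.",
      "Der Rhein ist einer der längsten Flüsse Europas.",
      "Berlin ist die Hauptstadt Deutschlands."]),
]

hits = 0
for query, relevant_idx, contexts in samples:
    query_emb = model.encode([query])
    context_embs = model.encode(contexts)
    # Similarity of the query against every candidate context.
    scores = model.similarity(query_emb, context_embs)[0]
    if int(scores.argmax()) == relevant_idx:
        hits += 1

print(f"accuracy: {hits / len(samples):.4f}")
```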
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("avemio/GRAG-BGE-M3-TRIPLES-MERGED-HESSIAN-AI")

# Run inference
sentences = [
    'The weather is lovely today.',
    "It's so sunny outside!",
    'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
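The same two calls are enough for a small semantic-search loop over a corpus; the query and documents below are made-up examples:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("avemio/GRAG-BGE-M3-TRIPLES-MERGED-HESSIAN-AI")

query = "Was ist die Hauptstadt von Deutschland?"
corpus = [
    "Berlin ist die Hauptstadt der Bundesrepublik Deutschland.",
    "Die Donau ist der zweitlängste Fluss Europas.",
    "Der Schwarzwald liegt im Südwesten Deutschlands.",
]

query_emb = model.encode([query])
corpus_embs = model.encode(corpus)

# Rank the corpus by similarity to the query. Embeddings are already
# L2-normalized by the Normalize() module, so cosine similarity and
# dot product coincide here.
scores = model.similarity(query_emb, corpus_embs)[0]
for idx in scores.argsort(descending=True).tolist():
    print(f"{scores[idx].item():.4f}  {corpus[idx]}")
```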
If you use this model, please cite the base model BGE-M3:

```bibtex
@misc{bge-m3,
  title={BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation},
  author={Jianlv Chen and Shitao Xiao and Peitian Zhang and Kun Luo and Defu Lian and Zheng Liu},
  year={2024},
  eprint={2402.03216},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```