# jamesgpt1/sf_model_e5
This is a sentence-transformers model: it maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)

Using this model becomes easy when you have sentence-transformers installed:

```bash
pip install -U sentence-transformers
```
Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('jamesgpt1/sf_model_e5')
embeddings = model.encode(sentences)
print(embeddings)
```
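Because the model ends in a Normalize module (see the architecture below), its embeddings are unit-length, so cosine similarity reduces to a dot product. As a minimal semantic-search sketch (the corpus and query strings here are made up for illustration):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jamesgpt1/sf_model_e5')

# Hypothetical corpus and query, purely for illustration.
corpus = [
    "A man is eating food.",
    "A monkey is playing drums.",
    "Someone is riding a horse.",
]
query = "What instrument is being played?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Embeddings are L2-normalized by the model, so cosine similarity
# and dot product produce the same ranking.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(corpus[hit['corpus_id']], hit['score'])
```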
## Evaluation Results

For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net
## Training

The model was trained with the following parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 1196 with parameters:

```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:

```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the `fit()` method:

```
{
    "epochs": 5,
    "evaluation_steps": 50,
    "evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 598,
    "weight_decay": 0.01
}
```
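Putting these pieces together, the training setup was roughly the following. This is a hedged reconstruction: the `train_examples` pairs shown here are invented placeholders, and the `InformationRetrievalEvaluator` (which needs queries, a corpus, and relevance judgments not listed on this card) is omitted; only the hyperparameters come from the card.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('jamesgpt1/sf_model_e5')

# Hypothetical (query, positive passage) pairs; the actual training
# data is not listed on this card.
train_examples = [
    InputExample(texts=["capital of France", "Paris is the capital of France."]),
    InputExample(texts=["how do transformers work", "Transformers rely on self-attention."]),
]
# shuffle=False yields the SequentialSampler reported above; batch_size 10 per the card.
train_dataloader = DataLoader(train_examples, batch_size=10, shuffle=False)

# In-batch positives of other examples act as negatives;
# the card reports scale=20.0 with cosine similarity.
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=5,
    warmup_steps=598,
    scheduler='WarmupLinear',
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```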
## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Normalize()
)
```
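The stack is a BERT encoder with CLS-token pooling followed by L2 normalization. As a sketch of how an equivalent stack is assembled from `sentence_transformers.models` (the `bert-large-uncased` checkpoint is only a placeholder with a matching 1024-dimensional hidden size; the card does not say which BERT weights the model started from):

```python
from sentence_transformers import SentenceTransformer, models

# Placeholder base checkpoint; chosen only because its hidden size is 1024.
word_embedding_model = models.Transformer(
    'bert-large-uncased', max_seq_length=512, do_lower_case=True
)
# CLS-token pooling, matching the Pooling config above.
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 1024
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
)
# Final L2 normalization, so downstream cosine similarity equals dot product.
normalize = models.Normalize()

model = SentenceTransformer(modules=[word_embedding_model, pooling_model, normalize])
```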
## Citing & Authors
## MTEB Evaluation Results (self-reported, test sets)

| Task | Metric | Value |
|---|---|---|
| AmazonCounterfactualClassification (en) | accuracy | 70.851 |
| AmazonCounterfactualClassification (en) | ap | 33.779 |
| AmazonCounterfactualClassification (en) | f1 | 64.970 |
| AmazonPolarityClassification | accuracy | 91.809 |
| AmazonPolarityClassification | ap | 88.230 |
| AmazonPolarityClassification | f1 | 91.786 |
| AmazonReviewsClassification (en) | accuracy | 48.942 |
| AmazonReviewsClassification (en) | f1 | 47.912 |
| ArguAna | map_at_1 | 39.616 |
| ArguAna | map_at_10 | 55.938 |
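These numbers come from the MTEB benchmark. A minimal sketch of reproducing one of them with the open-source `mteb` package (`pip install mteb`; the output folder name is arbitrary):

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('jamesgpt1/sf_model_e5')

# Run a single MTEB task from the table above; results are written as JSON.
evaluation = MTEB(tasks=["ArguAna"])
evaluation.run(model, output_folder="results/sf_model_e5")
```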