SentenceTransformer based on sentence-transformers/stsb-distilbert-base

This is a sentence-transformers model finetuned from sentence-transformers/stsb-distilbert-base on the sentence-transformers/quora-duplicates dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/stsb-distilbert-base
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: sentence-transformers/quora-duplicates
  • Language: en

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
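
The Pooling module above mean-pools DistilBERT's token embeddings (padding excluded) into a single 768-dimensional sentence vector. Below is a minimal sketch of that computation using transformers directly; it assumes this repository's checkpoint can be loaded with AutoModel, as is typical for Sentence Transformers models.

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("tomaarsen/stsb-distilbert-base-mnrl")
model = AutoModel.from_pretrained("tomaarsen/stsb-distilbert-base-mnrl")

sentences = ["Is Cicret a scam?", "Is the Cicret Bracelet a scam?"]
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling: average over the sequence axis, ignoring padding tokens
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(embeddings.shape)  # torch.Size([2, 768])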

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/stsb-distilbert-base-mnrl")
# Run inference
sentences = [
    'Is Cicret a scam?',
    'Is the Cicret Bracelet a scam?',
    'Can you eat only once a day?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
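
The introduction also lists paraphrase mining as a use case. A short sketch with the paraphrase_mining utility from the same library (the example sentences are illustrative):

from sentence_transformers import SentenceTransformer
from sentence_transformers.util import paraphrase_mining

model = SentenceTransformer("tomaarsen/stsb-distilbert-base-mnrl")
sentences = [
    "Is Cicret a scam?",
    "Is the Cicret Bracelet a scam?",
    "Can you eat only once a day?",
    "Is eating only once a day healthy?",
]
# Returns (score, i, j) triples, sorted by decreasing cosine similarity
pairs = paraphrase_mining(model, sentences)
for score, i, j in pairs:
    print(f"{score:.4f} | {sentences[i]} | {sentences[j]}")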

Evaluation

Metrics

Binary Classification

Metric Value
cosine_accuracy 0.816
cosine_accuracy_threshold 0.7867
cosine_f1 0.7286
cosine_f1_threshold 0.7353
cosine_precision 0.6746
cosine_recall 0.7919
cosine_ap 0.7731
dot_accuracy 0.807
dot_accuracy_threshold 150.9795
dot_f1 0.7224
dot_f1_threshold 137.3444
dot_precision 0.6641
dot_recall 0.7919
dot_ap 0.7492
manhattan_accuracy 0.81
manhattan_accuracy_threshold 195.8866
manhattan_f1 0.7246
manhattan_f1_threshold 237.6859
manhattan_precision 0.6293
manhattan_recall 0.854
manhattan_ap 0.7611
euclidean_accuracy 0.81
euclidean_accuracy_threshold 8.7739
euclidean_f1 0.7261
euclidean_f1_threshold 10.8438
euclidean_precision 0.6281
euclidean_recall 0.8602
euclidean_ap 0.7612
max_accuracy 0.816
max_accuracy_threshold 195.8866
max_f1 0.7286
max_f1_threshold 237.6859
max_precision 0.6746
max_recall 0.8602
max_ap 0.7731
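
These numbers come from scoring duplicate/non-duplicate question pairs. As a hedged sketch, metrics of this kind can be produced with the library's BinaryClassificationEvaluator; the pairs and labels below are illustrative stand-ins for the actual dev set.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("tomaarsen/stsb-distilbert-base-mnrl")
evaluator = BinaryClassificationEvaluator(
    sentences1=["Is Cicret a scam?", "What is OnePlus One?"],
    sentences2=["Is the Cicret Bracelet a scam?", "Can you eat only once a day?"],
    labels=[1, 0],  # 1 = duplicate pair, 0 = not a duplicate
    name="quora-duplicates",
)
results = evaluator(model)  # dict of metrics such as accuracy, F1, and AP per similarity function
print(results)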

Paraphrase Mining

Metric Value
average_precision 0.5349
f1 0.5395
precision 0.5175
recall 0.5635
threshold 0.762
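
These paraphrase-mining metrics can likewise be computed with the library's ParaphraseMiningEvaluator, which mines pairs from a pool of sentences and scores them against known duplicates. A minimal sketch with illustrative data:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import ParaphraseMiningEvaluator

model = SentenceTransformer("tomaarsen/stsb-distilbert-base-mnrl")
# id -> sentence pool, plus the id pairs known to be duplicates
sentences_map = {
    "q1": "Is Cicret a scam?",
    "q2": "Is the Cicret Bracelet a scam?",
    "q3": "Can you eat only once a day?",
}
evaluator = ParaphraseMiningEvaluator(sentences_map, duplicates_list=[("q1", "q2")], name="quora-duplicates-dev")
results = evaluator(model)  # includes average_precision, f1, precision, recall, threshold
print(results)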

Information Retrieval

Metric Value
cosine_accuracy@1 0.9646
cosine_accuracy@3 0.9926
cosine_accuracy@5 0.9956
cosine_accuracy@10 0.9986
cosine_precision@1 0.9646
cosine_precision@3 0.4293
cosine_precision@5 0.2754
cosine_precision@10 0.1452
cosine_recall@1 0.8301
cosine_recall@3 0.9609
cosine_recall@5 0.9808
cosine_recall@10 0.9935
cosine_ndcg@10 0.9795
cosine_mrr@10 0.979
cosine_map@100 0.9718
dot_accuracy@1 0.9574
dot_accuracy@3 0.9876
dot_accuracy@5 0.9924
dot_accuracy@10 0.9978
dot_precision@1 0.9574
dot_precision@3 0.4257
dot_precision@5 0.2737
dot_precision@10 0.1447
dot_recall@1 0.8238
dot_recall@3 0.9538
dot_recall@5 0.9764
dot_recall@10 0.9918
dot_ndcg@10 0.9741
dot_mrr@10 0.9731
dot_map@100 0.9646
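
For the retrieval metrics above, the library's InformationRetrievalEvaluator embeds a query set and a corpus, retrieves nearest neighbours, and scores them against relevance judgements. A hedged sketch with toy data:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("tomaarsen/stsb-distilbert-base-mnrl")
queries = {"q1": "Is Cicret a scam?"}
corpus = {"d1": "Is the Cicret Bracelet a scam?", "d2": "Can you eat only once a day?"}
relevant_docs = {"q1": {"d1"}}  # the corpus ids that are relevant to each query
evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="quora-duplicates")
results = evaluator(model)  # accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100
print(results)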

Training Details

Training Dataset

sentence-transformers/quora-duplicates

  • Dataset: sentence-transformers/quora-duplicates at 451a485
  • Size: 100,000 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string, min: 6 tokens, mean: 13.85 tokens, max: 42 tokens
    • positive: string, min: 6 tokens, mean: 13.65 tokens, max: 44 tokens
    • negative: string, min: 4 tokens, mean: 14.76 tokens, max: 64 tokens
  • Samples:
    • anchor: "Why in India do we not have one on one political debate as in USA?"
      positive: "Why cant we have a public debate between politicians in India like the one in US?"
      negative: "Can people on Quora stop India Pakistan debate? We are sick and tired seeing this everyday in bulk?"
    • anchor: "What is OnePlus One?"
      positive: "How is oneplus one?"
      negative: "Why is OnePlus One so good?"
    • anchor: "Does our mind control our emotions?"
      positive: "How do smart and successful people control their emotions?"
      negative: "How can I control my positive emotions for the people whom I love but they don't care about me?"
  • Loss: MultipleNegativesRankingLoss (see the sketch below) with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
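
MultipleNegativesRankingLoss treats every other example in a batch as a negative: for each anchor, the paired positive must outscore all other in-batch candidates, with cosine similarities multiplied by the scale factor before the cross-entropy is applied. A minimal sketch of instantiating it with the parameters above:

from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/stsb-distilbert-base")
# Every other in-batch positive/negative serves as a negative for each anchor;
# similarities are scaled by 20.0 before the softmax cross-entropy.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)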
    

Evaluation Dataset

sentence-transformers/quora-duplicates

  • Dataset: sentence-transformers/quora-duplicates at 451a485
  • Size: 1,000 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string, min: 7 tokens, mean: 13.84 tokens, max: 43 tokens
    • positive: string, min: 6 tokens, mean: 13.8 tokens, max: 38 tokens
    • negative: string, min: 6 tokens, mean: 14.71 tokens, max: 56 tokens
  • Samples:
    • anchor: "Which programming language is best for developing low-end games?"
      positive: "What coding language should I learn first for making games?"
      negative: "I am entering the world of video game programming and want to know what language I should learn? Because there are so many languages I do not know which one to start with. Can you recommend a language that's easy to learn and can be used with many platforms?"
    • anchor: "Was it appropriate for Meryl Streep to use her Golden Globes speech to attack Donald Trump?"
      positive: "Should Meryl Streep be using her position to attack the president?"
      negative: "Why did Kelly Ann Conway say that Meryl Streep incited peoples worst feelings?"
    • anchor: "Where can I found excellent commercial fridges in Sydney?"
      positive: "Where can I found impressive range of commercial fridges in Sydney?"
      negative: "What is the best grocery delivery service in Sydney?"
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
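
As a hedged sketch, these settings map onto SentenceTransformerTrainingArguments as follows (the output_dir is hypothetical). The no_duplicates batch sampler matters for MultipleNegativesRankingLoss, since it keeps duplicate questions from appearing as false in-batch negatives.

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="models/stsb-distilbert-base-mnrl",  # hypothetical output path
    num_train_epochs=1,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate questions as in-batch negatives
)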

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: False
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: None
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch   Step  Training Loss  Validation Loss  cosine_map@100  quora-duplicates-dev_average_precision  quora-duplicates_max_ap
0       0     -              -                0.9245          0.4200                                  0.6890
0.0640  100   0.2535         -                -               -                                       -
0.1280  200   0.1732         -                -               -                                       -
0.1599  250   -              0.1021           0.9601          0.5033                                  0.7342
0.1919  300   0.1465         -                -               -                                       -
0.2559  400   0.1186         -                -               -                                       -
0.3199  500   0.1159         0.0773           0.9653          0.5247                                  0.7453
0.3839  600   0.1088         -                -               -                                       -
0.4479  700   0.0993         -                -               -                                       -
0.4798  750   -              0.0665           0.9666          0.5264                                  0.7655
0.5118  800   0.0952         -                -               -                                       -
0.5758  900   0.0799         -                -               -                                       -
0.6398  1000  0.0855         0.0570           0.9709          0.5391                                  0.7717
0.7038  1100  0.0804         -                -               -                                       -
0.7678  1200  0.0730         -                -               -                                       -
0.7997  1250  -              0.0513           0.9719          0.5329                                  0.7662
0.8317  1300  0.0741         -                -               -                                       -
0.8957  1400  0.0699         -                -               -                                       -
0.9597  1500  0.0755         0.0476           0.9718          0.5349                                  0.7731

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed: 0.039 kWh
  • Carbon Emitted: 0.015 kg of CO2
  • Hours Used: 0.169 hours
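
A minimal sketch of how such measurements are taken with CodeCarbon's EmissionsTracker (the training call itself is elided):

from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... run the training loop here ...
emissions_kg = tracker.stop()  # estimated kg of CO2 emitted
print(emissions_kg)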

Training Hardware

  • On Cloud: No
  • GPU Model: 1 x NVIDIA GeForce RTX 3090
  • CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
  • RAM Size: 31.78 GB

Framework Versions

  • Python: 3.11.6
  • Sentence Transformers: 3.0.0.dev0
  • Transformers: 4.41.0.dev0
  • PyTorch: 2.3.0+cu121
  • Accelerate: 0.26.1
  • Datasets: 2.18.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}