SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
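
These properties can be verified on the loaded model; a quick sketch (the expected outputs in the comments follow from the list above):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("enochlev/xlm-similarity")
print(model.max_seq_length)                      # 256
print(model.get_sentence_embedding_dimension())  # 384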

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
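
For reference, here is a minimal sketch of what the three modules compute, written directly against 🤗 Transformers (an illustration assuming the all-MiniLM-L6-v2 tokenizer and encoder; the fine-tuned checkpoint works the same way):

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
encoder = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

# (0) Transformer: tokenize (up to 256 tokens) and encode
batch = tokenizer(["An example sentence."], padding=True, truncation=True,
                  max_length=256, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 384)

# (1) Pooling: mean over non-padding tokens
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# (2) Normalize: unit length, so dot product equals cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)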

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("enochlev/xlm-similarity")
# Run inference
sentences = [
    'Okay, and can you provide me your full name please.',
    'You. Okay, so for this one, how do you how do you normally use your mobile data.',
    'You. Okay, so for this one, how do you how do you normally use your mobile data.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
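
Because the embeddings are unit-normalized by the Normalize module, model.similarity returns cosine similarities that can be used directly for ranking. A small hypothetical follow-up for semantic search over the sentences above:

# Rank the encoded sentences against a new query (hypothetical example)
query_embedding = model.encode(["May I have your full name?"])
scores = model.similarity(query_embedding, embeddings)  # shape [1, 3]
best = scores.argmax().item()
print(sentences[best], scores[0, best].item())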

Evaluation

Metrics

Semantic Similarity

Metric              Value
pearson_cosine      0.5177
spearman_cosine     0.2604
pearson_manhattan   0.5608
spearman_manhattan  0.2596
pearson_euclidean   0.5641
spearman_euclidean  0.2604
pearson_dot         0.5177
spearman_dot        0.2604
pearson_max         0.5641
spearman_max        0.2604

Semantic Similarity

Metric              Value
pearson_cosine      0.4586
spearman_cosine     0.2473
pearson_manhattan   0.5059
spearman_manhattan  0.2467
pearson_euclidean   0.5061
spearman_euclidean  0.2473
pearson_dot         0.4586
spearman_dot        0.2473
pearson_max         0.5061
spearman_max        0.2473
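
These metric names match the library's EmbeddingSimilarityEvaluator, which correlates gold labels with cosine, Manhattan, Euclidean, and dot-product similarities. A hedged sketch of how such numbers are produced; the pairs and scores below are hypothetical placeholders, since the evaluation set is unnamed:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("enochlev/xlm-similarity")
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["Okay, and can you provide me your full name please.",
                "Hello, welcome to O2."],
    sentences2=["May I have your full name?",
                "How do you normally use your mobile data?"],
    scores=[1.0, 0.2],  # gold similarity labels
    name="sts-dev",
)
print(evaluator(model))  # {'sts-dev_pearson_cosine': ..., 'sts-dev_spearman_max': ..., ...}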

Training Details

Training Dataset

Unnamed Dataset

  • Size: 7,960 training samples
  • Columns: text1, text2, and label
  • Approximate statistics based on the first 1000 samples:
            text1              text2               label
    type    string             string              float
    details min: 5 tokens      min: 13 tokens      min: 0.2
            mean: 21.6 tokens  mean: 28.35 tokens  mean: 0.22
            max: 66 tokens     max: 71 tokens      max: 1.0
  • Samples:
    text1: Hello, welcome to O2. My name is __ How can I help you today?
    text2: Thank you for calling over to my name is how can I help you.
    label: 1.0

    text1: Hello, welcome to O2. My name is __ How can I help you today?
    text2: So, I'd look into our accessory so for the airbags the one that we have an ongoing promotion right now for the accessories is the airport second generation. So you can. And either by there's like a great if you want to or I can also make it as an instalment for you. If you want to.
    label: 0.2

    text1: Hello, welcome to O2. My name is __ How can I help you today?
    text2: So on that's something that you can they get that the shop and it's at a renewal for our insurance. So just in case like once you get back to the UK and you don't want to have the insurance anymore. You can possibly remove that. That and the full garbage insurance.
    label: 0.2
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
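
    A minimal sketch of wiring this dataset format and loss together for fine-tuning (an assumed setup; the single row is a placeholder shaped like the samples above):

    from datasets import Dataset
    from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
    from sentence_transformers.losses import CoSENTLoss

    model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
    train_dataset = Dataset.from_dict({
        "text1": ["Hello, welcome to O2. My name is __ How can I help you today?"],
        "text2": ["Thank you for calling over to my name is how can I help you."],
        "label": [1.0],
    })
    loss = CoSENTLoss(model, scale=20.0)  # same parameters as listed above
    trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
    trainer.train()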
    

Evaluation Dataset

Unnamed Dataset

  • Size: 1,980 evaluation samples
  • Columns: text1, text2, and label
  • Approximate statistics based on the first 1000 samples:
            text1               text2               label
    type    string              string              float
    details min: 7 tokens       min: 13 tokens      min: 0.2
            mean: 39.04 tokens  mean: 28.35 tokens  mean: 0.22
            max: 256 tokens     max: 71 tokens      max: 1.0
  • Samples:
    text1: Right perfect. Thank you for passenger security cyber. Now let me go ahead. Then I look for your option to do an upgrade. So you had mentioned that you're wanting to get an upgrade. Can you tell me is it for a devise or a single plan.
    text2: Are you planning to get a new sim only plan or a new phone?
    label: 1.0

    text1: Right perfect. Thank you for passenger security cyber. Now let me go ahead. Then I look for your option to do an upgrade. So you had mentioned that you're wanting to get an upgrade. Can you tell me is it for a devise or a single plan.
    text2: So, I'd look into our accessory so for the airbags the one that we have an ongoing promotion right now for the accessories is the airport second generation. So you can. And either by there's like a great if you want to or I can also make it as an instalment for you. If you want to.
    label: 0.2

    text1: Right perfect. Thank you for passenger security cyber. Now let me go ahead. Then I look for your option to do an upgrade. So you had mentioned that you're wanting to get an upgrade. Can you tell me is it for a devise or a single plan.
    text2: So on that's something that you can they get that the shop and it's at a renewal for our insurance. So just in case like once you get back to the UK and you don't want to have the insurance anymore. You can possibly remove that. That and the full garbage insurance.
    label: 0.2
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 256
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • batch_sampler: no_duplicates
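
Expressed in code, these settings correspond to (a sketch; output_dir is a placeholder):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    eval_strategy="epoch",
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    num_train_epochs=1,
    warmup_ratio=0.1,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)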

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 256
  • per_device_eval_batch_size: 256
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch   Step   Validation Loss   sts_dev_spearman_max
1.0     32     0.6357            0.2473
4.0     128    0.4041            0.2604

Framework Versions

  • Python: 3.11.9
  • Sentence Transformers: 3.2.1
  • Transformers: 4.45.2
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.1.1
  • Datasets: 3.1.0
  • Tokenizers: 0.20.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CoSENTLoss

@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}