metadata
base_model: intfloat/multilingual-e5-small
datasets: []
language: []
library_name: sentence-transformers
metrics:
  - cosine_accuracy
  - dot_accuracy
  - manhattan_accuracy
  - euclidean_accuracy
  - max_accuracy
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:546
  - loss:TripletLoss
widget:
  - source_sentence: How to cook a turkey?
    sentences:
      - How to make a turkey sandwich?
      - World's biggest desert by area
      - Steps to roast a turkey
  - source_sentence: What is the best way to learn a new language?
    sentences:
      - Author of the play 'Hamlet'
      - What is the fastest way to travel?
      - How can I effectively learn a new language?
  - source_sentence: Who wrote 'To Kill a Mockingbird'?
    sentences:
      - Who wrote 'The Great Gatsby'?
      - How can I effectively save money?
      - Author of 'To Kill a Mockingbird'
  - source_sentence: Who was the first person to climb Mount Everest?
    sentences:
      - Steps to visit the Great Wall of China
      - Who was the first person to climb K2?
      - First climber to reach the summit of Everest
  - source_sentence: What is the capital city of Canada?
    sentences:
      - First circumnavigator of the globe
      - What is the capital of Canada?
      - What is the capital city of Australia?
model-index:
  - name: SentenceTransformer based on intfloat/multilingual-e5-small
    results:
      - task:
          type: triplet
          name: Triplet
        dataset:
          name: triplet validation
          type: triplet-validation
        metrics:
          - type: cosine_accuracy
            value: 0.9672131147540983
            name: Cosine Accuracy
          - type: dot_accuracy
            value: 0.03278688524590164
            name: Dot Accuracy
          - type: manhattan_accuracy
            value: 0.9672131147540983
            name: Manhattan Accuracy
          - type: euclidean_accuracy
            value: 0.9672131147540983
            name: Euclidean Accuracy
          - type: max_accuracy
            value: 0.9672131147540983
            name: Max Accuracy

SentenceTransformer based on intfloat/multilingual-e5-small

This is a sentence-transformers model finetuned from intfloat/multilingual-e5-small. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: intfloat/multilingual-e5-small
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: sentence-transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
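
For illustration, the three modules above can be mirrored with plain transformers: the Transformer produces token embeddings, the Pooling module mean-averages them over non-padding tokens, and Normalize scales each vector to unit length. This is only a sketch of the pipeline, not the supported path; for real use, prefer the SentenceTransformer API shown under Usage.

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("srikarvar/multilingual-e5-small-triplet-final-1")
bert = AutoModel.from_pretrained("srikarvar/multilingual-e5-small-triplet-final-1")

batch = tokenizer(
    ["What is the capital city of Canada?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)

with torch.no_grad():
    token_embeddings = bert(**batch).last_hidden_state  # (0) Transformer

# (1) Pooling: mean over non-padding tokens only
mask = batch["attention_mask"].unsqueeze(-1).float()
pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# (2) Normalize: unit-length vectors, so dot product equals cosine similarity
embeddings = F.normalize(pooled, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 384])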

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("srikarvar/multilingual-e5-small-triplet-final-1")
# Run inference
sentences = [
    'What is the capital city of Canada?',
    'What is the capital of Canada?',
    'What is the capital city of Australia?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
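
The same embeddings also cover the semantic-search use case mentioned above. A minimal sketch with util.semantic_search, using a made-up three-sentence corpus drawn from the widget examples:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("srikarvar/multilingual-e5-small-triplet-final-1")

# Tiny illustrative corpus; any list of strings works
corpus = [
    "Steps to roast a turkey",
    "How can I effectively learn a new language?",
    "First climber to reach the summit of Everest",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode("How to cook a turkey?", convert_to_tensor=True)

# For each query, returns the top_k corpus entries ranked by cosine similarity
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(corpus[hit["corpus_id"]], round(hit["score"], 4))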

Evaluation

Metrics

Triplet

| Metric             | Value  |
|:-------------------|:-------|
| cosine_accuracy    | 0.9672 |
| dot_accuracy       | 0.0328 |
| manhattan_accuracy | 0.9672 |
| euclidean_accuracy | 0.9672 |
| max_accuracy       | 0.9672 |
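
These metrics come from a TripletEvaluator run on the 61-sample validation split: an accuracy of 0.9672 means that for roughly 97% of triplets the anchor embedding is closer to its positive than to its negative under the given distance. (In this sentence-transformers version the evaluator treats the raw dot product as a distance, so dot_accuracy is effectively inverted and lower is better there.) A sketch of setting up such an evaluator, with triplets taken from the evaluation samples below:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("srikarvar/multilingual-e5-small-triplet-final-1")

# Illustrative (anchor, positive, negative) triplets from the evaluation samples
evaluator = TripletEvaluator(
    anchors=["How to create a podcast?", "What is the population of India?"],
    positives=["Steps to start a podcast", "How many people live in India?"],
    negatives=["How to create a vlog?", "What is the population of China?"],
    name="triplet-validation",
)
results = evaluator(model)
print(results)  # accuracies keyed by distance function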

Training Details

Training Dataset

Unnamed Dataset

  • Size: 546 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:

    | Column   | Type   | Min      | Mean         | Max       |
    |:---------|:-------|:---------|:-------------|:----------|
    | anchor   | string | 6 tokens | 10.78 tokens | 22 tokens |
    | positive | string | 4 tokens | 9.52 tokens  | 19 tokens |
    | negative | string | 6 tokens | 10.75 tokens | 22 tokens |

  • Samples:

    | anchor                                  | positive                         | negative                                  |
    |:----------------------------------------|:---------------------------------|:------------------------------------------|
    | What is the capital of Brazil?          | Capital city of Brazil           | What is the capital of Argentina?         |
    | How do I install Python on my computer? | How do I set up Python on my PC? | How do I uninstall Python on my computer? |
    | How do I apply for a credit card?       | How do I get a credit card?      | How do I cancel a credit card?            |
  • Loss: TripletLoss with these parameters (see the construction sketch after this list):
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 0.7
    }
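
For reference, a sketch of constructing this loss: it penalizes triplets where the anchor-positive Euclidean distance is not at least 0.7 smaller than the anchor-negative distance.

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

model = SentenceTransformer("intfloat/multilingual-e5-small")

# loss = max(d(anchor, positive) - d(anchor, negative) + 0.7, 0)
# with d = Euclidean distance, matching the parameters above
loss = TripletLoss(
    model,
    distance_metric=TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=0.7,
)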
    

Evaluation Dataset

Unnamed Dataset

  • Size: 61 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:

    | Column   | Type   | Min      | Mean         | Max       |
    |:---------|:-------|:---------|:-------------|:----------|
    | anchor   | string | 7 tokens | 10.66 tokens | 16 tokens |
    | positive | string | 5 tokens | 9.43 tokens  | 14 tokens |
    | negative | string | 6 tokens | 10.54 tokens | 17 tokens |

  • Samples:

    | anchor                                | positive                                    | negative                                |
    |:--------------------------------------|:--------------------------------------------|:----------------------------------------|
    | How to create a podcast?              | Steps to start a podcast                    | How to create a vlog?                   |
    | How many states are there in the USA? | Total number of states in the United States | How many provinces are there in Canada? |
    | What is the population of India?      | How many people live in India?              | What is the population of China?        |
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 0.7
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 2
  • learning_rate: 5e-06
  • weight_decay: 0.01
  • num_train_epochs: 20
  • lr_scheduler_type: cosine
  • warmup_steps: 50
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
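
A sketch of how these values map onto the v3 training API; output_dir, save_strategy, and the inline datasets are illustrative assumptions, not taken from this card:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

model = SentenceTransformer("intfloat/multilingual-e5-small")

# Illustrative rows; the actual run used 546 train / 61 eval triplets
train_dataset = Dataset.from_dict({
    "anchor": ["What is the capital of Brazil?"],
    "positive": ["Capital city of Brazil"],
    "negative": ["What is the capital of Argentina?"],
})
eval_dataset = Dataset.from_dict({
    "anchor": ["How to create a podcast?"],
    "positive": ["Steps to start a podcast"],
    "negative": ["How to create a vlog?"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="multilingual-e5-small-triplet",  # hypothetical path
    eval_strategy="epoch",
    save_strategy="epoch",  # must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,
    learning_rate=5e-6,
    weight_decay=0.01,
    num_train_epochs=20,
    lr_scheduler_type="cosine",
    warmup_steps=50,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=TripletLoss(model, distance_metric=TripletDistanceMetric.EUCLIDEAN,
                     triplet_margin=0.7),
)
trainer.train()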

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 2
  • eval_accumulation_steps: None
  • learning_rate: 5e-06
  • weight_decay: 0.01
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 20
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 50
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch       | Step    | Training Loss | Validation Loss | triplet-validation_max_accuracy |
|:------------|:--------|:--------------|:----------------|:--------------------------------|
| 0.5714      | 10      | 0.6735        | -               | -                               |
| 0.9714      | 17      | -             | 0.6198          | -                               |
| 1.1429      | 20      | 0.6596        | -               | -                               |
| 1.7143      | 30      | 0.6357        | -               | -                               |
| 2.0         | 35      | -             | 0.5494          | -                               |
| 2.2857      | 40      | 0.596         | -               | -                               |
| 2.8571      | 50      | 0.5587        | -               | -                               |
| 2.9714      | 52      | -             | 0.4479          | -                               |
| 3.4286      | 60      | 0.5265        | -               | -                               |
| 4.0         | 70      | 0.4703        | 0.3363          | -                               |
| 4.5714      | 80      | 0.4269        | -               | -                               |
| 4.9714      | 87      | -             | 0.2414          | -                               |
| 5.1429      | 90      | 0.3725        | -               | -                               |
| 5.7143      | 100     | 0.3438        | -               | -                               |
| 6.0         | 105     | -             | 0.1711          | -                               |
| 6.2857      | 110     | 0.3058        | -               | -                               |
| 6.8571      | 120     | 0.2478        | -               | -                               |
| 6.9714      | 122     | -             | 0.1365          | -                               |
| 7.4286      | 130     | 0.2147        | -               | -                               |
| 8.0         | 140     | 0.1971        | 0.1224          | -                               |
| 8.5714      | 150     | 0.1946        | -               | -                               |
| 8.9714      | 157     | -             | 0.1111          | -                               |
| 9.1429      | 160     | 0.1516        | -               | -                               |
| 9.7143      | 170     | 0.1663        | -               | -                               |
| 10.0        | 175     | -             | 0.1049          | -                               |
| 10.2857     | 180     | 0.1534        | -               | -                               |
| 10.8571     | 190     | 0.1684        | -               | -                               |
| 10.9714     | 192     | -             | 0.1027          | -                               |
| 11.4286     | 200     | 0.1422        | -               | -                               |
| 12.0        | 210     | 0.1354        | 0.1007          | -                               |
| 12.5714     | 220     | 0.1407        | -               | -                               |
| 12.9714     | 227     | -             | 0.0990          | -                               |
| 13.1429     | 230     | 0.154         | -               | -                               |
| 13.7143     | 240     | 0.1359        | -               | -                               |
| 14.0        | 245     | -             | 0.0975          | -                               |
| 14.2857     | 250     | 0.1397        | -               | -                               |
| 14.8571     | 260     | 0.1389        | -               | -                               |
| 14.9714     | 262     | -             | 0.0969          | -                               |
| 15.4286     | 270     | 0.15          | -               | -                               |
| 16.0        | 280     | 0.1273        | 0.0966          | -                               |
| 16.5714     | 290     | 0.1318        | -               | -                               |
| 16.9714     | 297     | -             | 0.0966          | -                               |
| 17.1429     | 300     | 0.1276        | -               | -                               |
| 17.7143     | 310     | 0.1381        | -               | -                               |
| 18.0        | 315     | -             | 0.0966          | -                               |
| 18.2857     | 320     | 0.1284        | -               | -                               |
| 18.8571     | 330     | 0.1394        | -               | -                               |
| 18.9714     | 332     | -             | 0.0965          | -                               |
| **19.4286** | **340** | **0.1407**    | **0.0965**      | **0.9672**                      |

  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.32.1
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification}, 
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}